00:00:00.000 Started by upstream project "autotest-per-patch" build number 124167
00:00:00.000 originally caused by:
00:00:00.001 Started by user sys_sgci
00:00:00.115 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy
00:00:00.116 The recommended git tool is: git
00:00:00.116 using credential 00000000-0000-0000-0000-000000000002
00:00:00.118 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.157 Fetching changes from the remote Git repository
00:00:00.158 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.190 Using shallow fetch with depth 1
00:00:00.190 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.190 > git --version # timeout=10
00:00:00.226 > git --version # 'git version 2.39.2'
00:00:00.226 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.246 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.246 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:04.801 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:04.812 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:04.825 Checking out Revision 9bbc799d7020f50509d938dbe97dc05da0c1b5c3 (FETCH_HEAD)
00:00:04.825 > git config core.sparsecheckout # timeout=10
00:00:04.835 > git read-tree -mu HEAD # timeout=10
00:00:04.853 > git checkout -f 9bbc799d7020f50509d938dbe97dc05da0c1b5c3 # timeout=5
00:00:04.869 Commit message: "pool: fixes for VisualBuild class"
00:00:04.869 > git rev-list --no-walk 9bbc799d7020f50509d938dbe97dc05da0c1b5c3 # timeout=10
00:00:04.944 [Pipeline] Start of Pipeline
00:00:04.957 [Pipeline] library
00:00:04.958 Loading library shm_lib@master
00:00:04.959 Library shm_lib@master is cached. Copying from home.
00:00:04.975 [Pipeline] node
00:00:04.983 Running on CYP9 in /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:04.984 [Pipeline] {
00:00:04.993 [Pipeline] catchError
00:00:04.994 [Pipeline] {
00:00:05.003 [Pipeline] wrap
00:00:05.010 [Pipeline] {
00:00:05.015 [Pipeline] stage
00:00:05.017 [Pipeline] { (Prologue)
00:00:05.189 [Pipeline] sh
00:00:05.472 + logger -p user.info -t JENKINS-CI
00:00:05.489 [Pipeline] echo
00:00:05.490 Node: CYP9
00:00:05.494 [Pipeline] sh
00:00:05.791 [Pipeline] setCustomBuildProperty
00:00:05.802 [Pipeline] echo
00:00:05.803 Cleanup processes
00:00:05.808 [Pipeline] sh
00:00:06.090 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:06.090 2759485 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:06.101 [Pipeline] sh
00:00:06.380 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:06.380 ++ grep -v 'sudo pgrep'
00:00:06.380 ++ awk '{print $1}'
00:00:06.380 + sudo kill -9
00:00:06.380 + true
00:00:06.392 [Pipeline] cleanWs
00:00:06.450 [WS-CLEANUP] Deleting project workspace...
00:00:06.450 [WS-CLEANUP] Deferred wipeout is used...
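The cleanup step above builds a PID list with `pgrep | grep -v | awk` and kills it, with `true` absorbing the no-match case (here the only match was the `pgrep` itself, so `kill -9` ran with no arguments). A small sketch of that filter, with the `pgrep -af` output simulated as a literal string so it can be shown without sudo; the paths and PIDs are illustrative only:

```shell
# Simulated `pgrep -af <workspace>` output: PID followed by command line.
pgrep_output='2759485 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
2760001 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt'

# Drop the pgrep invocation itself, keep only the PID column.
pids=$(printf '%s\n' "$pgrep_output" | grep -v 'sudo pgrep' | awk '{print $1}')
echo "$pids"
# The job then runs:  sudo kill -9 $pids || true
# so an empty PID list (nothing to clean up) does not fail the build.
```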
00:00:06.459 [WS-CLEANUP] done
00:00:06.462 [Pipeline] setCustomBuildProperty
00:00:06.473 [Pipeline] sh
00:00:06.753 + sudo git config --global --replace-all safe.directory '*'
00:00:06.813 [Pipeline] nodesByLabel
00:00:06.814 Found a total of 2 nodes with the 'sorcerer' label
00:00:06.821 [Pipeline] httpRequest
00:00:06.825 HttpMethod: GET
00:00:06.826 URL: http://10.211.164.101/packages/jbp_9bbc799d7020f50509d938dbe97dc05da0c1b5c3.tar.gz
00:00:06.829 Sending request to url: http://10.211.164.101/packages/jbp_9bbc799d7020f50509d938dbe97dc05da0c1b5c3.tar.gz
00:00:06.847 Response Code: HTTP/1.1 200 OK
00:00:06.847 Success: Status code 200 is in the accepted range: 200,404
00:00:06.848 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_9bbc799d7020f50509d938dbe97dc05da0c1b5c3.tar.gz
00:00:24.772 [Pipeline] sh
00:00:25.059 + tar --no-same-owner -xf jbp_9bbc799d7020f50509d938dbe97dc05da0c1b5c3.tar.gz
00:00:25.080 [Pipeline] httpRequest
00:00:25.085 HttpMethod: GET
00:00:25.085 URL: http://10.211.164.101/packages/spdk_5a57befde0c24aeab0694df84e45dc7e723c3b1a.tar.gz
00:00:25.086 Sending request to url: http://10.211.164.101/packages/spdk_5a57befde0c24aeab0694df84e45dc7e723c3b1a.tar.gz
00:00:25.112 Response Code: HTTP/1.1 200 OK
00:00:25.113 Success: Status code 200 is in the accepted range: 200,404
00:00:25.113 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_5a57befde0c24aeab0694df84e45dc7e723c3b1a.tar.gz
00:01:53.792 [Pipeline] sh
00:01:54.080 + tar --no-same-owner -xf spdk_5a57befde0c24aeab0694df84e45dc7e723c3b1a.tar.gz
00:01:56.646 [Pipeline] sh
00:01:56.926 + git -C spdk log --oneline -n5
00:01:56.926 5a57befde test: add a test for SPDK vs kernel TLS
00:01:56.926 7fc2ab43c scripts: add a keyctl session wrapper
00:01:56.926 00058f4d0 test/nvmf/common: do not use subnqn as model
00:01:56.926 fa40728d6 test/common: continue waitforserial on grep error
00:01:56.926 70de0af3e test/nvmf/common: do not require NVMe in configure_kernel_target()
00:01:56.937 [Pipeline] }
00:01:56.953 [Pipeline] // stage
00:01:56.961 [Pipeline] stage
00:01:56.963 [Pipeline] { (Prepare)
00:01:56.979 [Pipeline] writeFile
00:01:56.995 [Pipeline] sh
00:01:57.326 + logger -p user.info -t JENKINS-CI
00:01:57.338 [Pipeline] sh
00:01:57.622 + logger -p user.info -t JENKINS-CI
00:01:57.634 [Pipeline] sh
00:01:57.916 + cat autorun-spdk.conf
00:01:57.916 SPDK_RUN_FUNCTIONAL_TEST=1
00:01:57.916 SPDK_TEST_NVMF=1
00:01:57.916 SPDK_TEST_NVME_CLI=1
00:01:57.916 SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:57.916 SPDK_TEST_NVMF_NICS=e810
00:01:57.916 SPDK_TEST_VFIOUSER=1
00:01:57.916 SPDK_RUN_UBSAN=1
00:01:57.916 NET_TYPE=phy
00:01:57.925 RUN_NIGHTLY=0
00:01:57.930 [Pipeline] readFile
00:01:57.955 [Pipeline] withEnv
00:01:57.957 [Pipeline] {
00:01:57.971 [Pipeline] sh
00:01:58.259 + set -ex
00:01:58.259 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]]
00:01:58.259 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:01:58.259 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:58.259 ++ SPDK_TEST_NVMF=1
00:01:58.259 ++ SPDK_TEST_NVME_CLI=1
00:01:58.259 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:58.259 ++ SPDK_TEST_NVMF_NICS=e810
00:01:58.259 ++ SPDK_TEST_VFIOUSER=1
00:01:58.259 ++ SPDK_RUN_UBSAN=1
00:01:58.259 ++ NET_TYPE=phy
00:01:58.259 ++ RUN_NIGHTLY=0
00:01:58.259 + case $SPDK_TEST_NVMF_NICS in
00:01:58.259 + DRIVERS=ice
00:01:58.259 + [[ tcp == \r\d\m\a ]]
00:01:58.259 + [[ -n ice ]]
00:01:58.259 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4
00:01:58.259 rmmod: ERROR: Module mlx4_ib is not currently loaded
00:01:58.259 rmmod: ERROR: Module mlx5_ib is not currently loaded
00:01:58.259 rmmod: ERROR: Module irdma is not currently loaded
00:01:58.259 rmmod: ERROR: Module i40iw is not currently loaded
00:01:58.259 rmmod: ERROR: Module iw_cxgb4 is not currently loaded
00:01:58.259 + true
00:01:58.259 + for D in $DRIVERS
00:01:58.259 + sudo modprobe ice
00:01:58.259 + exit 0
00:01:58.269
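The trace above maps `SPDK_TEST_NVMF_NICS=e810` to `DRIVERS=ice` via a `case` statement, unloads possibly conflicting RDMA modules while tolerating "not currently loaded" errors, then loads the selected driver. A simplified sketch of that selection logic; only the `e810 -> ice` arm is visible in this log, so the fallback arm is a guess:

```shell
# Map the NIC family under test to its kernel driver (e810 -> ice is the
# only mapping this log demonstrates; other NICs would need their own arms).
nic_to_driver() {
    case "$1" in
        e810) echo ice ;;
        *)    echo unknown ;;
    esac
}

DRIVERS=$(nic_to_driver e810)
echo "$DRIVERS"
# The job then reloads drivers tolerantly, as traced above:
#   sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 || true
#   for D in $DRIVERS; do sudo modprobe "$D"; done
```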
[Pipeline] }
00:01:58.286 [Pipeline] // withEnv
00:01:58.292 [Pipeline] }
00:01:58.303 [Pipeline] // stage
00:01:58.311 [Pipeline] catchError
00:01:58.313 [Pipeline] {
00:01:58.325 [Pipeline] timeout
00:01:58.325 Timeout set to expire in 50 min
00:01:58.326 [Pipeline] {
00:01:58.337 [Pipeline] stage
00:01:58.338 [Pipeline] { (Tests)
00:01:58.351 [Pipeline] sh
00:01:58.638 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:58.638 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:58.638 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:58.638 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]]
00:01:58.638 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:58.638 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:01:58.638 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]]
00:01:58.638 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:01:58.638 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:01:58.638 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:01:58.638 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]]
00:01:58.638 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:58.638 + source /etc/os-release
00:01:58.638 ++ NAME='Fedora Linux'
00:01:58.638 ++ VERSION='38 (Cloud Edition)'
00:01:58.638 ++ ID=fedora
00:01:58.638 ++ VERSION_ID=38
00:01:58.638 ++ VERSION_CODENAME=
00:01:58.638 ++ PLATFORM_ID=platform:f38
00:01:58.638 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)'
00:01:58.638 ++ ANSI_COLOR='0;38;2;60;110;180'
00:01:58.638 ++ LOGO=fedora-logo-icon
00:01:58.638 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38
00:01:58.638 ++ HOME_URL=https://fedoraproject.org/
00:01:58.638 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/
00:01:58.638 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:01:58.638 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:01:58.638 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:01:58.638 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38
00:01:58.638 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:01:58.638 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38
00:01:58.638 ++ SUPPORT_END=2024-05-14
00:01:58.638 ++ VARIANT='Cloud Edition'
00:01:58.638 ++ VARIANT_ID=cloud
00:01:58.638 + uname -a
00:01:58.638 Linux spdk-cyp-09 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux
00:01:58.638 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:02:01.942 Hugepages
00:02:01.942 node hugesize free / total
00:02:01.942 node0 1048576kB 0 / 0
00:02:01.942 node0 2048kB 0 / 0
00:02:01.942 node1 1048576kB 0 / 0
00:02:01.942 node1 2048kB 0 / 0
00:02:01.942
00:02:01.942 Type BDF Vendor Device NUMA Driver Device Block devices
00:02:01.942 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - -
00:02:01.942 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - -
00:02:01.942 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - -
00:02:01.942 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - -
00:02:01.942 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - -
00:02:01.942 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - -
00:02:01.942 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - -
00:02:01.942 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - -
00:02:01.942 NVMe 0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1
00:02:01.942 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - -
00:02:01.942 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - -
00:02:01.942 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - -
00:02:01.942 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - -
00:02:01.942 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - -
00:02:01.942 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - -
00:02:01.942 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - -
00:02:01.942 I/OAT 0000:80:01.7 8086 0b00 1 ioatdma - -
00:02:01.942 + rm -f /tmp/spdk-ld-path
00:02:01.942 + source autorun-spdk.conf
00:02:01.942 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:02:01.942 ++ SPDK_TEST_NVMF=1
00:02:01.942 ++ SPDK_TEST_NVME_CLI=1
00:02:01.942 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:02:01.942 ++ SPDK_TEST_NVMF_NICS=e810
00:02:01.942 ++ SPDK_TEST_VFIOUSER=1
00:02:01.942 ++ SPDK_RUN_UBSAN=1
00:02:01.942 ++ NET_TYPE=phy
00:02:01.942 ++ RUN_NIGHTLY=0
00:02:01.942 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:02:01.942 + [[ -n '' ]]
00:02:01.942 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:02:01.942 + for M in /var/spdk/build-*-manifest.txt
00:02:01.942 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:02:01.942 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:02:01.942 + for M in /var/spdk/build-*-manifest.txt
00:02:01.942 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:02:01.942 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:02:01.942 ++ uname
00:02:01.942 + [[ Linux == \L\i\n\u\x ]]
00:02:01.942 + sudo dmesg -T
00:02:01.942 + sudo dmesg --clear
00:02:01.942 + dmesg_pid=2761033
00:02:01.942 + [[ Fedora Linux == FreeBSD ]]
00:02:01.942 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:02:01.942 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:02:01.942 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:02:01.943 + export VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2
00:02:01.943 + VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2
00:02:01.943 + [[ -x /usr/src/fio-static/fio ]]
00:02:01.943 + sudo dmesg -Tw
00:02:01.943 + export FIO_BIN=/usr/src/fio-static/fio
00:02:01.943 + FIO_BIN=/usr/src/fio-static/fio
00:02:01.943 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]]
00:02:01.943 + [[ ! -v VFIO_QEMU_BIN ]]
00:02:01.943 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:02:01.943 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:02:01.943 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:02:01.943 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:02:01.943 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:02:01.943 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:02:01.943 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:02:01.943 Test configuration:
00:02:01.943 SPDK_RUN_FUNCTIONAL_TEST=1
00:02:01.943 SPDK_TEST_NVMF=1
00:02:01.943 SPDK_TEST_NVME_CLI=1
00:02:01.943 SPDK_TEST_NVMF_TRANSPORT=tcp
00:02:01.943 SPDK_TEST_NVMF_NICS=e810
00:02:01.943 SPDK_TEST_VFIOUSER=1
00:02:01.943 SPDK_RUN_UBSAN=1
00:02:01.943 NET_TYPE=phy
00:02:01.943 RUN_NIGHTLY=0
00:02:01.943 16:10:28 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:02:01.943 16:10:28 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]]
00:02:01.943 16:10:28 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:02:01.943 16:10:28 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:02:01.943 16:10:28 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:01.943 16:10:28 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:01.943 16:10:28 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:01.943 16:10:28 -- paths/export.sh@5 -- $ export PATH
00:02:01.943 16:10:28 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:01.943 16:10:28 -- common/autobuild_common.sh@436 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
00:02:01.943 16:10:28 -- common/autobuild_common.sh@437 -- $ date +%s
00:02:01.943 16:10:28 -- common/autobuild_common.sh@437 -- $ mktemp -dt spdk_1717769428.XXXXXX
00:02:01.943 16:10:28 -- common/autobuild_common.sh@437 -- $ SPDK_WORKSPACE=/tmp/spdk_1717769428.ClR1fU
00:02:01.943 16:10:28 -- common/autobuild_common.sh@439 -- $ [[ -n '' ]]
00:02:01.943 16:10:28 -- common/autobuild_common.sh@443 -- $ '[' -n '' ']'
00:02:01.943 16:10:28 -- common/autobuild_common.sh@446 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/'
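The `paths/export.sh` trace above prepends each toolchain directory to `PATH` unconditionally, which is why entries such as `/opt/go/1.21.1/bin` and `/opt/golangci/1.54.2/bin` appear twice in the final value. A duplicate-aware prepend (not what the script does, just a sketch of the alternative) can guard on the colon-delimited list; a scratch variable is used here instead of `PATH` itself:

```shell
# Prepend $1 to the colon-separated list $2 only if it is not already present.
path_prepend() {
    case ":$2:" in
        *":$1:"*) echo "$2" ;;      # already in the list: unchanged
        *)        echo "$1:$2" ;;   # not present: prepend
    esac
}

newpath=$(path_prepend /opt/go/1.21.1/bin "/usr/bin:/bin")
newpath=$(path_prepend /opt/go/1.21.1/bin "$newpath")   # second call is a no-op
echo "$newpath"
# → /opt/go/1.21.1/bin:/usr/bin:/bin
```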
00:02:01.943 16:10:28 -- common/autobuild_common.sh@450 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp'
00:02:01.943 16:10:28 -- common/autobuild_common.sh@452 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:02:01.943 16:10:28 -- common/autobuild_common.sh@453 -- $ get_config_params
00:02:01.943 16:10:28 -- common/autotest_common.sh@396 -- $ xtrace_disable
00:02:01.943 16:10:28 -- common/autotest_common.sh@10 -- $ set +x
00:02:01.943 16:10:28 -- common/autobuild_common.sh@453 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user'
00:02:01.943 16:10:28 -- common/autobuild_common.sh@455 -- $ start_monitor_resources
00:02:01.943 16:10:28 -- pm/common@17 -- $ local monitor
00:02:01.943 16:10:28 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:02:01.943 16:10:28 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:02:01.943 16:10:28 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:02:01.943 16:10:28 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:02:01.943 16:10:28 -- pm/common@21 -- $ date +%s
00:02:01.943 16:10:28 -- pm/common@21 -- $ date +%s
00:02:01.943 16:10:28 -- pm/common@25 -- $ sleep 1
00:02:01.943 16:10:28 -- pm/common@21 -- $ date +%s
00:02:01.943 16:10:28 -- pm/common@21 -- $ date +%s
00:02:01.943 16:10:28 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1717769428
00:02:01.943 16:10:28 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1717769428
00:02:01.943 16:10:28 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1717769428
00:02:01.943 16:10:28 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1717769428
00:02:01.943 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1717769428_collect-vmstat.pm.log
00:02:01.943 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1717769428_collect-cpu-load.pm.log
00:02:01.943 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1717769428_collect-bmc-pm.bmc.pm.log
00:02:01.943 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1717769428_collect-cpu-temp.pm.log
00:02:02.887 16:10:29 -- common/autobuild_common.sh@456 -- $ trap stop_monitor_resources EXIT
00:02:02.887 16:10:29 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:02:02.887 16:10:29 -- spdk/autobuild.sh@12 -- $ umask 022
00:02:02.887 16:10:29 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:02:02.887 16:10:29 -- spdk/autobuild.sh@16 -- $ date -u
00:02:02.887 Fri Jun 7 02:10:29 PM UTC 2024
00:02:02.887 16:10:29 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:02:02.887 v24.09-pre-60-g5a57befde
00:02:02.887 16:10:29 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']'
00:02:02.887 16:10:29 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:02:02.887 16:10:29 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:02:02.887 16:10:29 -- common/autotest_common.sh@1100 -- $ '[' 3 -le 1 ']'
00:02:02.887 16:10:29 -- common/autotest_common.sh@1106 -- $ xtrace_disable
00:02:02.887 16:10:29 -- common/autotest_common.sh@10 -- $ set +x
00:02:02.887 ************************************
00:02:02.887 START TEST ubsan
00:02:02.887 ************************************
00:02:02.887 16:10:29 ubsan -- common/autotest_common.sh@1124 -- $ echo 'using ubsan'
00:02:02.887 using ubsan
00:02:02.887
00:02:02.887 real 0m0.001s
00:02:02.887 user 0m0.000s
00:02:02.887 sys 0m0.000s
00:02:02.887 16:10:29 ubsan -- common/autotest_common.sh@1125 -- $ xtrace_disable
00:02:02.887 16:10:29 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:02:02.887 ************************************
00:02:02.887 END TEST ubsan
00:02:02.887 ************************************
00:02:02.887 16:10:29 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:02:02.887 16:10:29 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:02:02.887 16:10:29 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:02:02.887 16:10:29 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:02:02.887 16:10:29 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:02:02.887 16:10:29 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:02:02.887 16:10:29 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:02:02.887 16:10:29 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:02:02.887 16:10:29 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared
00:02:03.147 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk
00:02:03.147 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build
00:02:03.408 Using 'verbs' RDMA provider
00:02:19.265 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done.
00:02:31.566 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done.
00:02:31.566 Creating mk/config.mk...done.
00:02:31.566 Creating mk/cc.flags.mk...done.
00:02:31.566 Type 'make' to build.
00:02:31.566 16:10:57 -- spdk/autobuild.sh@69 -- $ run_test make make -j144
00:02:31.566 16:10:57 -- common/autotest_common.sh@1100 -- $ '[' 3 -le 1 ']'
00:02:31.566 16:10:57 -- common/autotest_common.sh@1106 -- $ xtrace_disable
00:02:31.566 16:10:57 -- common/autotest_common.sh@10 -- $ set +x
00:02:31.566 ************************************
00:02:31.566 START TEST make
00:02:31.566 ************************************
00:02:31.566 16:10:57 make -- common/autotest_common.sh@1124 -- $ make -j144
00:02:31.566 make[1]: Nothing to be done for 'all'.
00:02:32.508 The Meson build system
00:02:32.508 Version: 1.3.1
00:02:32.508 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user
00:02:32.508 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:02:32.508 Build type: native build
00:02:32.508 Project name: libvfio-user
00:02:32.508 Project version: 0.0.1
00:02:32.508 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)")
00:02:32.508 C linker for the host machine: cc ld.bfd 2.39-16
00:02:32.508 Host machine cpu family: x86_64
00:02:32.508 Host machine cpu: x86_64
00:02:32.508 Run-time dependency threads found: YES
00:02:32.508 Library dl found: YES
00:02:32.508 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0
00:02:32.508 Run-time dependency json-c found: YES 0.17
00:02:32.508 Run-time dependency cmocka found: YES 1.1.7
00:02:32.508 Program pytest-3 found: NO
00:02:32.508 Program flake8 found: NO
00:02:32.508 Program misspell-fixer found: NO
00:02:32.508 Program restructuredtext-lint found: NO
00:02:32.508 Program valgrind found: YES (/usr/bin/valgrind)
00:02:32.508 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:02:32.508 Compiler for C supports arguments -Wmissing-declarations: YES
00:02:32.508 Compiler for C supports arguments -Wwrite-strings: YES
00:02:32.508 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:02:32.508 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh)
00:02:32.508 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh)
00:02:32.508 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:02:32.508 Build targets in project: 8
00:02:32.508 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions:
00:02:32.508 * 0.57.0: {'exclude_suites arg in add_test_setup'}
00:02:32.508
00:02:32.508 libvfio-user 0.0.1
00:02:32.508
00:02:32.508 User defined options
00:02:32.508 buildtype : debug
00:02:32.508 default_library: shared
00:02:32.508 libdir : /usr/local/lib
00:02:32.508
00:02:32.508 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:02:33.074 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:02:33.074 [1/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o
00:02:33.074 [2/37] Compiling C object samples/null.p/null.c.o
00:02:33.074 [3/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o
00:02:33.074 [4/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o
00:02:33.074 [5/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o
00:02:33.074 [6/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o
00:02:33.074 [7/37] Compiling C object samples/client.p/.._lib_tran.c.o
00:02:33.074 [8/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o
00:02:33.074 [9/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o
00:02:33.074 [10/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o
00:02:33.074 [11/37] Compiling C object samples/lspci.p/lspci.c.o
00:02:33.074 [12/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o
00:02:33.074 [13/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o
00:02:33.074 [14/37] Compiling C object test/unit_tests.p/mocks.c.o
00:02:33.074 [15/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o
00:02:33.074 [16/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o
00:02:33.074 [17/37] Compiling C object samples/server.p/server.c.o
00:02:33.074 [18/37] Compiling C object samples/client.p/.._lib_migration.c.o
00:02:33.074 [19/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o
00:02:33.074 [20/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o
00:02:33.074 [21/37] Compiling C object test/unit_tests.p/unit-tests.c.o
00:02:33.074 [22/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o
00:02:33.074 [23/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o
00:02:33.074 [24/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o
00:02:33.074 [25/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o
00:02:33.074 [26/37] Compiling C object samples/client.p/client.c.o
00:02:33.074 [27/37] Linking target samples/client
00:02:33.074 [28/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o
00:02:33.074 [29/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o
00:02:33.074 [30/37] Linking target lib/libvfio-user.so.0.0.1
00:02:33.333 [31/37] Linking target test/unit_tests
00:02:33.333 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols
00:02:33.333 [33/37] Linking target samples/lspci
00:02:33.333 [34/37] Linking target samples/null
00:02:33.333 [35/37] Linking target samples/gpio-pci-idio-16
00:02:33.333 [36/37] Linking target samples/shadow_ioeventfd_server
00:02:33.333 [37/37] Linking target samples/server
00:02:33.333 INFO: autodetecting backend as ninja
00:02:33.333 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:02:33.333 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:02:33.592 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:02:33.853 ninja: no work to do.
00:02:40.448 The Meson build system
00:02:40.448 Version: 1.3.1
00:02:40.448 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk
00:02:40.448 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp
00:02:40.448 Build type: native build
00:02:40.448 Program cat found: YES (/usr/bin/cat)
00:02:40.448 Project name: DPDK
00:02:40.448 Project version: 24.03.0
00:02:40.448 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)")
00:02:40.448 C linker for the host machine: cc ld.bfd 2.39-16
00:02:40.448 Host machine cpu family: x86_64
00:02:40.448 Host machine cpu: x86_64
00:02:40.448 Message: ## Building in Developer Mode ##
00:02:40.448 Program pkg-config found: YES (/usr/bin/pkg-config)
00:02:40.448 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh)
00:02:40.448 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:02:40.448 Program python3 found: YES (/usr/bin/python3)
00:02:40.448 Program cat found: YES (/usr/bin/cat)
00:02:40.448 Compiler for C supports arguments -march=native: YES
00:02:40.448 Checking for size of "void *" : 8
00:02:40.448 Checking for size of "void *" : 8 (cached)
00:02:40.448 Compiler for C supports link arguments -Wl,--undefined-version: NO
00:02:40.448 Library m found: YES
00:02:40.448 Library numa found: YES
00:02:40.448 Has header "numaif.h" : YES
00:02:40.448 Library fdt found: NO
00:02:40.448 Library execinfo found: NO
00:02:40.448 Has header "execinfo.h" : YES
00:02:40.448 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0
00:02:40.448 Run-time dependency libarchive found: NO (tried pkgconfig)
00:02:40.448 Run-time dependency libbsd found: NO (tried pkgconfig)
00:02:40.448 Run-time dependency jansson found: NO (tried pkgconfig)
00:02:40.448 Run-time dependency openssl found: YES 3.0.9
00:02:40.448 Run-time dependency libpcap found: YES 1.10.4
00:02:40.448 Has header "pcap.h" with dependency libpcap: YES
00:02:40.448 Compiler for C supports arguments -Wcast-qual: YES
00:02:40.448 Compiler for C supports arguments -Wdeprecated: YES
00:02:40.448 Compiler for C supports arguments -Wformat: YES
00:02:40.448 Compiler for C supports arguments -Wformat-nonliteral: NO
00:02:40.448 Compiler for C supports arguments -Wformat-security: NO
00:02:40.448 Compiler for C supports arguments -Wmissing-declarations: YES
00:02:40.448 Compiler for C supports arguments -Wmissing-prototypes: YES
00:02:40.448 Compiler for C supports arguments -Wnested-externs: YES
00:02:40.448 Compiler for C supports arguments -Wold-style-definition: YES
00:02:40.448 Compiler for C supports arguments -Wpointer-arith: YES
00:02:40.448 Compiler for C supports arguments -Wsign-compare: YES
00:02:40.448 Compiler for C supports arguments -Wstrict-prototypes: YES
00:02:40.448 Compiler for C supports arguments -Wundef: YES
00:02:40.448 Compiler for C supports arguments -Wwrite-strings: YES
00:02:40.448 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:02:40.448 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:02:40.448 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:02:40.448 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:02:40.448 Program objdump found: YES (/usr/bin/objdump)
00:02:40.448 Compiler for C supports arguments -mavx512f: YES
00:02:40.448 Checking if "AVX512 checking" compiles: YES
00:02:40.448 Fetching value of define "__SSE4_2__" : 1
00:02:40.448 Fetching value of define "__AES__" : 1
00:02:40.448 Fetching value of define "__AVX__" : 1
00:02:40.448 Fetching value of define "__AVX2__" : 1
00:02:40.448 Fetching value of define "__AVX512BW__" : 1
00:02:40.448 Fetching value of define "__AVX512CD__" : 1
00:02:40.448 Fetching value of define "__AVX512DQ__" : 1
00:02:40.448 Fetching value of define "__AVX512F__" : 1
00:02:40.448 Fetching value of define "__AVX512VL__" : 1
00:02:40.448 Fetching value of define "__PCLMUL__" : 1
00:02:40.448 Fetching value of define "__RDRND__" : 1
00:02:40.449 Fetching value of define "__RDSEED__" : 1
00:02:40.449 Fetching value of define "__VPCLMULQDQ__" : 1
00:02:40.449 Fetching value of define "__znver1__" : (undefined)
00:02:40.449 Fetching value of define "__znver2__" : (undefined)
00:02:40.449 Fetching value of define "__znver3__" : (undefined)
00:02:40.449 Fetching value of define "__znver4__" : (undefined)
00:02:40.449 Compiler for C supports arguments -Wno-format-truncation: YES
00:02:40.449 Message: lib/log: Defining dependency "log"
00:02:40.449 Message: lib/kvargs: Defining dependency "kvargs"
00:02:40.449 Message: lib/telemetry: Defining dependency "telemetry"
00:02:40.449 Checking for function "getentropy" : NO
00:02:40.449 Message: lib/eal: Defining dependency "eal"
00:02:40.449 Message: lib/ring: Defining dependency "ring"
00:02:40.449 Message: lib/rcu: Defining dependency "rcu"
00:02:40.449 Message: lib/mempool: Defining dependency "mempool"
00:02:40.449 Message: lib/mbuf: Defining dependency "mbuf"
00:02:40.449 Fetching value of define "__PCLMUL__" : 1 (cached)
00:02:40.449 Fetching value of define "__AVX512F__" : 1 (cached)
00:02:40.449 Fetching value of define "__AVX512BW__" : 1 (cached)
00:02:40.449 Fetching value of define "__AVX512DQ__" : 1 (cached)
00:02:40.449 Fetching value of define "__AVX512VL__" : 1 (cached)
00:02:40.449 Fetching value of define "__VPCLMULQDQ__" : 1 (cached)
00:02:40.449 Compiler for C supports arguments -mpclmul: YES
00:02:40.449 Compiler for C supports arguments -maes: YES
00:02:40.449 Compiler for C supports arguments -mavx512f: YES (cached)
00:02:40.449 Compiler for C supports arguments -mavx512bw: YES
00:02:40.449 Compiler for C supports arguments -mavx512dq: YES
00:02:40.449 Compiler for C supports arguments -mavx512vl: YES
00:02:40.449 Compiler for C supports arguments -mvpclmulqdq: YES
00:02:40.449 Compiler for C supports arguments -mavx2: YES
00:02:40.449 Compiler for C supports arguments -mavx: YES
00:02:40.449 Message: lib/net: Defining dependency "net"
00:02:40.449 Message: lib/meter: Defining dependency "meter"
00:02:40.449 Message: lib/ethdev: Defining dependency "ethdev"
00:02:40.449 Message: lib/pci: Defining dependency "pci"
00:02:40.449 Message: lib/cmdline: Defining dependency "cmdline"
00:02:40.449 Message: lib/hash: Defining dependency "hash"
00:02:40.449 Message: lib/timer: Defining dependency "timer"
00:02:40.449 Message: lib/compressdev: Defining dependency "compressdev"
00:02:40.449 Message: lib/cryptodev: Defining dependency "cryptodev"
00:02:40.449 Message: lib/dmadev: Defining dependency "dmadev"
00:02:40.449 Compiler for C supports arguments -Wno-cast-qual: YES
00:02:40.449 Message: lib/power: Defining dependency "power"
00:02:40.449 Message: lib/reorder: Defining dependency "reorder"
00:02:40.449 Message: lib/security: Defining dependency "security"
00:02:40.449 Has header "linux/userfaultfd.h" : YES
00:02:40.449 Has header "linux/vduse.h" : YES
00:02:40.449 Message: lib/vhost: Defining dependency "vhost"
00:02:40.449 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:02:40.449 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:40.449 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:40.449 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:40.449 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:40.449 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:40.449 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:40.449 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:40.449 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:40.449 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:02:40.449 Program doxygen found: YES (/usr/bin/doxygen) 00:02:40.449 Configuring doxy-api-html.conf using configuration 00:02:40.449 Configuring doxy-api-man.conf using configuration 00:02:40.449 Program mandb found: YES (/usr/bin/mandb) 00:02:40.449 Program sphinx-build found: NO 00:02:40.449 Configuring rte_build_config.h using configuration 00:02:40.449 Message: 00:02:40.449 ================= 00:02:40.449 Applications Enabled 00:02:40.449 ================= 00:02:40.449 00:02:40.449 apps: 00:02:40.449 00:02:40.449 00:02:40.449 Message: 00:02:40.449 ================= 00:02:40.449 Libraries Enabled 00:02:40.449 ================= 00:02:40.449 00:02:40.449 libs: 00:02:40.449 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:40.449 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:02:40.449 cryptodev, dmadev, power, reorder, security, vhost, 00:02:40.449 00:02:40.449 Message: 00:02:40.449 =============== 00:02:40.449 Drivers Enabled 00:02:40.449 =============== 00:02:40.449 00:02:40.449 common: 00:02:40.449 00:02:40.449 bus: 00:02:40.449 pci, vdev, 00:02:40.449 mempool: 00:02:40.449 ring, 00:02:40.449 dma: 00:02:40.449 00:02:40.449 net: 00:02:40.449 00:02:40.449 crypto: 00:02:40.449 00:02:40.449 compress: 00:02:40.449 
00:02:40.449 vdpa: 00:02:40.449 00:02:40.449 00:02:40.449 Message: 00:02:40.449 ================= 00:02:40.449 Content Skipped 00:02:40.449 ================= 00:02:40.449 00:02:40.449 apps: 00:02:40.449 dumpcap: explicitly disabled via build config 00:02:40.449 graph: explicitly disabled via build config 00:02:40.449 pdump: explicitly disabled via build config 00:02:40.449 proc-info: explicitly disabled via build config 00:02:40.449 test-acl: explicitly disabled via build config 00:02:40.449 test-bbdev: explicitly disabled via build config 00:02:40.449 test-cmdline: explicitly disabled via build config 00:02:40.449 test-compress-perf: explicitly disabled via build config 00:02:40.449 test-crypto-perf: explicitly disabled via build config 00:02:40.449 test-dma-perf: explicitly disabled via build config 00:02:40.449 test-eventdev: explicitly disabled via build config 00:02:40.449 test-fib: explicitly disabled via build config 00:02:40.449 test-flow-perf: explicitly disabled via build config 00:02:40.449 test-gpudev: explicitly disabled via build config 00:02:40.449 test-mldev: explicitly disabled via build config 00:02:40.449 test-pipeline: explicitly disabled via build config 00:02:40.449 test-pmd: explicitly disabled via build config 00:02:40.449 test-regex: explicitly disabled via build config 00:02:40.449 test-sad: explicitly disabled via build config 00:02:40.449 test-security-perf: explicitly disabled via build config 00:02:40.449 00:02:40.449 libs: 00:02:40.449 argparse: explicitly disabled via build config 00:02:40.449 metrics: explicitly disabled via build config 00:02:40.449 acl: explicitly disabled via build config 00:02:40.449 bbdev: explicitly disabled via build config 00:02:40.449 bitratestats: explicitly disabled via build config 00:02:40.449 bpf: explicitly disabled via build config 00:02:40.449 cfgfile: explicitly disabled via build config 00:02:40.449 distributor: explicitly disabled via build config 00:02:40.449 efd: explicitly disabled via build 
config 00:02:40.449 eventdev: explicitly disabled via build config 00:02:40.449 dispatcher: explicitly disabled via build config 00:02:40.449 gpudev: explicitly disabled via build config 00:02:40.449 gro: explicitly disabled via build config 00:02:40.449 gso: explicitly disabled via build config 00:02:40.449 ip_frag: explicitly disabled via build config 00:02:40.449 jobstats: explicitly disabled via build config 00:02:40.449 latencystats: explicitly disabled via build config 00:02:40.449 lpm: explicitly disabled via build config 00:02:40.449 member: explicitly disabled via build config 00:02:40.449 pcapng: explicitly disabled via build config 00:02:40.449 rawdev: explicitly disabled via build config 00:02:40.449 regexdev: explicitly disabled via build config 00:02:40.449 mldev: explicitly disabled via build config 00:02:40.449 rib: explicitly disabled via build config 00:02:40.449 sched: explicitly disabled via build config 00:02:40.449 stack: explicitly disabled via build config 00:02:40.449 ipsec: explicitly disabled via build config 00:02:40.449 pdcp: explicitly disabled via build config 00:02:40.449 fib: explicitly disabled via build config 00:02:40.449 port: explicitly disabled via build config 00:02:40.449 pdump: explicitly disabled via build config 00:02:40.449 table: explicitly disabled via build config 00:02:40.449 pipeline: explicitly disabled via build config 00:02:40.449 graph: explicitly disabled via build config 00:02:40.449 node: explicitly disabled via build config 00:02:40.449 00:02:40.449 drivers: 00:02:40.449 common/cpt: not in enabled drivers build config 00:02:40.449 common/dpaax: not in enabled drivers build config 00:02:40.449 common/iavf: not in enabled drivers build config 00:02:40.449 common/idpf: not in enabled drivers build config 00:02:40.449 common/ionic: not in enabled drivers build config 00:02:40.449 common/mvep: not in enabled drivers build config 00:02:40.449 common/octeontx: not in enabled drivers build config 00:02:40.449 
bus/auxiliary: not in enabled drivers build config 00:02:40.449 bus/cdx: not in enabled drivers build config 00:02:40.449 bus/dpaa: not in enabled drivers build config 00:02:40.449 bus/fslmc: not in enabled drivers build config 00:02:40.449 bus/ifpga: not in enabled drivers build config 00:02:40.449 bus/platform: not in enabled drivers build config 00:02:40.450 bus/uacce: not in enabled drivers build config 00:02:40.450 bus/vmbus: not in enabled drivers build config 00:02:40.450 common/cnxk: not in enabled drivers build config 00:02:40.450 common/mlx5: not in enabled drivers build config 00:02:40.450 common/nfp: not in enabled drivers build config 00:02:40.450 common/nitrox: not in enabled drivers build config 00:02:40.450 common/qat: not in enabled drivers build config 00:02:40.450 common/sfc_efx: not in enabled drivers build config 00:02:40.450 mempool/bucket: not in enabled drivers build config 00:02:40.450 mempool/cnxk: not in enabled drivers build config 00:02:40.450 mempool/dpaa: not in enabled drivers build config 00:02:40.450 mempool/dpaa2: not in enabled drivers build config 00:02:40.450 mempool/octeontx: not in enabled drivers build config 00:02:40.450 mempool/stack: not in enabled drivers build config 00:02:40.450 dma/cnxk: not in enabled drivers build config 00:02:40.450 dma/dpaa: not in enabled drivers build config 00:02:40.450 dma/dpaa2: not in enabled drivers build config 00:02:40.450 dma/hisilicon: not in enabled drivers build config 00:02:40.450 dma/idxd: not in enabled drivers build config 00:02:40.450 dma/ioat: not in enabled drivers build config 00:02:40.450 dma/skeleton: not in enabled drivers build config 00:02:40.450 net/af_packet: not in enabled drivers build config 00:02:40.450 net/af_xdp: not in enabled drivers build config 00:02:40.450 net/ark: not in enabled drivers build config 00:02:40.450 net/atlantic: not in enabled drivers build config 00:02:40.450 net/avp: not in enabled drivers build config 00:02:40.450 net/axgbe: not in enabled 
drivers build config 00:02:40.450 net/bnx2x: not in enabled drivers build config 00:02:40.450 net/bnxt: not in enabled drivers build config 00:02:40.450 net/bonding: not in enabled drivers build config 00:02:40.450 net/cnxk: not in enabled drivers build config 00:02:40.450 net/cpfl: not in enabled drivers build config 00:02:40.450 net/cxgbe: not in enabled drivers build config 00:02:40.450 net/dpaa: not in enabled drivers build config 00:02:40.450 net/dpaa2: not in enabled drivers build config 00:02:40.450 net/e1000: not in enabled drivers build config 00:02:40.450 net/ena: not in enabled drivers build config 00:02:40.450 net/enetc: not in enabled drivers build config 00:02:40.450 net/enetfec: not in enabled drivers build config 00:02:40.450 net/enic: not in enabled drivers build config 00:02:40.450 net/failsafe: not in enabled drivers build config 00:02:40.450 net/fm10k: not in enabled drivers build config 00:02:40.450 net/gve: not in enabled drivers build config 00:02:40.450 net/hinic: not in enabled drivers build config 00:02:40.450 net/hns3: not in enabled drivers build config 00:02:40.450 net/i40e: not in enabled drivers build config 00:02:40.450 net/iavf: not in enabled drivers build config 00:02:40.450 net/ice: not in enabled drivers build config 00:02:40.450 net/idpf: not in enabled drivers build config 00:02:40.450 net/igc: not in enabled drivers build config 00:02:40.450 net/ionic: not in enabled drivers build config 00:02:40.450 net/ipn3ke: not in enabled drivers build config 00:02:40.450 net/ixgbe: not in enabled drivers build config 00:02:40.450 net/mana: not in enabled drivers build config 00:02:40.450 net/memif: not in enabled drivers build config 00:02:40.450 net/mlx4: not in enabled drivers build config 00:02:40.450 net/mlx5: not in enabled drivers build config 00:02:40.450 net/mvneta: not in enabled drivers build config 00:02:40.450 net/mvpp2: not in enabled drivers build config 00:02:40.450 net/netvsc: not in enabled drivers build config 
00:02:40.450 net/nfb: not in enabled drivers build config 00:02:40.450 net/nfp: not in enabled drivers build config 00:02:40.450 net/ngbe: not in enabled drivers build config 00:02:40.450 net/null: not in enabled drivers build config 00:02:40.450 net/octeontx: not in enabled drivers build config 00:02:40.450 net/octeon_ep: not in enabled drivers build config 00:02:40.450 net/pcap: not in enabled drivers build config 00:02:40.450 net/pfe: not in enabled drivers build config 00:02:40.450 net/qede: not in enabled drivers build config 00:02:40.450 net/ring: not in enabled drivers build config 00:02:40.450 net/sfc: not in enabled drivers build config 00:02:40.450 net/softnic: not in enabled drivers build config 00:02:40.450 net/tap: not in enabled drivers build config 00:02:40.450 net/thunderx: not in enabled drivers build config 00:02:40.450 net/txgbe: not in enabled drivers build config 00:02:40.450 net/vdev_netvsc: not in enabled drivers build config 00:02:40.450 net/vhost: not in enabled drivers build config 00:02:40.450 net/virtio: not in enabled drivers build config 00:02:40.450 net/vmxnet3: not in enabled drivers build config 00:02:40.450 raw/*: missing internal dependency, "rawdev" 00:02:40.450 crypto/armv8: not in enabled drivers build config 00:02:40.450 crypto/bcmfs: not in enabled drivers build config 00:02:40.450 crypto/caam_jr: not in enabled drivers build config 00:02:40.450 crypto/ccp: not in enabled drivers build config 00:02:40.450 crypto/cnxk: not in enabled drivers build config 00:02:40.450 crypto/dpaa_sec: not in enabled drivers build config 00:02:40.450 crypto/dpaa2_sec: not in enabled drivers build config 00:02:40.450 crypto/ipsec_mb: not in enabled drivers build config 00:02:40.450 crypto/mlx5: not in enabled drivers build config 00:02:40.450 crypto/mvsam: not in enabled drivers build config 00:02:40.450 crypto/nitrox: not in enabled drivers build config 00:02:40.450 crypto/null: not in enabled drivers build config 00:02:40.450 crypto/octeontx: 
not in enabled drivers build config 00:02:40.450 crypto/openssl: not in enabled drivers build config 00:02:40.450 crypto/scheduler: not in enabled drivers build config 00:02:40.450 crypto/uadk: not in enabled drivers build config 00:02:40.450 crypto/virtio: not in enabled drivers build config 00:02:40.450 compress/isal: not in enabled drivers build config 00:02:40.450 compress/mlx5: not in enabled drivers build config 00:02:40.450 compress/nitrox: not in enabled drivers build config 00:02:40.450 compress/octeontx: not in enabled drivers build config 00:02:40.450 compress/zlib: not in enabled drivers build config 00:02:40.450 regex/*: missing internal dependency, "regexdev" 00:02:40.450 ml/*: missing internal dependency, "mldev" 00:02:40.450 vdpa/ifc: not in enabled drivers build config 00:02:40.450 vdpa/mlx5: not in enabled drivers build config 00:02:40.450 vdpa/nfp: not in enabled drivers build config 00:02:40.450 vdpa/sfc: not in enabled drivers build config 00:02:40.450 event/*: missing internal dependency, "eventdev" 00:02:40.450 baseband/*: missing internal dependency, "bbdev" 00:02:40.450 gpu/*: missing internal dependency, "gpudev" 00:02:40.450 00:02:40.450 00:02:40.450 Build targets in project: 84 00:02:40.450 00:02:40.450 DPDK 24.03.0 00:02:40.450 00:02:40.450 User defined options 00:02:40.450 buildtype : debug 00:02:40.450 default_library : shared 00:02:40.450 libdir : lib 00:02:40.450 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:02:40.450 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:02:40.450 c_link_args : 00:02:40.450 cpu_instruction_set: native 00:02:40.451 disable_apps : test-sad,graph,test-regex,dumpcap,test-eventdev,test-compress-perf,pdump,test-security-perf,test-pmd,test-flow-perf,test-pipeline,test-crypto-perf,test-gpudev,test-cmdline,test-dma-perf,proc-info,test-bbdev,test-acl,test,test-mldev,test-fib 00:02:40.451 disable_libs : 
sched,port,dispatcher,graph,rawdev,pdcp,bitratestats,ipsec,pcapng,pdump,gso,cfgfile,gpudev,ip_frag,node,distributor,mldev,lpm,acl,bpf,latencystats,eventdev,regexdev,gro,stack,fib,argparse,pipeline,bbdev,table,metrics,member,jobstats,efd,rib 00:02:40.451 enable_docs : false 00:02:40.451 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:02:40.451 enable_kmods : false 00:02:40.451 tests : false 00:02:40.451 00:02:40.451 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:40.451 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 00:02:40.451 [1/267] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:40.451 [2/267] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:40.451 [3/267] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:40.451 [4/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:40.451 [5/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:40.451 [6/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:40.451 [7/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:40.451 [8/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:40.451 [9/267] Linking static target lib/librte_kvargs.a 00:02:40.451 [10/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:40.451 [11/267] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:40.451 [12/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:40.451 [13/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:40.451 [14/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:40.451 [15/267] Linking static target lib/librte_pci.a 00:02:40.451 [16/267] Linking static target lib/librte_log.a 00:02:40.451 [17/267] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 
00:02:40.451 [18/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:40.451 [19/267] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:02:40.451 [20/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:40.451 [21/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:40.451 [22/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:40.451 [23/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:40.451 [24/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:40.451 [25/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:40.451 [26/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:40.716 [27/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:40.716 [28/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:40.716 [29/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:40.716 [30/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:40.716 [31/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:40.716 [32/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:40.716 [33/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:40.716 [34/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:40.716 [35/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:40.716 [36/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:40.716 [37/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:40.716 [38/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:40.716 [39/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 
00:02:40.716 [40/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:40.716 [41/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:40.716 [42/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:40.716 [43/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:40.716 [44/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:40.716 [45/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:40.716 [46/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:40.716 [47/267] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:40.716 [48/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:40.716 [49/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:40.716 [50/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:40.716 [51/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:40.716 [52/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:40.716 [53/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:40.716 [54/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:40.716 [55/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:40.716 [56/267] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:40.716 [57/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:40.716 [58/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:40.716 [59/267] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:40.716 [60/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:40.716 [61/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:40.716 [62/267] Compiling C object 
lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:40.716 [63/267] Linking static target lib/librte_meter.a 00:02:40.716 [64/267] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:40.716 [65/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:40.716 [66/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:40.716 [67/267] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:40.716 [68/267] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:40.716 [69/267] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:40.716 [70/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:40.716 [71/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:40.716 [72/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:40.716 [73/267] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o 00:02:40.716 [74/267] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:40.716 [75/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:40.716 [76/267] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:40.716 [77/267] Linking static target lib/librte_ring.a 00:02:40.716 [78/267] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:40.716 [79/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:40.716 [80/267] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:40.716 [81/267] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:40.716 [82/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:40.716 [83/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:40.716 [84/267] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:40.716 [85/267] Compiling C object 
lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:40.716 [86/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:40.716 [87/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:40.975 [88/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:40.975 [89/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:40.975 [90/267] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:40.975 [91/267] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:40.975 [92/267] Linking static target lib/librte_timer.a 00:02:40.975 [93/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:40.975 [94/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:40.975 [95/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:40.975 [96/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:40.975 [97/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:40.975 [98/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:40.975 [99/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:40.975 [100/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:40.975 [101/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:40.975 [102/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:40.975 [103/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:40.975 [104/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:40.975 [105/267] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:40.975 [106/267] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:40.975 [107/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:40.975 [108/267] Compiling C 
object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:40.975 [109/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:40.975 [110/267] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:40.975 [111/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:40.975 [112/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:40.975 [113/267] Linking static target lib/librte_reorder.a 00:02:40.975 [114/267] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:40.975 [115/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:40.976 [116/267] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:40.976 [117/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:40.976 [118/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:40.976 [119/267] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:40.976 [120/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:40.976 [121/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:40.976 [122/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:40.976 [123/267] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:40.976 [124/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:40.976 [125/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:40.976 [126/267] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:40.976 [127/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:40.976 [128/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:40.976 [129/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:40.976 [130/267] Compiling C object 
lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:40.976 [131/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:40.976 [132/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:40.976 [133/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:40.976 [134/267] Linking static target lib/librte_rcu.a 00:02:40.976 [135/267] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:40.976 [136/267] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:40.976 [137/267] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:40.976 [138/267] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:40.976 [139/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:40.976 [140/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:40.976 [141/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:40.976 [142/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:40.976 [143/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:40.976 [144/267] Linking static target lib/librte_telemetry.a 00:02:41.235 [145/267] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:41.235 [146/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:41.235 [147/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:41.235 [148/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:41.235 [149/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:41.235 [150/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:41.235 [151/267] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:41.235 [152/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 
00:02:41.235 [153/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:41.235 [154/267] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:41.235 [155/267] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:41.235 [156/267] Linking static target lib/librte_mbuf.a 00:02:41.235 [157/267] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:41.235 [158/267] Linking static target drivers/librte_bus_vdev.a 00:02:41.235 [159/267] Linking target lib/librte_log.so.24.1 00:02:41.235 [160/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:41.235 [161/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:41.235 [162/267] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:41.235 [163/267] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:41.235 [164/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:41.235 [165/267] Linking static target lib/librte_mempool.a 00:02:41.235 [166/267] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:41.235 [167/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:41.235 [168/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:41.235 [169/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:41.235 [170/267] Linking static target lib/librte_compressdev.a 00:02:41.235 [171/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:41.235 [172/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:41.235 [173/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:41.235 [174/267] Generating drivers/rte_mempool_ring.pmd.c with a custom command 
00:02:41.235 [175/267] Linking static target lib/librte_cmdline.a 00:02:41.235 [176/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:41.235 [177/267] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:41.235 [178/267] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:41.235 [179/267] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:41.235 [180/267] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:41.235 [181/267] Linking static target lib/librte_eal.a 00:02:41.235 [182/267] Linking static target lib/librte_net.a 00:02:41.235 [183/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:41.236 [184/267] Linking static target lib/librte_power.a 00:02:41.236 [185/267] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:41.236 [186/267] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:41.236 [187/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:41.236 [188/267] Linking static target drivers/librte_mempool_ring.a 00:02:41.236 [189/267] Linking static target lib/librte_dmadev.a 00:02:41.236 [190/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:41.236 [191/267] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:41.236 [192/267] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:41.236 [193/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:41.236 [194/267] Linking static target lib/librte_security.a 00:02:41.236 [195/267] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:41.236 [196/267] Linking target lib/librte_kvargs.so.24.1 00:02:41.236 [197/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:41.236 [198/267] Generating lib/timer.sym_chk with a custom command 
(wrapped by meson to capture output) 00:02:41.496 [199/267] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:41.496 [200/267] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:41.496 [201/267] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:41.496 [202/267] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:41.496 [203/267] Linking static target drivers/librte_bus_pci.a 00:02:41.496 [204/267] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:41.496 [205/267] Linking static target lib/librte_hash.a 00:02:41.496 [206/267] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:41.496 [207/267] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:41.496 [208/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:41.496 [209/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:41.757 [210/267] Linking static target lib/librte_cryptodev.a 00:02:41.757 [211/267] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:41.757 [212/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:41.757 [213/267] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:41.758 [214/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:41.758 [215/267] Linking static target lib/librte_ethdev.a 00:02:41.758 [216/267] Linking target lib/librte_telemetry.so.24.1 00:02:42.018 [217/267] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:42.018 [218/267] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:42.018 [219/267] Generating symbol file 
lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:42.018 [220/267] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:42.018 [221/267] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:42.018 [222/267] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:42.278 [223/267] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:42.278 [224/267] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:42.539 [225/267] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:42.539 [226/267] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:42.800 [227/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:42.800 [228/267] Linking static target lib/librte_vhost.a 00:02:43.744 [229/267] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.131 [230/267] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.725 [231/267] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.109 [232/267] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.109 [233/267] Linking target lib/librte_eal.so.24.1 00:02:53.109 [234/267] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:53.109 [235/267] Linking target lib/librte_dmadev.so.24.1 00:02:53.109 [236/267] Linking target lib/librte_meter.so.24.1 00:02:53.109 [237/267] Linking target lib/librte_ring.so.24.1 00:02:53.109 [238/267] Linking target lib/librte_timer.so.24.1 00:02:53.109 [239/267] Linking target drivers/librte_bus_vdev.so.24.1 00:02:53.109 [240/267] Linking target lib/librte_pci.so.24.1 
00:02:53.374 [241/267] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:53.374 [242/267] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:53.374 [243/267] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:53.374 [244/267] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:53.374 [245/267] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:53.374 [246/267] Linking target drivers/librte_bus_pci.so.24.1 00:02:53.374 [247/267] Linking target lib/librte_rcu.so.24.1 00:02:53.374 [248/267] Linking target lib/librte_mempool.so.24.1 00:02:53.686 [249/267] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:53.686 [250/267] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:53.686 [251/267] Linking target drivers/librte_mempool_ring.so.24.1 00:02:53.687 [252/267] Linking target lib/librte_mbuf.so.24.1 00:02:53.687 [253/267] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:53.687 [254/267] Linking target lib/librte_reorder.so.24.1 00:02:53.687 [255/267] Linking target lib/librte_compressdev.so.24.1 00:02:53.687 [256/267] Linking target lib/librte_net.so.24.1 00:02:53.687 [257/267] Linking target lib/librte_cryptodev.so.24.1 00:02:53.959 [258/267] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:53.959 [259/267] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:53.959 [260/267] Linking target lib/librte_cmdline.so.24.1 00:02:53.959 [261/267] Linking target lib/librte_hash.so.24.1 00:02:53.959 [262/267] Linking target lib/librte_security.so.24.1 00:02:53.959 [263/267] Linking target lib/librte_ethdev.so.24.1 00:02:54.222 [264/267] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:54.222 [265/267] 
Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:54.222 [266/267] Linking target lib/librte_power.so.24.1 00:02:54.222 [267/267] Linking target lib/librte_vhost.so.24.1 00:02:54.222 INFO: autodetecting backend as ninja 00:02:54.222 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 144 00:02:55.166 CC lib/ut_mock/mock.o 00:02:55.166 CC lib/log/log.o 00:02:55.166 CC lib/log/log_flags.o 00:02:55.166 CC lib/log/log_deprecated.o 00:02:55.166 CC lib/ut/ut.o 00:02:55.428 LIB libspdk_ut_mock.a 00:02:55.428 LIB libspdk_log.a 00:02:55.428 LIB libspdk_ut.a 00:02:55.428 SO libspdk_ut_mock.so.6.0 00:02:55.428 SO libspdk_ut.so.2.0 00:02:55.428 SO libspdk_log.so.7.0 00:02:55.428 SYMLINK libspdk_ut_mock.so 00:02:55.689 SYMLINK libspdk_ut.so 00:02:55.689 SYMLINK libspdk_log.so 00:02:55.951 CC lib/util/base64.o 00:02:55.951 CC lib/util/bit_array.o 00:02:55.951 CC lib/util/cpuset.o 00:02:55.951 CC lib/util/crc16.o 00:02:55.951 CC lib/util/crc32.o 00:02:55.951 CC lib/util/crc32c.o 00:02:55.951 CC lib/util/crc32_ieee.o 00:02:55.951 CC lib/util/crc64.o 00:02:55.951 CC lib/util/dif.o 00:02:55.951 CC lib/util/fd.o 00:02:55.951 CC lib/util/file.o 00:02:55.951 CC lib/util/hexlify.o 00:02:55.951 CC lib/util/iov.o 00:02:55.951 CC lib/util/pipe.o 00:02:55.951 CC lib/util/math.o 00:02:55.951 CC lib/ioat/ioat.o 00:02:55.951 CC lib/dma/dma.o 00:02:55.951 CC lib/util/strerror_tls.o 00:02:55.951 CC lib/util/string.o 00:02:55.951 CC lib/util/uuid.o 00:02:55.951 CC lib/util/fd_group.o 00:02:55.951 CXX lib/trace_parser/trace.o 00:02:55.951 CC lib/util/xor.o 00:02:55.951 CC lib/util/zipf.o 00:02:56.212 CC lib/vfio_user/host/vfio_user_pci.o 00:02:56.212 CC lib/vfio_user/host/vfio_user.o 00:02:56.212 LIB libspdk_dma.a 00:02:56.212 SO libspdk_dma.so.4.0 00:02:56.212 LIB libspdk_ioat.a 00:02:56.212 SO libspdk_ioat.so.7.0 00:02:56.212 SYMLINK libspdk_dma.so 00:02:56.212 LIB 
libspdk_vfio_user.a 00:02:56.212 SYMLINK libspdk_ioat.so 00:02:56.473 SO libspdk_vfio_user.so.5.0 00:02:56.473 LIB libspdk_util.a 00:02:56.473 SYMLINK libspdk_vfio_user.so 00:02:56.473 SO libspdk_util.so.9.0 00:02:56.473 SYMLINK libspdk_util.so 00:02:56.734 LIB libspdk_trace_parser.a 00:02:56.734 SO libspdk_trace_parser.so.5.0 00:02:56.996 SYMLINK libspdk_trace_parser.so 00:02:56.996 CC lib/vmd/vmd.o 00:02:56.996 CC lib/vmd/led.o 00:02:56.996 CC lib/json/json_parse.o 00:02:56.996 CC lib/json/json_util.o 00:02:56.996 CC lib/json/json_write.o 00:02:56.996 CC lib/rdma/common.o 00:02:56.996 CC lib/rdma/rdma_verbs.o 00:02:56.996 CC lib/env_dpdk/env.o 00:02:56.996 CC lib/conf/conf.o 00:02:56.996 CC lib/idxd/idxd.o 00:02:56.996 CC lib/env_dpdk/memory.o 00:02:56.996 CC lib/idxd/idxd_user.o 00:02:56.996 CC lib/env_dpdk/pci.o 00:02:56.996 CC lib/idxd/idxd_kernel.o 00:02:56.996 CC lib/env_dpdk/init.o 00:02:56.996 CC lib/env_dpdk/threads.o 00:02:56.996 CC lib/env_dpdk/pci_ioat.o 00:02:56.996 CC lib/env_dpdk/pci_virtio.o 00:02:56.996 CC lib/env_dpdk/pci_vmd.o 00:02:56.996 CC lib/env_dpdk/pci_idxd.o 00:02:56.996 CC lib/env_dpdk/pci_event.o 00:02:56.996 CC lib/env_dpdk/sigbus_handler.o 00:02:56.996 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:56.996 CC lib/env_dpdk/pci_dpdk.o 00:02:56.996 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:57.257 LIB libspdk_conf.a 00:02:57.257 LIB libspdk_rdma.a 00:02:57.257 LIB libspdk_json.a 00:02:57.257 SO libspdk_conf.so.6.0 00:02:57.257 SO libspdk_json.so.6.0 00:02:57.257 SO libspdk_rdma.so.6.0 00:02:57.257 SYMLINK libspdk_conf.so 00:02:57.257 SYMLINK libspdk_json.so 00:02:57.257 SYMLINK libspdk_rdma.so 00:02:57.518 LIB libspdk_idxd.a 00:02:57.518 SO libspdk_idxd.so.12.0 00:02:57.518 LIB libspdk_vmd.a 00:02:57.518 SO libspdk_vmd.so.6.0 00:02:57.518 SYMLINK libspdk_idxd.so 00:02:57.518 SYMLINK libspdk_vmd.so 00:02:57.778 CC lib/jsonrpc/jsonrpc_server.o 00:02:57.778 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:57.778 CC lib/jsonrpc/jsonrpc_client.o 00:02:57.778 CC 
lib/jsonrpc/jsonrpc_client_tcp.o 00:02:58.082 LIB libspdk_jsonrpc.a 00:02:58.082 SO libspdk_jsonrpc.so.6.0 00:02:58.082 SYMLINK libspdk_jsonrpc.so 00:02:58.082 LIB libspdk_env_dpdk.a 00:02:58.343 SO libspdk_env_dpdk.so.14.0 00:02:58.343 SYMLINK libspdk_env_dpdk.so 00:02:58.343 CC lib/rpc/rpc.o 00:02:58.602 LIB libspdk_rpc.a 00:02:58.602 SO libspdk_rpc.so.6.0 00:02:58.863 SYMLINK libspdk_rpc.so 00:02:59.123 CC lib/keyring/keyring.o 00:02:59.123 CC lib/keyring/keyring_rpc.o 00:02:59.123 CC lib/trace/trace.o 00:02:59.123 CC lib/trace/trace_flags.o 00:02:59.123 CC lib/trace/trace_rpc.o 00:02:59.123 CC lib/notify/notify.o 00:02:59.123 CC lib/notify/notify_rpc.o 00:02:59.384 LIB libspdk_notify.a 00:02:59.384 LIB libspdk_keyring.a 00:02:59.384 SO libspdk_notify.so.6.0 00:02:59.384 LIB libspdk_trace.a 00:02:59.384 SO libspdk_keyring.so.1.0 00:02:59.384 SYMLINK libspdk_notify.so 00:02:59.384 SO libspdk_trace.so.10.0 00:02:59.384 SYMLINK libspdk_keyring.so 00:02:59.384 SYMLINK libspdk_trace.so 00:02:59.955 CC lib/sock/sock.o 00:02:59.955 CC lib/sock/sock_rpc.o 00:02:59.955 CC lib/thread/iobuf.o 00:02:59.955 CC lib/thread/thread.o 00:03:00.216 LIB libspdk_sock.a 00:03:00.216 SO libspdk_sock.so.10.0 00:03:00.216 SYMLINK libspdk_sock.so 00:03:00.476 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:00.476 CC lib/nvme/nvme_ctrlr.o 00:03:00.476 CC lib/nvme/nvme_fabric.o 00:03:00.476 CC lib/nvme/nvme_ns_cmd.o 00:03:00.476 CC lib/nvme/nvme_ns.o 00:03:00.476 CC lib/nvme/nvme_pcie_common.o 00:03:00.476 CC lib/nvme/nvme_qpair.o 00:03:00.476 CC lib/nvme/nvme_pcie.o 00:03:00.476 CC lib/nvme/nvme.o 00:03:00.476 CC lib/nvme/nvme_quirks.o 00:03:00.476 CC lib/nvme/nvme_transport.o 00:03:00.476 CC lib/nvme/nvme_discovery.o 00:03:00.476 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:00.476 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:00.476 CC lib/nvme/nvme_tcp.o 00:03:00.476 CC lib/nvme/nvme_opal.o 00:03:00.476 CC lib/nvme/nvme_io_msg.o 00:03:00.476 CC lib/nvme/nvme_poll_group.o 00:03:00.476 CC lib/nvme/nvme_zns.o 
00:03:00.476 CC lib/nvme/nvme_stubs.o 00:03:00.476 CC lib/nvme/nvme_auth.o 00:03:00.476 CC lib/nvme/nvme_cuse.o 00:03:00.737 CC lib/nvme/nvme_vfio_user.o 00:03:00.737 CC lib/nvme/nvme_rdma.o 00:03:00.997 LIB libspdk_thread.a 00:03:00.997 SO libspdk_thread.so.10.0 00:03:01.259 SYMLINK libspdk_thread.so 00:03:01.520 CC lib/accel/accel.o 00:03:01.520 CC lib/accel/accel_rpc.o 00:03:01.520 CC lib/accel/accel_sw.o 00:03:01.520 CC lib/init/json_config.o 00:03:01.520 CC lib/init/subsystem.o 00:03:01.520 CC lib/init/subsystem_rpc.o 00:03:01.520 CC lib/init/rpc.o 00:03:01.520 CC lib/vfu_tgt/tgt_endpoint.o 00:03:01.520 CC lib/vfu_tgt/tgt_rpc.o 00:03:01.520 CC lib/virtio/virtio_vhost_user.o 00:03:01.520 CC lib/virtio/virtio.o 00:03:01.520 CC lib/blob/blobstore.o 00:03:01.520 CC lib/blob/zeroes.o 00:03:01.520 CC lib/virtio/virtio_pci.o 00:03:01.521 CC lib/virtio/virtio_vfio_user.o 00:03:01.521 CC lib/blob/request.o 00:03:01.521 CC lib/blob/blob_bs_dev.o 00:03:01.782 LIB libspdk_init.a 00:03:01.782 SO libspdk_init.so.5.0 00:03:01.782 LIB libspdk_vfu_tgt.a 00:03:01.782 LIB libspdk_virtio.a 00:03:01.782 SYMLINK libspdk_init.so 00:03:01.782 SO libspdk_vfu_tgt.so.3.0 00:03:01.782 SO libspdk_virtio.so.7.0 00:03:02.043 SYMLINK libspdk_vfu_tgt.so 00:03:02.043 SYMLINK libspdk_virtio.so 00:03:02.305 CC lib/event/app.o 00:03:02.305 CC lib/event/reactor.o 00:03:02.305 CC lib/event/log_rpc.o 00:03:02.305 CC lib/event/app_rpc.o 00:03:02.305 CC lib/event/scheduler_static.o 00:03:02.305 LIB libspdk_accel.a 00:03:02.305 SO libspdk_accel.so.15.0 00:03:02.305 LIB libspdk_nvme.a 00:03:02.567 SYMLINK libspdk_accel.so 00:03:02.567 SO libspdk_nvme.so.13.0 00:03:02.567 LIB libspdk_event.a 00:03:02.567 SO libspdk_event.so.13.1 00:03:02.828 SYMLINK libspdk_event.so 00:03:02.828 CC lib/bdev/bdev.o 00:03:02.828 CC lib/bdev/bdev_rpc.o 00:03:02.828 CC lib/bdev/part.o 00:03:02.828 CC lib/bdev/bdev_zone.o 00:03:02.828 CC lib/bdev/scsi_nvme.o 00:03:02.828 SYMLINK libspdk_nvme.so 00:03:03.771 LIB libspdk_blob.a 
00:03:03.771 SO libspdk_blob.so.11.0 00:03:04.033 SYMLINK libspdk_blob.so 00:03:04.294 CC lib/lvol/lvol.o 00:03:04.294 CC lib/blobfs/blobfs.o 00:03:04.294 CC lib/blobfs/tree.o 00:03:04.866 LIB libspdk_bdev.a 00:03:05.127 LIB libspdk_blobfs.a 00:03:05.127 SO libspdk_bdev.so.15.0 00:03:05.127 SO libspdk_blobfs.so.10.0 00:03:05.127 LIB libspdk_lvol.a 00:03:05.127 SYMLINK libspdk_blobfs.so 00:03:05.127 SO libspdk_lvol.so.10.0 00:03:05.127 SYMLINK libspdk_bdev.so 00:03:05.127 SYMLINK libspdk_lvol.so 00:03:05.389 CC lib/ublk/ublk_rpc.o 00:03:05.389 CC lib/ublk/ublk.o 00:03:05.389 CC lib/scsi/dev.o 00:03:05.389 CC lib/scsi/lun.o 00:03:05.389 CC lib/nbd/nbd.o 00:03:05.389 CC lib/scsi/port.o 00:03:05.389 CC lib/nvmf/ctrlr.o 00:03:05.389 CC lib/nbd/nbd_rpc.o 00:03:05.389 CC lib/scsi/scsi.o 00:03:05.389 CC lib/ftl/ftl_core.o 00:03:05.389 CC lib/nvmf/ctrlr_discovery.o 00:03:05.389 CC lib/ftl/ftl_init.o 00:03:05.389 CC lib/scsi/scsi_bdev.o 00:03:05.389 CC lib/nvmf/ctrlr_bdev.o 00:03:05.389 CC lib/scsi/scsi_pr.o 00:03:05.389 CC lib/ftl/ftl_layout.o 00:03:05.649 CC lib/nvmf/subsystem.o 00:03:05.649 CC lib/nvmf/nvmf.o 00:03:05.649 CC lib/scsi/scsi_rpc.o 00:03:05.649 CC lib/ftl/ftl_debug.o 00:03:05.649 CC lib/scsi/task.o 00:03:05.649 CC lib/ftl/ftl_io.o 00:03:05.649 CC lib/nvmf/nvmf_rpc.o 00:03:05.649 CC lib/ftl/ftl_sb.o 00:03:05.649 CC lib/nvmf/transport.o 00:03:05.649 CC lib/ftl/ftl_l2p.o 00:03:05.649 CC lib/ftl/ftl_l2p_flat.o 00:03:05.649 CC lib/nvmf/tcp.o 00:03:05.649 CC lib/nvmf/stubs.o 00:03:05.649 CC lib/ftl/ftl_nv_cache.o 00:03:05.649 CC lib/nvmf/mdns_server.o 00:03:05.649 CC lib/ftl/ftl_band.o 00:03:05.649 CC lib/nvmf/vfio_user.o 00:03:05.649 CC lib/nvmf/auth.o 00:03:05.649 CC lib/ftl/ftl_band_ops.o 00:03:05.649 CC lib/nvmf/rdma.o 00:03:05.649 CC lib/ftl/ftl_writer.o 00:03:05.649 CC lib/ftl/ftl_rq.o 00:03:05.649 CC lib/ftl/ftl_reloc.o 00:03:05.649 CC lib/ftl/ftl_l2p_cache.o 00:03:05.649 CC lib/ftl/ftl_p2l.o 00:03:05.649 CC lib/ftl/mngt/ftl_mngt.o 00:03:05.649 CC 
lib/ftl/mngt/ftl_mngt_bdev.o 00:03:05.649 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:05.649 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:05.649 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:05.649 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:05.649 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:05.649 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:05.649 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:05.649 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:05.649 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:05.649 CC lib/ftl/utils/ftl_conf.o 00:03:05.649 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:05.649 CC lib/ftl/utils/ftl_md.o 00:03:05.649 CC lib/ftl/utils/ftl_property.o 00:03:05.649 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:05.649 CC lib/ftl/utils/ftl_bitmap.o 00:03:05.649 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:05.649 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:05.649 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:05.653 CC lib/ftl/utils/ftl_mempool.o 00:03:05.653 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:05.653 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:05.653 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:05.653 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:05.653 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:05.653 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:05.653 CC lib/ftl/base/ftl_base_dev.o 00:03:05.653 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:05.653 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:05.653 CC lib/ftl/ftl_trace.o 00:03:05.653 CC lib/ftl/base/ftl_base_bdev.o 00:03:06.224 LIB libspdk_nbd.a 00:03:06.225 SO libspdk_nbd.so.7.0 00:03:06.225 LIB libspdk_scsi.a 00:03:06.225 LIB libspdk_ublk.a 00:03:06.225 SYMLINK libspdk_nbd.so 00:03:06.225 SO libspdk_ublk.so.3.0 00:03:06.225 SO libspdk_scsi.so.9.0 00:03:06.225 SYMLINK libspdk_ublk.so 00:03:06.225 SYMLINK libspdk_scsi.so 00:03:06.485 LIB libspdk_ftl.a 00:03:06.745 CC lib/iscsi/init_grp.o 00:03:06.745 CC lib/iscsi/conn.o 00:03:06.745 CC lib/vhost/vhost.o 00:03:06.745 CC lib/iscsi/iscsi.o 00:03:06.745 CC lib/iscsi/md5.o 00:03:06.745 CC lib/vhost/vhost_rpc.o 00:03:06.745 CC lib/iscsi/param.o 
00:03:06.745 CC lib/vhost/vhost_scsi.o 00:03:06.745 CC lib/iscsi/portal_grp.o 00:03:06.745 CC lib/iscsi/iscsi_subsystem.o 00:03:06.745 CC lib/vhost/vhost_blk.o 00:03:06.745 CC lib/iscsi/tgt_node.o 00:03:06.745 CC lib/iscsi/iscsi_rpc.o 00:03:06.745 CC lib/vhost/rte_vhost_user.o 00:03:06.745 CC lib/iscsi/task.o 00:03:06.745 SO libspdk_ftl.so.9.0 00:03:07.007 SYMLINK libspdk_ftl.so 00:03:07.269 LIB libspdk_nvmf.a 00:03:07.269 SO libspdk_nvmf.so.19.0 00:03:07.564 SYMLINK libspdk_nvmf.so 00:03:07.564 LIB libspdk_vhost.a 00:03:07.564 SO libspdk_vhost.so.8.0 00:03:07.830 SYMLINK libspdk_vhost.so 00:03:07.830 LIB libspdk_iscsi.a 00:03:07.830 SO libspdk_iscsi.so.8.0 00:03:08.090 SYMLINK libspdk_iscsi.so 00:03:08.662 CC module/env_dpdk/env_dpdk_rpc.o 00:03:08.662 CC module/vfu_device/vfu_virtio.o 00:03:08.662 CC module/vfu_device/vfu_virtio_blk.o 00:03:08.662 CC module/vfu_device/vfu_virtio_scsi.o 00:03:08.662 CC module/vfu_device/vfu_virtio_rpc.o 00:03:08.662 CC module/accel/iaa/accel_iaa.o 00:03:08.662 CC module/accel/iaa/accel_iaa_rpc.o 00:03:08.662 CC module/accel/ioat/accel_ioat.o 00:03:08.662 CC module/accel/ioat/accel_ioat_rpc.o 00:03:08.662 LIB libspdk_env_dpdk_rpc.a 00:03:08.662 CC module/keyring/file/keyring.o 00:03:08.662 CC module/blob/bdev/blob_bdev.o 00:03:08.662 CC module/keyring/file/keyring_rpc.o 00:03:08.662 CC module/keyring/linux/keyring.o 00:03:08.662 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:08.662 CC module/keyring/linux/keyring_rpc.o 00:03:08.662 CC module/scheduler/gscheduler/gscheduler.o 00:03:08.662 CC module/accel/error/accel_error.o 00:03:08.662 CC module/accel/error/accel_error_rpc.o 00:03:08.662 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:08.662 CC module/sock/posix/posix.o 00:03:08.662 CC module/accel/dsa/accel_dsa.o 00:03:08.662 CC module/accel/dsa/accel_dsa_rpc.o 00:03:08.662 SO libspdk_env_dpdk_rpc.so.6.0 00:03:08.923 SYMLINK libspdk_env_dpdk_rpc.so 00:03:08.923 LIB libspdk_keyring_linux.a 00:03:08.923 LIB 
libspdk_scheduler_gscheduler.a 00:03:08.923 LIB libspdk_keyring_file.a 00:03:08.923 LIB libspdk_scheduler_dpdk_governor.a 00:03:08.923 SO libspdk_scheduler_gscheduler.so.4.0 00:03:08.923 LIB libspdk_accel_ioat.a 00:03:08.923 SO libspdk_keyring_linux.so.1.0 00:03:08.923 LIB libspdk_accel_iaa.a 00:03:08.923 LIB libspdk_accel_error.a 00:03:08.923 SO libspdk_keyring_file.so.1.0 00:03:08.923 SO libspdk_scheduler_dpdk_governor.so.4.0 00:03:08.923 LIB libspdk_scheduler_dynamic.a 00:03:08.923 SO libspdk_accel_ioat.so.6.0 00:03:08.923 SO libspdk_accel_error.so.2.0 00:03:08.923 SO libspdk_accel_iaa.so.3.0 00:03:08.923 SYMLINK libspdk_scheduler_gscheduler.so 00:03:08.923 LIB libspdk_blob_bdev.a 00:03:08.923 SYMLINK libspdk_keyring_linux.so 00:03:08.923 SO libspdk_scheduler_dynamic.so.4.0 00:03:09.185 LIB libspdk_accel_dsa.a 00:03:09.185 SYMLINK libspdk_keyring_file.so 00:03:09.185 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:09.185 SO libspdk_blob_bdev.so.11.0 00:03:09.185 SYMLINK libspdk_accel_error.so 00:03:09.185 SYMLINK libspdk_accel_ioat.so 00:03:09.185 SYMLINK libspdk_accel_iaa.so 00:03:09.185 SO libspdk_accel_dsa.so.5.0 00:03:09.185 SYMLINK libspdk_scheduler_dynamic.so 00:03:09.185 SYMLINK libspdk_blob_bdev.so 00:03:09.185 SYMLINK libspdk_accel_dsa.so 00:03:09.185 LIB libspdk_vfu_device.a 00:03:09.185 SO libspdk_vfu_device.so.3.0 00:03:09.185 SYMLINK libspdk_vfu_device.so 00:03:09.447 LIB libspdk_sock_posix.a 00:03:09.447 SO libspdk_sock_posix.so.6.0 00:03:09.709 SYMLINK libspdk_sock_posix.so 00:03:09.709 CC module/bdev/lvol/vbdev_lvol.o 00:03:09.709 CC module/bdev/aio/bdev_aio.o 00:03:09.709 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:09.709 CC module/bdev/aio/bdev_aio_rpc.o 00:03:09.709 CC module/bdev/malloc/bdev_malloc.o 00:03:09.709 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:09.709 CC module/bdev/error/vbdev_error.o 00:03:09.709 CC module/bdev/passthru/vbdev_passthru.o 00:03:09.709 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:09.709 CC 
module/bdev/error/vbdev_error_rpc.o 00:03:09.709 CC module/bdev/raid/bdev_raid.o 00:03:09.709 CC module/bdev/null/bdev_null.o 00:03:09.709 CC module/bdev/raid/bdev_raid_rpc.o 00:03:09.709 CC module/bdev/iscsi/bdev_iscsi.o 00:03:09.709 CC module/bdev/raid/bdev_raid_sb.o 00:03:09.709 CC module/bdev/delay/vbdev_delay.o 00:03:09.709 CC module/bdev/ftl/bdev_ftl.o 00:03:09.709 CC module/bdev/split/vbdev_split.o 00:03:09.709 CC module/bdev/null/bdev_null_rpc.o 00:03:09.709 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:09.709 CC module/bdev/raid/raid0.o 00:03:09.709 CC module/bdev/gpt/gpt.o 00:03:09.709 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:09.709 CC module/bdev/split/vbdev_split_rpc.o 00:03:09.709 CC module/bdev/raid/raid1.o 00:03:09.709 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:09.709 CC module/bdev/gpt/vbdev_gpt.o 00:03:09.709 CC module/bdev/nvme/bdev_nvme.o 00:03:09.709 CC module/bdev/raid/concat.o 00:03:09.709 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:09.709 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:09.709 CC module/bdev/nvme/nvme_rpc.o 00:03:09.709 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:09.709 CC module/bdev/nvme/bdev_mdns_client.o 00:03:09.709 CC module/blobfs/bdev/blobfs_bdev.o 00:03:09.709 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:09.709 CC module/bdev/nvme/vbdev_opal.o 00:03:09.709 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:09.709 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:09.709 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:09.709 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:09.709 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:09.970 LIB libspdk_blobfs_bdev.a 00:03:09.970 LIB libspdk_bdev_zone_block.a 00:03:09.970 LIB libspdk_bdev_split.a 00:03:09.970 LIB libspdk_bdev_null.a 00:03:09.970 SO libspdk_blobfs_bdev.so.6.0 00:03:09.970 LIB libspdk_bdev_error.a 00:03:09.970 SO libspdk_bdev_zone_block.so.6.0 00:03:09.970 LIB libspdk_bdev_passthru.a 00:03:09.970 LIB libspdk_bdev_aio.a 00:03:09.970 SO libspdk_bdev_split.so.6.0 
00:03:09.970 SO libspdk_bdev_null.so.6.0 00:03:09.970 LIB libspdk_bdev_gpt.a 00:03:09.970 SO libspdk_bdev_error.so.6.0 00:03:09.970 LIB libspdk_bdev_malloc.a 00:03:09.970 SYMLINK libspdk_blobfs_bdev.so 00:03:09.970 LIB libspdk_bdev_iscsi.a 00:03:09.970 LIB libspdk_bdev_ftl.a 00:03:09.970 SO libspdk_bdev_passthru.so.6.0 00:03:09.970 LIB libspdk_bdev_delay.a 00:03:09.970 SYMLINK libspdk_bdev_zone_block.so 00:03:09.970 SO libspdk_bdev_aio.so.6.0 00:03:09.970 SO libspdk_bdev_malloc.so.6.0 00:03:09.970 SO libspdk_bdev_gpt.so.6.0 00:03:09.970 SYMLINK libspdk_bdev_null.so 00:03:09.970 SO libspdk_bdev_iscsi.so.6.0 00:03:10.232 SYMLINK libspdk_bdev_split.so 00:03:10.232 SO libspdk_bdev_ftl.so.6.0 00:03:10.232 SO libspdk_bdev_delay.so.6.0 00:03:10.232 SYMLINK libspdk_bdev_error.so 00:03:10.232 SYMLINK libspdk_bdev_passthru.so 00:03:10.232 SYMLINK libspdk_bdev_aio.so 00:03:10.232 SYMLINK libspdk_bdev_gpt.so 00:03:10.232 SYMLINK libspdk_bdev_malloc.so 00:03:10.232 SYMLINK libspdk_bdev_iscsi.so 00:03:10.232 SYMLINK libspdk_bdev_ftl.so 00:03:10.232 LIB libspdk_bdev_lvol.a 00:03:10.232 SYMLINK libspdk_bdev_delay.so 00:03:10.232 SO libspdk_bdev_lvol.so.6.0 00:03:10.232 LIB libspdk_bdev_virtio.a 00:03:10.232 SO libspdk_bdev_virtio.so.6.0 00:03:10.232 SYMLINK libspdk_bdev_lvol.so 00:03:10.494 SYMLINK libspdk_bdev_virtio.so 00:03:10.494 LIB libspdk_bdev_raid.a 00:03:10.755 SO libspdk_bdev_raid.so.6.0 00:03:10.755 SYMLINK libspdk_bdev_raid.so 00:03:11.699 LIB libspdk_bdev_nvme.a 00:03:11.699 SO libspdk_bdev_nvme.so.7.0 00:03:11.699 SYMLINK libspdk_bdev_nvme.so 00:03:12.642 CC module/event/subsystems/sock/sock.o 00:03:12.642 CC module/event/subsystems/scheduler/scheduler.o 00:03:12.642 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:03:12.642 CC module/event/subsystems/iobuf/iobuf.o 00:03:12.642 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:12.642 CC module/event/subsystems/vmd/vmd.o 00:03:12.642 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:12.642 CC 
module/event/subsystems/vmd/vmd_rpc.o 00:03:12.642 CC module/event/subsystems/keyring/keyring.o 00:03:12.642 LIB libspdk_event_vfu_tgt.a 00:03:12.642 LIB libspdk_event_sock.a 00:03:12.642 LIB libspdk_event_keyring.a 00:03:12.642 LIB libspdk_event_scheduler.a 00:03:12.642 LIB libspdk_event_vhost_blk.a 00:03:12.642 SO libspdk_event_vfu_tgt.so.3.0 00:03:12.642 LIB libspdk_event_iobuf.a 00:03:12.642 LIB libspdk_event_vmd.a 00:03:12.642 SO libspdk_event_sock.so.5.0 00:03:12.642 SO libspdk_event_keyring.so.1.0 00:03:12.642 SO libspdk_event_scheduler.so.4.0 00:03:12.642 SO libspdk_event_vhost_blk.so.3.0 00:03:12.642 SO libspdk_event_iobuf.so.3.0 00:03:12.642 SO libspdk_event_vmd.so.6.0 00:03:12.642 SYMLINK libspdk_event_vfu_tgt.so 00:03:12.642 SYMLINK libspdk_event_sock.so 00:03:12.642 SYMLINK libspdk_event_scheduler.so 00:03:12.642 SYMLINK libspdk_event_keyring.so 00:03:12.642 SYMLINK libspdk_event_vhost_blk.so 00:03:12.642 SYMLINK libspdk_event_iobuf.so 00:03:12.642 SYMLINK libspdk_event_vmd.so 00:03:13.214 CC module/event/subsystems/accel/accel.o 00:03:13.215 LIB libspdk_event_accel.a 00:03:13.215 SO libspdk_event_accel.so.6.0 00:03:13.475 SYMLINK libspdk_event_accel.so 00:03:13.736 CC module/event/subsystems/bdev/bdev.o 00:03:13.997 LIB libspdk_event_bdev.a 00:03:13.997 SO libspdk_event_bdev.so.6.0 00:03:13.997 SYMLINK libspdk_event_bdev.so 00:03:14.258 CC module/event/subsystems/ublk/ublk.o 00:03:14.258 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:14.258 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:14.258 CC module/event/subsystems/scsi/scsi.o 00:03:14.258 CC module/event/subsystems/nbd/nbd.o 00:03:14.520 LIB libspdk_event_nbd.a 00:03:14.520 LIB libspdk_event_ublk.a 00:03:14.520 LIB libspdk_event_scsi.a 00:03:14.520 SO libspdk_event_nbd.so.6.0 00:03:14.520 SO libspdk_event_ublk.so.3.0 00:03:14.520 SO libspdk_event_scsi.so.6.0 00:03:14.520 LIB libspdk_event_nvmf.a 00:03:14.520 SYMLINK libspdk_event_ublk.so 00:03:14.520 SYMLINK libspdk_event_nbd.so 00:03:14.520 
SYMLINK libspdk_event_scsi.so 00:03:14.520 SO libspdk_event_nvmf.so.6.0 00:03:14.781 SYMLINK libspdk_event_nvmf.so 00:03:15.043 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:15.043 CC module/event/subsystems/iscsi/iscsi.o 00:03:15.043 LIB libspdk_event_vhost_scsi.a 00:03:15.043 LIB libspdk_event_iscsi.a 00:03:15.043 SO libspdk_event_vhost_scsi.so.3.0 00:03:15.304 SO libspdk_event_iscsi.so.6.0 00:03:15.304 SYMLINK libspdk_event_vhost_scsi.so 00:03:15.304 SYMLINK libspdk_event_iscsi.so 00:03:15.304 SO libspdk.so.6.0 00:03:15.564 SYMLINK libspdk.so 00:03:15.829 CXX app/trace/trace.o 00:03:15.829 CC app/spdk_nvme_perf/perf.o 00:03:15.829 CC app/trace_record/trace_record.o 00:03:15.829 CC app/spdk_nvme_discover/discovery_aer.o 00:03:15.829 CC app/spdk_lspci/spdk_lspci.o 00:03:15.829 CC test/rpc_client/rpc_client_test.o 00:03:15.829 TEST_HEADER include/spdk/accel.h 00:03:15.829 TEST_HEADER include/spdk/barrier.h 00:03:15.829 TEST_HEADER include/spdk/base64.h 00:03:15.829 TEST_HEADER include/spdk/assert.h 00:03:15.829 TEST_HEADER include/spdk/bdev_module.h 00:03:15.829 TEST_HEADER include/spdk/bdev.h 00:03:15.829 TEST_HEADER include/spdk/bit_array.h 00:03:15.829 TEST_HEADER include/spdk/bdev_zone.h 00:03:15.829 TEST_HEADER include/spdk/bit_pool.h 00:03:15.829 CC app/spdk_nvme_identify/identify.o 00:03:15.829 TEST_HEADER include/spdk/blob_bdev.h 00:03:15.829 TEST_HEADER include/spdk/accel_module.h 00:03:15.829 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:15.829 TEST_HEADER include/spdk/blob.h 00:03:15.829 TEST_HEADER include/spdk/conf.h 00:03:15.829 TEST_HEADER include/spdk/config.h 00:03:15.829 TEST_HEADER include/spdk/crc16.h 00:03:15.829 TEST_HEADER include/spdk/cpuset.h 00:03:15.829 TEST_HEADER include/spdk/blobfs.h 00:03:15.829 TEST_HEADER include/spdk/crc64.h 00:03:15.829 TEST_HEADER include/spdk/dif.h 00:03:15.829 TEST_HEADER include/spdk/dma.h 00:03:15.829 TEST_HEADER include/spdk/env_dpdk.h 00:03:15.829 TEST_HEADER include/spdk/env.h 00:03:15.829 
TEST_HEADER include/spdk/endian.h 00:03:15.829 TEST_HEADER include/spdk/crc32.h 00:03:15.829 TEST_HEADER include/spdk/event.h 00:03:15.829 CC app/spdk_dd/spdk_dd.o 00:03:15.829 TEST_HEADER include/spdk/fd.h 00:03:15.829 TEST_HEADER include/spdk/ftl.h 00:03:15.829 TEST_HEADER include/spdk/gpt_spec.h 00:03:15.829 TEST_HEADER include/spdk/hexlify.h 00:03:15.829 TEST_HEADER include/spdk/histogram_data.h 00:03:15.829 CC app/spdk_top/spdk_top.o 00:03:15.829 TEST_HEADER include/spdk/idxd.h 00:03:15.829 TEST_HEADER include/spdk/idxd_spec.h 00:03:15.829 TEST_HEADER include/spdk/init.h 00:03:15.829 CC app/iscsi_tgt/iscsi_tgt.o 00:03:15.829 TEST_HEADER include/spdk/ioat.h 00:03:15.829 TEST_HEADER include/spdk/ioat_spec.h 00:03:15.829 TEST_HEADER include/spdk/fd_group.h 00:03:15.829 TEST_HEADER include/spdk/iscsi_spec.h 00:03:15.829 TEST_HEADER include/spdk/json.h 00:03:15.829 TEST_HEADER include/spdk/jsonrpc.h 00:03:15.829 TEST_HEADER include/spdk/keyring.h 00:03:15.829 TEST_HEADER include/spdk/keyring_module.h 00:03:15.829 TEST_HEADER include/spdk/file.h 00:03:15.830 TEST_HEADER include/spdk/log.h 00:03:15.830 TEST_HEADER include/spdk/likely.h 00:03:15.830 TEST_HEADER include/spdk/memory.h 00:03:15.830 TEST_HEADER include/spdk/mmio.h 00:03:15.830 TEST_HEADER include/spdk/lvol.h 00:03:15.830 TEST_HEADER include/spdk/nbd.h 00:03:15.830 CC app/nvmf_tgt/nvmf_main.o 00:03:15.830 TEST_HEADER include/spdk/nvme.h 00:03:15.830 CC app/vhost/vhost.o 00:03:15.830 TEST_HEADER include/spdk/notify.h 00:03:15.830 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:15.830 TEST_HEADER include/spdk/nvme_spec.h 00:03:15.830 CC app/spdk_tgt/spdk_tgt.o 00:03:15.830 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:15.830 TEST_HEADER include/spdk/nvme_zns.h 00:03:15.830 TEST_HEADER include/spdk/nvmf.h 00:03:15.830 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:15.830 TEST_HEADER include/spdk/opal.h 00:03:15.830 TEST_HEADER include/spdk/nvme_intel.h 00:03:15.830 TEST_HEADER include/spdk/nvme_ocssd_spec.h 
00:03:15.830 TEST_HEADER include/spdk/opal_spec.h
00:03:15.830 TEST_HEADER include/spdk/pipe.h
00:03:15.830 TEST_HEADER include/spdk/pci_ids.h
00:03:15.830 TEST_HEADER include/spdk/reduce.h
00:03:15.830 TEST_HEADER include/spdk/queue.h
00:03:15.830 TEST_HEADER include/spdk/rpc.h
00:03:15.830 TEST_HEADER include/spdk/scheduler.h
00:03:15.830 TEST_HEADER include/spdk/nvmf_transport.h
00:03:15.830 TEST_HEADER include/spdk/scsi.h
00:03:15.830 TEST_HEADER include/spdk/stdinc.h
00:03:15.830 TEST_HEADER include/spdk/scsi_spec.h
00:03:15.830 TEST_HEADER include/spdk/sock.h
00:03:15.830 TEST_HEADER include/spdk/nvmf_spec.h
00:03:15.830 TEST_HEADER include/spdk/thread.h
00:03:15.830 TEST_HEADER include/spdk/trace.h
00:03:15.830 TEST_HEADER include/spdk/trace_parser.h
00:03:15.830 TEST_HEADER include/spdk/ublk.h
00:03:15.830 TEST_HEADER include/spdk/util.h
00:03:15.830 TEST_HEADER include/spdk/vfio_user_pci.h
00:03:15.830 TEST_HEADER include/spdk/version.h
00:03:15.830 TEST_HEADER include/spdk/string.h
00:03:15.830 CC examples/interrupt_tgt/interrupt_tgt.o
00:03:15.830 TEST_HEADER include/spdk/vfio_user_spec.h
00:03:15.830 TEST_HEADER include/spdk/vmd.h
00:03:15.830 TEST_HEADER include/spdk/vhost.h
00:03:15.830 TEST_HEADER include/spdk/tree.h
00:03:15.830 TEST_HEADER include/spdk/zipf.h
00:03:15.830 TEST_HEADER include/spdk/uuid.h
00:03:15.830 TEST_HEADER include/spdk/xor.h
00:03:15.830 CXX test/cpp_headers/assert.o
00:03:15.830 CXX test/cpp_headers/accel_module.o
00:03:15.830 CXX test/cpp_headers/base64.o
00:03:15.830 CXX test/cpp_headers/bdev.o
00:03:15.830 CXX test/cpp_headers/bdev_module.o
00:03:15.830 CXX test/cpp_headers/bit_array.o
00:03:15.830 CXX test/cpp_headers/bit_pool.o
00:03:15.830 CXX test/cpp_headers/barrier.o
00:03:15.830 CXX test/cpp_headers/accel.o
00:03:15.830 CXX test/cpp_headers/blob_bdev.o
00:03:15.830 CXX test/cpp_headers/blobfs_bdev.o
00:03:15.830 CXX test/cpp_headers/bdev_zone.o
00:03:15.830 CXX test/cpp_headers/blobfs.o
00:03:15.830 CXX
test/cpp_headers/config.o
00:03:15.830 CXX test/cpp_headers/cpuset.o
00:03:16.121 CXX test/cpp_headers/crc16.o
00:03:16.121 CXX test/cpp_headers/blob.o
00:03:16.121 CXX test/cpp_headers/conf.o
00:03:16.121 CXX test/cpp_headers/env_dpdk.o
00:03:16.121 CXX test/cpp_headers/crc32.o
00:03:16.121 CXX test/cpp_headers/crc64.o
00:03:16.121 CXX test/cpp_headers/event.o
00:03:16.121 CXX test/cpp_headers/dif.o
00:03:16.121 CXX test/cpp_headers/fd_group.o
00:03:16.121 CXX test/cpp_headers/dma.o
00:03:16.121 CXX test/cpp_headers/env.o
00:03:16.121 CXX test/cpp_headers/endian.o
00:03:16.121 CXX test/cpp_headers/ftl.o
00:03:16.121 CXX test/cpp_headers/hexlify.o
00:03:16.121 CXX test/cpp_headers/fd.o
00:03:16.121 CXX test/cpp_headers/idxd.o
00:03:16.121 CXX test/cpp_headers/gpt_spec.o
00:03:16.121 CXX test/cpp_headers/file.o
00:03:16.121 CXX test/cpp_headers/idxd_spec.o
00:03:16.121 CXX test/cpp_headers/ioat.o
00:03:16.121 CXX test/cpp_headers/ioat_spec.o
00:03:16.121 CXX test/cpp_headers/histogram_data.o
00:03:16.121 CXX test/cpp_headers/init.o
00:03:16.121 CXX test/cpp_headers/jsonrpc.o
00:03:16.121 CXX test/cpp_headers/keyring.o
00:03:16.121 CXX test/cpp_headers/iscsi_spec.o
00:03:16.121 CXX test/cpp_headers/json.o
00:03:16.121 CXX test/cpp_headers/memory.o
00:03:16.121 CXX test/cpp_headers/likely.o
00:03:16.121 CXX test/cpp_headers/keyring_module.o
00:03:16.121 CXX test/cpp_headers/log.o
00:03:16.121 CXX test/cpp_headers/notify.o
00:03:16.121 CC test/event/app_repeat/app_repeat.o
00:03:16.121 CXX test/cpp_headers/lvol.o
00:03:16.121 CXX test/cpp_headers/nvme.o
00:03:16.121 CXX test/cpp_headers/mmio.o
00:03:16.121 CC test/env/memory/memory_ut.o
00:03:16.121 CXX test/cpp_headers/nvme_ocssd_spec.o
00:03:16.121 CXX test/cpp_headers/nbd.o
00:03:16.121 CC test/app/jsoncat/jsoncat.o
00:03:16.121 CXX test/cpp_headers/nvme_spec.o
00:03:16.121 CXX test/cpp_headers/nvme_zns.o
00:03:16.121 CXX test/cpp_headers/nvmf_cmd.o
00:03:16.121 CXX test/cpp_headers/nvmf.o
00:03:16.121 CXX
test/cpp_headers/nvmf_spec.o
00:03:16.121 CXX test/cpp_headers/nvmf_fc_spec.o
00:03:16.121 CXX test/cpp_headers/nvme_intel.o
00:03:16.121 CC examples/sock/hello_world/hello_sock.o
00:03:16.121 CXX test/cpp_headers/nvmf_transport.o
00:03:16.121 CXX test/cpp_headers/opal.o
00:03:16.121 CXX test/cpp_headers/opal_spec.o
00:03:16.121 CXX test/cpp_headers/nvme_ocssd.o
00:03:16.121 CXX test/cpp_headers/pipe.o
00:03:16.121 CC test/event/event_perf/event_perf.o
00:03:16.121 CXX test/cpp_headers/queue.o
00:03:16.121 CXX test/cpp_headers/pci_ids.o
00:03:16.121 CC examples/nvme/arbitration/arbitration.o
00:03:16.121 CXX test/cpp_headers/scheduler.o
00:03:16.121 CXX test/cpp_headers/reduce.o
00:03:16.121 CXX test/cpp_headers/rpc.o
00:03:16.121 CC examples/vmd/lsvmd/lsvmd.o
00:03:16.121 CC examples/nvme/cmb_copy/cmb_copy.o
00:03:16.121 CC examples/blob/hello_world/hello_blob.o
00:03:16.121 CC examples/nvme/hello_world/hello_world.o
00:03:16.121 CC examples/nvmf/nvmf/nvmf.o
00:03:16.121 CC test/event/reactor_perf/reactor_perf.o
00:03:16.121 CC examples/nvme/abort/abort.o
00:03:16.121 CC test/app/stub/stub.o
00:03:16.121 CC test/env/pci/pci_ut.o
00:03:16.121 LINK spdk_lspci
00:03:16.121 CC examples/idxd/perf/perf.o
00:03:16.121 CC test/nvme/overhead/overhead.o
00:03:16.121 CC examples/ioat/perf/perf.o
00:03:16.121 CC test/nvme/fused_ordering/fused_ordering.o
00:03:16.121 CC test/env/vtophys/vtophys.o
00:03:16.121 CC examples/nvme/nvme_manage/nvme_manage.o
00:03:16.121 CC examples/nvme/reconnect/reconnect.o
00:03:16.122 CC test/nvme/connect_stress/connect_stress.o
00:03:16.401 LINK spdk_trace_record
00:03:16.401 CC test/app/histogram_perf/histogram_perf.o
00:03:16.401 CC examples/accel/perf/accel_perf.o
00:03:16.401 CC test/nvme/sgl/sgl.o
00:03:16.401 CC test/app/bdev_svc/bdev_svc.o
00:03:16.401 CC examples/bdev/bdevperf/bdevperf.o
00:03:16.401 CC test/nvme/fdp/fdp.o
00:03:16.401 CC test/event/reactor/reactor.o
00:03:16.401 CC test/nvme/aer/aer.o
00:03:16.401 LINK
spdk_nvme_discover
00:03:16.401 CC examples/nvme/hotplug/hotplug.o
00:03:16.401 CC test/nvme/simple_copy/simple_copy.o
00:03:16.401 CC test/thread/poller_perf/poller_perf.o
00:03:16.401 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o
00:03:16.401 LINK spdk_tgt
00:03:16.401 CC test/nvme/startup/startup.o
00:03:16.401 CC test/bdev/bdevio/bdevio.o
00:03:16.401 LINK nvmf_tgt
00:03:16.401 LINK rpc_client_test
00:03:16.401 CC test/nvme/doorbell_aers/doorbell_aers.o
00:03:16.401 CC examples/thread/thread/thread_ex.o
00:03:16.401 CC app/fio/nvme/fio_plugin.o
00:03:16.401 LINK app_repeat
00:03:16.401 LINK jsoncat
00:03:16.401 CC examples/ioat/verify/verify.o
00:03:16.401 CXX test/cpp_headers/scsi.o
00:03:16.401 CC test/blobfs/mkfs/mkfs.o
00:03:16.401 CXX test/cpp_headers/scsi_spec.o
00:03:16.401 CC examples/util/zipf/zipf.o
00:03:16.401 CC test/nvme/boot_partition/boot_partition.o
00:03:16.401 CXX test/cpp_headers/sock.o
00:03:16.401 CC examples/blob/cli/blobcli.o
00:03:16.401 CXX test/cpp_headers/stdinc.o
00:03:16.401 CC test/dma/test_dma/test_dma.o
00:03:16.401 CXX test/cpp_headers/string.o
00:03:16.664 CC examples/vmd/led/led.o
00:03:16.664 CXX test/cpp_headers/thread.o
00:03:16.664 CC test/nvme/cuse/cuse.o
00:03:16.664 LINK iscsi_tgt
00:03:16.664 CC test/nvme/e2edp/nvme_dp.o
00:03:16.664 CC examples/util/tls_psk/tls_psk_print.o
00:03:16.664 CXX test/cpp_headers/trace.o
00:03:16.664 CXX test/cpp_headers/trace_parser.o
00:03:16.664 CC examples/nvme/pmr_persistence/pmr_persistence.o
00:03:16.664 CXX test/cpp_headers/tree.o
00:03:16.664 LINK interrupt_tgt
00:03:16.664 CXX test/cpp_headers/ublk.o
00:03:16.664 CXX test/cpp_headers/util.o
00:03:16.664 LINK cmb_copy
00:03:16.664 CXX test/cpp_headers/uuid.o
00:03:16.664 CXX test/cpp_headers/version.o
00:03:16.664 CXX test/cpp_headers/vfio_user_pci.o
00:03:16.664 LINK reactor_perf
00:03:16.664 CC test/nvme/reset/reset.o
00:03:16.664 CC test/nvme/reserve/reserve.o
00:03:16.664 LINK vhost
00:03:16.664 CC
app/fio/bdev/fio_plugin.o
00:03:16.664 CXX test/cpp_headers/vfio_user_spec.o
00:03:16.664 CC test/nvme/err_injection/err_injection.o
00:03:16.664 CXX test/cpp_headers/vhost.o
00:03:16.664 CXX test/cpp_headers/xor.o
00:03:16.664 CXX test/cpp_headers/vmd.o
00:03:16.664 CXX test/cpp_headers/zipf.o
00:03:16.664 LINK vtophys
00:03:16.664 LINK hello_sock
00:03:16.664 CC test/nvme/compliance/nvme_compliance.o
00:03:16.664 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o
00:03:16.664 LINK connect_stress
00:03:16.664 LINK spdk_trace
00:03:16.664 LINK fused_ordering
00:03:16.664 LINK poller_perf
00:03:16.664 CC test/event/scheduler/scheduler.o
00:03:16.664 CC test/accel/dif/dif.o
00:03:16.664 LINK ioat_perf
00:03:16.664 CC test/env/mem_callbacks/mem_callbacks.o
00:03:16.664 LINK startup
00:03:16.664 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o
00:03:16.664 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o
00:03:16.664 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o
00:03:16.664 CC examples/bdev/hello_world/hello_bdev.o
00:03:16.664 CC test/lvol/esnap/esnap.o
00:03:16.664 LINK nvmf
00:03:16.664 LINK overhead
00:03:16.923 LINK boot_partition
00:03:16.923 LINK reconnect
00:03:16.923 LINK abort
00:03:16.923 LINK mkfs
00:03:16.923 LINK aer
00:03:16.923 LINK thread
00:03:16.923 LINK pmr_persistence
00:03:16.923 LINK reserve
00:03:16.923 LINK err_injection
00:03:16.923 LINK pci_ut
00:03:16.923 LINK nvme_compliance
00:03:16.923 LINK nvme_dp
00:03:16.923 LINK spdk_nvme_perf
00:03:16.923 LINK bdevio
00:03:16.923 LINK tls_psk_print
00:03:17.183 LINK env_dpdk_post_init
00:03:17.183 LINK nvme_manage
00:03:17.183 LINK lsvmd
00:03:17.183 LINK event_perf
00:03:17.183 LINK spdk_nvme_identify
00:03:17.183 LINK scheduler
00:03:17.183 LINK test_dma
00:03:17.183 LINK reactor
00:03:17.183 LINK histogram_perf
00:03:17.183 LINK led
00:03:17.183 LINK hello_bdev
00:03:17.183 LINK nvme_fuzz
00:03:17.183 LINK stub
00:03:17.183 LINK zipf
00:03:17.183 LINK spdk_top
00:03:17.183 LINK bdev_svc
00:03:17.183 LINK blobcli
00:03:17.183
LINK doorbell_aers
00:03:17.183 LINK hotplug
00:03:17.183 LINK dif
00:03:17.183 LINK hello_blob
00:03:17.183 LINK hello_world
00:03:17.183 LINK verify
00:03:17.183 LINK spdk_bdev
00:03:17.183 LINK idxd_perf
00:03:17.183 LINK simple_copy
00:03:17.183 LINK sgl
00:03:17.183 LINK vhost_fuzz
00:03:17.444 LINK bdevperf
00:03:17.444 LINK reset
00:03:17.444 LINK spdk_dd
00:03:17.444 LINK arbitration
00:03:17.444 LINK mem_callbacks
00:03:17.444 LINK fdp
00:03:17.444 LINK spdk_nvme
00:03:17.444 LINK accel_perf
00:03:17.705 LINK memory_ut
00:03:18.278 LINK cuse
00:03:18.278 LINK iscsi_fuzz
00:03:21.585 LINK esnap
00:03:21.585
00:03:21.585 real 0m50.358s
00:03:21.585 user 6m35.655s
00:03:21.585 sys 4m45.623s
00:03:21.585 16:11:48 make -- common/autotest_common.sh@1125 -- $ xtrace_disable
00:03:21.585 16:11:48 make -- common/autotest_common.sh@10 -- $ set +x
00:03:21.585 ************************************
00:03:21.585 END TEST make
00:03:21.585 ************************************
00:03:21.585 16:11:48 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources
00:03:21.586 16:11:48 -- pm/common@29 -- $ signal_monitor_resources TERM
00:03:21.586 16:11:48 -- pm/common@40 -- $ local monitor pid pids signal=TERM
00:03:21.586 16:11:48 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:03:21.586 16:11:48 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]]
00:03:21.586 16:11:48 -- pm/common@44 -- $ pid=2761068
00:03:21.586 16:11:48 -- pm/common@50 -- $ kill -TERM 2761068
00:03:21.586 16:11:48 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:03:21.586 16:11:48 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]]
00:03:21.586 16:11:48 -- pm/common@44 -- $ pid=2761069
00:03:21.586 16:11:48 -- pm/common@50 -- $ kill -TERM 2761069
00:03:21.586 16:11:48 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:03:21.586 16:11:48 --
pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]]
00:03:21.586 16:11:48 -- pm/common@44 -- $ pid=2761071
00:03:21.586 16:11:48 -- pm/common@50 -- $ kill -TERM 2761071
00:03:21.586 16:11:48 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:03:21.586 16:11:48 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]]
00:03:21.586 16:11:48 -- pm/common@44 -- $ pid=2761088
00:03:21.586 16:11:48 -- pm/common@50 -- $ sudo -E kill -TERM 2761088
00:03:21.586 16:11:48 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:03:21.586 16:11:48 -- nvmf/common.sh@7 -- # uname -s
00:03:21.586 16:11:48 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:03:21.586 16:11:48 -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:03:21.586 16:11:48 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:03:21.586 16:11:48 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:03:21.586 16:11:48 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:03:21.586 16:11:48 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:03:21.586 16:11:48 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:03:21.586 16:11:48 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:03:21.586 16:11:48 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:03:21.586 16:11:48 -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:03:21.586 16:11:48 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:03:21.586 16:11:48 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be
00:03:21.586 16:11:48 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:03:21.586 16:11:48 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:03:21.586 16:11:48 -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:03:21.586 16:11:48 -- nvmf/common.sh@22 -- #
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:03:21.586 16:11:48 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:03:21.586 16:11:48 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:03:21.586 16:11:48 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:03:21.586 16:11:48 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:03:21.586 16:11:48 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:03:21.586 16:11:48 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:03:21.586 16:11:48 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:03:21.586 16:11:48 -- paths/export.sh@5 -- # export PATH
00:03:21.586 16:11:48 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:03:21.586 16:11:48 -- nvmf/common.sh@47 -- # : 0
00:03:21.586 16:11:48 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:03:21.586 16:11:48 -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:03:21.586 16:11:48 -- nvmf/common.sh@25 -- # '['
0 -eq 1 ']'
00:03:21.586 16:11:48 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:03:21.586 16:11:48 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:03:21.586 16:11:48 -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:03:21.586 16:11:48 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:03:21.586 16:11:48 -- nvmf/common.sh@51 -- # have_pci_nics=0
00:03:21.586 16:11:48 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']'
00:03:21.586 16:11:48 -- spdk/autotest.sh@32 -- # uname -s
00:03:21.586 16:11:48 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']'
00:03:21.586 16:11:48 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h'
00:03:21.586 16:11:48 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps
00:03:21.586 16:11:48 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t'
00:03:21.586 16:11:48 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps
00:03:21.586 16:11:48 -- spdk/autotest.sh@44 -- # modprobe nbd
00:03:21.586 16:11:48 -- spdk/autotest.sh@46 -- # type -P udevadm
00:03:21.586 16:11:48 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm
00:03:21.586 16:11:48 -- spdk/autotest.sh@48 -- # udevadm_pid=2823826
00:03:21.586 16:11:48 -- spdk/autotest.sh@53 -- # start_monitor_resources
00:03:21.586 16:11:48 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property
00:03:21.586 16:11:48 -- pm/common@17 -- # local monitor
00:03:21.586 16:11:48 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}"
00:03:21.586 16:11:48 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}"
00:03:21.586 16:11:48 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}"
00:03:21.586 16:11:48 -- pm/common@21 -- # date +%s
00:03:21.586 16:11:48 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}"
00:03:21.586 16:11:48 --
pm/common@21 -- # date +%s
00:03:21.586 16:11:48 -- pm/common@25 -- # sleep 1
00:03:21.586 16:11:48 -- pm/common@21 -- # date +%s
00:03:21.586 16:11:48 -- pm/common@21 -- # date +%s
00:03:21.586 16:11:48 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1717769508
00:03:21.586 16:11:48 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1717769508
00:03:21.586 16:11:48 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1717769508
00:03:21.586 16:11:48 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1717769508
Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1717769508_collect-vmstat.pm.log
Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1717769508_collect-cpu-load.pm.log
Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1717769508_collect-cpu-temp.pm.log
Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1717769508_collect-bmc-pm.bmc.pm.log
00:03:22.529 16:11:49 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT
00:03:22.529 16:11:49 -- spdk/autotest.sh@57 -- # timing_enter autotest
00:03:22.529 16:11:49 -- common/autotest_common.sh@723 -- # xtrace_disable
00:03:22.529 16:11:49 --
common/autotest_common.sh@10 -- # set +x
00:03:22.529 16:11:49 -- spdk/autotest.sh@59 -- # create_test_list
00:03:22.529 16:11:49 -- common/autotest_common.sh@747 -- # xtrace_disable
00:03:22.529 16:11:49 -- common/autotest_common.sh@10 -- # set +x
00:03:22.791 16:11:49 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh
00:03:22.791 16:11:49 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:03:22.791 16:11:49 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:03:22.791 16:11:49 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
00:03:22.791 16:11:49 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:03:22.791 16:11:49 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod
00:03:22.791 16:11:49 -- common/autotest_common.sh@1454 -- # uname
00:03:22.791 16:11:49 -- common/autotest_common.sh@1454 -- # '[' Linux = FreeBSD ']'
00:03:22.791 16:11:49 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf
00:03:22.791 16:11:49 -- common/autotest_common.sh@1474 -- # uname
00:03:22.791 16:11:49 -- common/autotest_common.sh@1474 -- # [[ Linux = FreeBSD ]]
00:03:22.791 16:11:49 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk
00:03:22.791 16:11:49 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc
00:03:22.791 16:11:49 -- spdk/autotest.sh@72 -- # hash lcov
00:03:22.791 16:11:49 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]]
00:03:22.791 16:11:49 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS=
00:03:22.791 --rc lcov_branch_coverage=1
00:03:22.791 --rc lcov_function_coverage=1
00:03:22.791 --rc genhtml_branch_coverage=1
00:03:22.791 --rc genhtml_function_coverage=1
00:03:22.791 --rc genhtml_legend=1
00:03:22.791 --rc geninfo_all_blocks=1
00:03:22.791 '
00:03:22.791 16:11:49 -- spdk/autotest.sh@80 -- # LCOV_OPTS='
00:03:22.791 --rc lcov_branch_coverage=1
00:03:22.791 --rc
lcov_function_coverage=1
00:03:22.791 --rc genhtml_branch_coverage=1
00:03:22.791 --rc genhtml_function_coverage=1
00:03:22.791 --rc genhtml_legend=1
00:03:22.791 --rc geninfo_all_blocks=1
00:03:22.792 '
00:03:22.792 16:11:49 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov
00:03:22.792 --rc lcov_branch_coverage=1
00:03:22.792 --rc lcov_function_coverage=1
00:03:22.792 --rc genhtml_branch_coverage=1
00:03:22.792 --rc genhtml_function_coverage=1
00:03:22.792 --rc genhtml_legend=1
00:03:22.792 --rc geninfo_all_blocks=1
00:03:22.792 --no-external'
00:03:22.792 16:11:49 -- spdk/autotest.sh@81 -- # LCOV='lcov
00:03:22.792 --rc lcov_branch_coverage=1
00:03:22.792 --rc lcov_function_coverage=1
00:03:22.792 --rc genhtml_branch_coverage=1
00:03:22.792 --rc genhtml_function_coverage=1
00:03:22.792 --rc genhtml_legend=1
00:03:22.792 --rc geninfo_all_blocks=1
00:03:22.792 --no-external'
00:03:22.792 16:11:49 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v
00:03:22.792 lcov: LCOV version 1.14
00:03:22.792 16:11:49 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info
00:03:32.847 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found
00:03:32.847 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno
00:03:50.973 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno:no functions found
00:03:50.973 geninfo: WARNING: GCOV did not produce any data for
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno
00:03:50.973 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno:no functions found
00:03:50.973 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno
00:03:50.973 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found
00:03:50.973 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno
00:03:50.973 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno:no functions found
00:03:50.973 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno
00:03:50.973 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno:no functions found
00:03:50.973 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno
00:03:50.973 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno:no functions found
00:03:50.973 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno
00:03:50.973 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found
00:03:50.973 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno
00:03:50.973 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno:no functions found
00:03:50.973 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno
00:03:50.973 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno:no
functions found
00:03:50.973 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno
00:03:50.973 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno:no functions found
00:03:50.973 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno
00:03:50.973 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno:no functions found
00:03:50.973 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno
00:03:50.973 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno:no functions found
00:03:50.973 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno
00:03:50.973 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno:no functions found
00:03:50.973 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno
00:03:50.973 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found
00:03:50.973 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno
00:03:50.973 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno:no functions found
00:03:50.973 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno
00:03:50.973 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno:no functions found
00:03:50.973 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno
00:03:50.973
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno:no functions found
00:03:50.973 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno
00:03:50.973 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno:no functions found
00:03:50.973 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno
00:03:50.973 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno:no functions found
00:03:50.973 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno
00:03:50.973 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno:no functions found
00:03:50.973 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno
00:03:50.973 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno:no functions found
00:03:50.973 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno
00:03:50.973 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno:no functions found
00:03:50.973 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno
00:03:50.973 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno:no functions found
00:03:50.973 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno
00:03:50.973 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno:no functions found
00:03:50.973 geninfo: WARNING: GCOV did not produce any data for
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno
00:03:50.973 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno:no functions found
00:03:50.973 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno
00:03:50.973 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno:no functions found
00:03:50.973 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno
00:03:50.973 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno:no functions found
00:03:50.973 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno
00:03:50.973 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno:no functions found
00:03:50.973 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno
00:03:50.973 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno:no functions found
00:03:50.973 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno
00:03:50.973 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno:no functions found
00:03:50.973 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno
00:03:50.973 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno:no functions found
00:03:50.973 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno
00:03:50.973 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno:no functions found
00:03:50.973
geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno
00:03:50.973 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno:no functions found
00:03:50.973 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno
00:03:50.973 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno:no functions found
00:03:50.973 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno
00:03:50.973 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno:no functions found
00:03:50.973 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno
00:03:50.973 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno:no functions found
00:03:50.973 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno
00:03:50.973 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno:no functions found
00:03:50.973 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno
00:03:50.973 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno:no functions found
00:03:50.973 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno
00:03:50.973 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno:no functions found
00:03:50.974 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno
00:03:50.974 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno:no
functions found 00:03:50.974 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno 00:03:50.974 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno:no functions found 00:03:50.974 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno 00:03:50.974 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:03:50.974 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:03:50.974 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno:no functions found 00:03:50.974 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno 00:03:50.974 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:03:50.974 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno 00:03:50.974 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:03:50.974 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno 00:03:50.974 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:03:50.974 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:03:50.974 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:03:50.974 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno 
00:03:50.974 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno:no functions found 00:03:50.974 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno 00:03:50.974 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno:no functions found 00:03:50.974 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno 00:03:50.974 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:03:50.974 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno 00:03:50.974 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:03:50.974 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno 00:03:50.974 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:03:50.974 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno 00:03:50.974 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno:no functions found 00:03:50.974 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno 00:03:50.974 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:03:50.974 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno 00:03:50.974 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno:no functions found 00:03:50.974 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno 00:03:50.974 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno:no functions found 00:03:50.974 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno 00:03:50.974 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:03:50.974 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno 00:03:50.974 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:03:50.974 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno 00:03:50.974 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:03:50.974 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno 00:03:50.974 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:03:50.974 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno 00:03:50.974 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:03:50.974 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno 00:03:50.974 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno:no functions found 00:03:50.974 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno 00:03:50.974 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno:no functions found 00:03:50.974 
geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno 00:03:50.974 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:03:50.974 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno 00:03:50.974 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno:no functions found 00:03:50.974 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno 00:03:50.974 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno:no functions found 00:03:50.974 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno 00:03:50.974 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:03:50.974 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno 00:03:50.974 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno:no functions found 00:03:50.974 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno 00:03:50.974 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:03:50.974 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno 00:03:50.974 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:03:50.974 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno 00:03:50.974 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno:no functions found 00:03:50.974 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno 00:03:50.974 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:03:50.974 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno 00:03:50.974 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno:no functions found 00:03:50.974 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno 00:03:50.974 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno:no functions found 00:03:50.974 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno 00:03:50.974 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno:no functions found 00:03:50.974 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno 00:03:50.974 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:03:50.974 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno 00:03:50.974 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno:no functions found 00:03:50.974 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno 00:03:50.974 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno:no functions found 00:03:50.974 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno 00:03:50.974 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno:no functions found 00:03:50.974 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno 00:03:50.974 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:03:50.974 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno 00:03:50.974 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno:no functions found 00:03:50.974 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno 00:03:50.974 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno:no functions found 00:03:50.974 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno 00:03:50.974 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno:no functions found 00:03:50.974 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno 00:03:50.974 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno:no functions found 00:03:50.974 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno 00:03:50.974 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:03:50.974 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno 00:03:50.974 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno:no functions found 00:03:50.974 
geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno 00:03:50.974 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno:no functions found 00:03:50.974 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno 00:03:50.974 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno:no functions found 00:03:50.974 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno 00:03:50.974 16:12:17 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:03:50.974 16:12:17 -- common/autotest_common.sh@723 -- # xtrace_disable 00:03:50.974 16:12:17 -- common/autotest_common.sh@10 -- # set +x 00:03:50.974 16:12:17 -- spdk/autotest.sh@91 -- # rm -f 00:03:50.974 16:12:17 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:54.279 0000:80:01.6 (8086 0b00): Already using the ioatdma driver 00:03:54.279 0000:80:01.7 (8086 0b00): Already using the ioatdma driver 00:03:54.279 0000:80:01.4 (8086 0b00): Already using the ioatdma driver 00:03:54.279 0000:80:01.5 (8086 0b00): Already using the ioatdma driver 00:03:54.279 0000:80:01.2 (8086 0b00): Already using the ioatdma driver 00:03:54.279 0000:80:01.3 (8086 0b00): Already using the ioatdma driver 00:03:54.279 0000:80:01.0 (8086 0b00): Already using the ioatdma driver 00:03:54.279 0000:80:01.1 (8086 0b00): Already using the ioatdma driver 00:03:54.279 0000:65:00.0 (144d a80a): Already using the nvme driver 00:03:54.279 0000:00:01.6 (8086 0b00): Already using the ioatdma driver 00:03:54.279 0000:00:01.7 (8086 0b00): Already using the ioatdma driver 00:03:54.279 0000:00:01.4 (8086 0b00): Already using the ioatdma driver 00:03:54.279 0000:00:01.5 (8086 0b00): Already using the ioatdma driver 00:03:54.279 0000:00:01.2 (8086 0b00): 
Already using the ioatdma driver 00:03:54.279 0000:00:01.3 (8086 0b00): Already using the ioatdma driver 00:03:54.279 0000:00:01.0 (8086 0b00): Already using the ioatdma driver 00:03:54.279 0000:00:01.1 (8086 0b00): Already using the ioatdma driver 00:03:54.540 16:12:21 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:03:54.540 16:12:21 -- common/autotest_common.sh@1668 -- # zoned_devs=() 00:03:54.540 16:12:21 -- common/autotest_common.sh@1668 -- # local -gA zoned_devs 00:03:54.540 16:12:21 -- common/autotest_common.sh@1669 -- # local nvme bdf 00:03:54.540 16:12:21 -- common/autotest_common.sh@1671 -- # for nvme in /sys/block/nvme* 00:03:54.540 16:12:21 -- common/autotest_common.sh@1672 -- # is_block_zoned nvme0n1 00:03:54.540 16:12:21 -- common/autotest_common.sh@1661 -- # local device=nvme0n1 00:03:54.540 16:12:21 -- common/autotest_common.sh@1663 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:54.540 16:12:21 -- common/autotest_common.sh@1664 -- # [[ none != none ]] 00:03:54.540 16:12:21 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:03:54.540 16:12:21 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:03:54.540 16:12:21 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:03:54.540 16:12:21 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:03:54.540 16:12:21 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:03:54.540 16:12:21 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:54.540 No valid GPT data, bailing 00:03:54.540 16:12:21 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:54.540 16:12:21 -- scripts/common.sh@391 -- # pt= 00:03:54.540 16:12:21 -- scripts/common.sh@392 -- # return 1 00:03:54.540 16:12:21 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:54.540 1+0 records in 00:03:54.540 1+0 records out 00:03:54.540 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00147315 s, 712 MB/s 00:03:54.540 16:12:21 -- 
spdk/autotest.sh@118 -- # sync 00:03:54.540 16:12:21 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:54.540 16:12:21 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:54.540 16:12:21 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:02.732 16:12:28 -- spdk/autotest.sh@124 -- # uname -s 00:04:02.732 16:12:28 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:04:02.732 16:12:28 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:04:02.732 16:12:28 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:04:02.732 16:12:28 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:04:02.732 16:12:28 -- common/autotest_common.sh@10 -- # set +x 00:04:02.732 ************************************ 00:04:02.732 START TEST setup.sh 00:04:02.732 ************************************ 00:04:02.733 16:12:28 setup.sh -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:04:02.733 * Looking for test storage... 
00:04:02.733 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:02.733 16:12:28 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:04:02.733 16:12:28 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:04:02.733 16:12:28 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:04:02.733 16:12:28 setup.sh -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:04:02.733 16:12:28 setup.sh -- common/autotest_common.sh@1106 -- # xtrace_disable 00:04:02.733 16:12:28 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:02.733 ************************************ 00:04:02.733 START TEST acl 00:04:02.733 ************************************ 00:04:02.733 16:12:29 setup.sh.acl -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:04:02.733 * Looking for test storage... 00:04:02.733 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:02.733 16:12:29 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:04:02.733 16:12:29 setup.sh.acl -- common/autotest_common.sh@1668 -- # zoned_devs=() 00:04:02.733 16:12:29 setup.sh.acl -- common/autotest_common.sh@1668 -- # local -gA zoned_devs 00:04:02.733 16:12:29 setup.sh.acl -- common/autotest_common.sh@1669 -- # local nvme bdf 00:04:02.733 16:12:29 setup.sh.acl -- common/autotest_common.sh@1671 -- # for nvme in /sys/block/nvme* 00:04:02.733 16:12:29 setup.sh.acl -- common/autotest_common.sh@1672 -- # is_block_zoned nvme0n1 00:04:02.733 16:12:29 setup.sh.acl -- common/autotest_common.sh@1661 -- # local device=nvme0n1 00:04:02.733 16:12:29 setup.sh.acl -- common/autotest_common.sh@1663 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:02.733 16:12:29 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ none != none ]] 00:04:02.733 16:12:29 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:04:02.733 16:12:29 
setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:04:02.733 16:12:29 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:04:02.733 16:12:29 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:04:02.733 16:12:29 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:04:02.733 16:12:29 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:02.733 16:12:29 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:06.939 16:12:33 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:04:06.939 16:12:33 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:04:06.939 16:12:33 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:04:06.939 16:12:33 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:06.939 16:12:33 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:04:06.939 16:12:33 setup.sh.acl -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:04:09.559 Hugepages 00:04:09.559 node hugesize free / total 00:04:09.559 16:12:36 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:04:09.559 16:12:36 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:09.559 16:12:36 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:09.559 16:12:36 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:04:09.559 16:12:36 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:09.559 16:12:36 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:09.559 16:12:36 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:04:09.559 16:12:36 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:09.559 16:12:36 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:09.559 00:04:09.559 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:09.559 16:12:36 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:04:09.559 
16:12:36 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:09.559 16:12:36 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:09.559 16:12:36 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.0 == *:*:*.* ]] 00:04:09.559 16:12:36 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:09.559 16:12:36 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:09.559 16:12:36 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:09.559 16:12:36 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.1 == *:*:*.* ]] 00:04:09.559 16:12:36 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:09.559 16:12:36 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:09.559 16:12:36 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:09.559 16:12:36 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.2 == *:*:*.* ]] 00:04:09.559 16:12:36 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:09.559 16:12:36 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:09.559 16:12:36 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:09.559 16:12:36 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.3 == *:*:*.* ]] 00:04:09.559 16:12:36 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:09.559 16:12:36 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:09.559 16:12:36 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:09.559 16:12:36 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.4 == *:*:*.* ]] 00:04:09.559 16:12:36 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:09.559 16:12:36 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:09.559 16:12:36 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:09.559 16:12:36 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.5 == *:*:*.* ]] 00:04:09.559 16:12:36 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:09.559 16:12:36 setup.sh.acl -- setup/acl.sh@20 -- 
# continue 00:04:09.559 16:12:36 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:09.559 16:12:36 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.6 == *:*:*.* ]] 00:04:09.559 16:12:36 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:09.559 16:12:36 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:09.559 16:12:36 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:09.559 16:12:36 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.7 == *:*:*.* ]] 00:04:09.559 16:12:36 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:09.559 16:12:36 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:09.559 16:12:36 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:09.559 16:12:36 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:65:00.0 == *:*:*.* ]] 00:04:09.559 16:12:36 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:04:09.559 16:12:36 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\6\5\:\0\0\.\0* ]] 00:04:09.559 16:12:36 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:04:09.559 16:12:36 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:04:09.559 16:12:36 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:09.559 16:12:36 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.0 == *:*:*.* ]] 00:04:09.559 16:12:36 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:09.559 16:12:36 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:09.559 16:12:36 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:09.559 16:12:36 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.1 == *:*:*.* ]] 00:04:09.559 16:12:36 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:09.559 16:12:36 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:09.559 16:12:36 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:09.559 16:12:36 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.2 == *:*:*.* ]] 
00:04:09.559 16:12:36 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:09.559 16:12:36 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:09.559 16:12:36 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:09.559 16:12:36 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.3 == *:*:*.* ]] 00:04:09.559 16:12:36 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:09.559 16:12:36 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:09.559 16:12:36 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:09.559 16:12:36 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.4 == *:*:*.* ]] 00:04:09.559 16:12:36 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:09.559 16:12:36 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:09.559 16:12:36 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:09.559 16:12:36 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.5 == *:*:*.* ]] 00:04:09.559 16:12:36 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:09.559 16:12:36 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:09.559 16:12:36 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:09.559 16:12:36 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.6 == *:*:*.* ]] 00:04:09.559 16:12:36 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:09.559 16:12:36 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:09.559 16:12:36 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:09.559 16:12:36 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.7 == *:*:*.* ]] 00:04:09.559 16:12:36 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:09.559 16:12:36 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:09.559 16:12:36 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:09.559 16:12:36 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:04:09.559 16:12:36 setup.sh.acl -- setup/acl.sh@54 -- # 
run_test denied denied 00:04:09.559 16:12:36 setup.sh.acl -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:04:09.559 16:12:36 setup.sh.acl -- common/autotest_common.sh@1106 -- # xtrace_disable 00:04:09.559 16:12:36 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:09.559 ************************************ 00:04:09.559 START TEST denied 00:04:09.559 ************************************ 00:04:09.559 16:12:36 setup.sh.acl.denied -- common/autotest_common.sh@1124 -- # denied 00:04:09.559 16:12:36 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:65:00.0' 00:04:09.559 16:12:36 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:04:09.559 16:12:36 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:65:00.0' 00:04:09.559 16:12:36 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:04:09.559 16:12:36 setup.sh.acl.denied -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:13.767 0000:65:00.0 (144d a80a): Skipping denied controller at 0000:65:00.0 00:04:13.767 16:12:40 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:65:00.0 00:04:13.767 16:12:40 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:04:13.767 16:12:40 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:04:13.767 16:12:40 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:65:00.0 ]] 00:04:13.767 16:12:40 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:65:00.0/driver 00:04:13.767 16:12:40 setup.sh.acl.denied -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:04:13.767 16:12:40 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:04:13.767 16:12:40 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:04:13.767 16:12:40 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:13.767 16:12:40 setup.sh.acl.denied -- 
setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:19.055 00:04:19.055 real 0m8.622s 00:04:19.055 user 0m2.960s 00:04:19.055 sys 0m4.949s 00:04:19.055 16:12:44 setup.sh.acl.denied -- common/autotest_common.sh@1125 -- # xtrace_disable 00:04:19.055 16:12:44 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:04:19.055 ************************************ 00:04:19.055 END TEST denied 00:04:19.055 ************************************ 00:04:19.055 16:12:44 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:04:19.055 16:12:44 setup.sh.acl -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:04:19.055 16:12:44 setup.sh.acl -- common/autotest_common.sh@1106 -- # xtrace_disable 00:04:19.055 16:12:44 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:19.055 ************************************ 00:04:19.055 START TEST allowed 00:04:19.055 ************************************ 00:04:19.055 16:12:44 setup.sh.acl.allowed -- common/autotest_common.sh@1124 -- # allowed 00:04:19.055 16:12:44 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:65:00.0 .*: nvme -> .*' 00:04:19.055 16:12:44 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:65:00.0 00:04:19.055 16:12:44 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:04:19.055 16:12:44 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:04:19.055 16:12:44 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:24.345 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:04:24.345 16:12:50 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 00:04:24.345 16:12:50 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:04:24.345 16:12:50 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:04:24.345 16:12:50 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:24.345 16:12:50 
setup.sh.acl.allowed -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:27.645 00:04:27.645 real 0m9.384s 00:04:27.645 user 0m2.632s 00:04:27.645 sys 0m4.927s 00:04:27.645 16:12:54 setup.sh.acl.allowed -- common/autotest_common.sh@1125 -- # xtrace_disable 00:04:27.645 16:12:54 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:04:27.645 ************************************ 00:04:27.645 END TEST allowed 00:04:27.645 ************************************ 00:04:27.645 00:04:27.645 real 0m25.379s 00:04:27.645 user 0m8.282s 00:04:27.645 sys 0m14.733s 00:04:27.645 16:12:54 setup.sh.acl -- common/autotest_common.sh@1125 -- # xtrace_disable 00:04:27.645 16:12:54 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:27.645 ************************************ 00:04:27.645 END TEST acl 00:04:27.645 ************************************ 00:04:27.645 16:12:54 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:04:27.645 16:12:54 setup.sh -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:04:27.645 16:12:54 setup.sh -- common/autotest_common.sh@1106 -- # xtrace_disable 00:04:27.645 16:12:54 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:27.645 ************************************ 00:04:27.645 START TEST hugepages 00:04:27.645 ************************************ 00:04:27.645 16:12:54 setup.sh.hugepages -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:04:27.910 * Looking for test storage... 
00:04:27.910 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:27.910 16:12:54 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:04:27.910 16:12:54 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:04:27.910 16:12:54 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:04:27.910 16:12:54 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:04:27.910 16:12:54 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:04:27.910 16:12:54 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:04:27.910 16:12:54 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:04:27.910 16:12:54 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:04:27.910 16:12:54 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:04:27.910 16:12:54 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:04:27.910 16:12:54 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:27.910 16:12:54 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:27.910 16:12:54 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:27.910 16:12:54 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:04:27.910 16:12:54 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:27.910 16:12:54 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:27.910 16:12:54 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:27.910 16:12:54 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 103266276 kB' 'MemAvailable: 106518224 kB' 'Buffers: 2704 kB' 'Cached: 14337012 kB' 'SwapCached: 0 kB' 'Active: 11365692 kB' 'Inactive: 3514408 kB' 'Active(anon): 10954048 kB' 'Inactive(anon): 0 kB' 'Active(file): 411644 kB' 'Inactive(file): 3514408 kB' 
'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 543784 kB' 'Mapped: 195936 kB' 'Shmem: 10413664 kB' 'KReclaimable: 300456 kB' 'Slab: 1138068 kB' 'SReclaimable: 300456 kB' 'SUnreclaim: 837612 kB' 'KernelStack: 27088 kB' 'PageTables: 8576 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69460876 kB' 'Committed_AS: 12408088 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235284 kB' 'VmallocChunk: 0 kB' 'Percpu: 111168 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 4150644 kB' 'DirectMap2M: 29083648 kB' 'DirectMap1G: 102760448 kB' 00:04:27.910 16:12:54 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:27.910 16:12:54 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:27.910 16:12:54 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:27.910 16:12:54 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:27.910 16:12:54 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:27.910 16:12:54 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:27.910 16:12:54 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:27.910 16:12:54 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:27.910 16:12:54 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:27.910 16:12:54 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:27.910 16:12:54 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:27.910 16:12:54 
setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:27.910 16:12:54 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:27.910 16:12:54 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:27.910 16:12:54 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:27.910 16:12:54 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:27.910 16:12:54 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:27.910 16:12:54 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:27.910 16:12:54 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:27.910 16:12:54 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:27.910 16:12:54 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:27.910 16:12:54 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:27.910 16:12:54 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:27.910 16:12:54 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:27.910 16:12:54 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:27.910 16:12:54 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:27.910 16:12:54 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:27.910 16:12:54 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:27.910 16:12:54 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:27.910 16:12:54 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:27.910 16:12:54 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:27.910 16:12:54 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:27.910 16:12:54 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:27.910 16:12:54 setup.sh.hugepages -- setup/common.sh@32 -- # 
continue 00:04:27.910 16:12:54 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:27.910 16:12:54 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:27.910 16:12:54 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:27.910 16:12:54 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:27.910 16:12:54 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:27.910 16:12:54 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:27.910 16:12:54 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:27.910 16:12:54 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:27.910 16:12:54 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:27.910 16:12:54 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:27.910 16:12:54 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:27.910 16:12:54 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:27.910 16:12:54 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:27.910 16:12:54 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:27.910 16:12:54 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:27.910 16:12:54 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:27.910 16:12:54 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:27.910 16:12:54 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:27.910 16:12:54 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:27.910 16:12:54 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:27.910 16:12:54 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:27.910 16:12:54 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:27.910 16:12:54 setup.sh.hugepages -- 
setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:27.910 16:12:54 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:27.910 16:12:54 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:27.910 16:12:54 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:27.910 16:12:54 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:27.910 16:12:54 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:27.910 16:12:54 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:27.910 16:12:54 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:27.910 16:12:54 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:27.910 16:12:54 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:27.910 16:12:54 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:27.910 16:12:54 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:27.910 16:12:54 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:27.910 16:12:54 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:27.910 16:12:54 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:27.910 16:12:54 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:27.910 16:12:54 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:27.910 16:12:54 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:27.910 16:12:54 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:27.910 16:12:54 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:27.910 16:12:54 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:27.910 16:12:54 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:27.910 16:12:54 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:27.910 16:12:54 
setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:27.910 16:12:54 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:27.910 16:12:54 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:27.910 16:12:54 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:27.910 16:12:54 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:27.910 16:12:54 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:27.910 16:12:54 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:27.910 16:12:54 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:27.910 16:12:54 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:27.910 16:12:54 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:27.910 16:12:54 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:27.911 16:12:54 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:27.911 16:12:54 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:27.911 16:12:54 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:27.911 16:12:54 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:27.911 16:12:54 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:27.911 16:12:54 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:27.911 16:12:54 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:27.911 16:12:54 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:27.911 16:12:54 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:27.911 16:12:54 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:27.911 16:12:54 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:27.911 16:12:54 setup.sh.hugepages -- setup/common.sh@32 -- # 
continue 00:04:27.911 16:12:54 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:27.911 16:12:54 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:27.911 16:12:54 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:27.911 16:12:54 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:27.911 16:12:54 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:27.911 16:12:54 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:27.911 16:12:54 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:27.911 16:12:54 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:27.911 16:12:54 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:27.911 16:12:54 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:27.911 16:12:54 setup.sh.hugepages -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:27.911 16:12:54 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:27.911 16:12:54 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:27.911 16:12:54 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:27.911 16:12:54 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:27.911 16:12:54 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:27.911 16:12:54 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:27.911 16:12:54 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:27.911 16:12:54 setup.sh.hugepages -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:27.911 16:12:54 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:27.911 16:12:54 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:27.911 16:12:54 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:27.911 16:12:54 setup.sh.hugepages -- 
setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:27.911 16:12:54 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:27.911 16:12:54 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:27.911 16:12:54 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:27.911 16:12:54 setup.sh.hugepages -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:27.911 16:12:54 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:27.911 16:12:54 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:27.911 16:12:54 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:27.911 16:12:54 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:27.911 16:12:54 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:27.911 16:12:54 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:27.911 16:12:54 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:27.911 16:12:54 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:27.911 16:12:54 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:27.911 16:12:54 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:27.911 16:12:54 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:27.911 16:12:54 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:27.911 16:12:54 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:27.911 16:12:54 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:27.911 16:12:54 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:27.911 16:12:54 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:27.911 16:12:54 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:27.911 16:12:54 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 
00:04:27.911 16:12:54 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:27.911 16:12:54 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:27.911 16:12:54 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:27.911 16:12:54 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:27.911 16:12:54 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:27.911 16:12:54 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:27.911 16:12:54 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:27.911 16:12:54 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:27.911 16:12:54 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:27.911 16:12:54 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:27.911 16:12:54 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:27.911 16:12:54 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:27.911 16:12:54 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:27.911 16:12:54 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:27.911 16:12:54 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:27.911 16:12:54 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:27.911 16:12:54 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:27.911 16:12:54 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:27.911 16:12:54 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:27.911 16:12:54 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:27.911 16:12:54 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:27.911 16:12:54 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:27.911 
16:12:54 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:27.911 16:12:54 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:27.911 16:12:54 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:27.911 16:12:54 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:27.911 16:12:54 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:27.911 16:12:54 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:27.911 16:12:54 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:27.911 16:12:54 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:27.911 16:12:54 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:27.911 16:12:54 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:27.911 16:12:54 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:27.911 16:12:54 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:27.911 16:12:54 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:27.911 16:12:54 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:27.911 16:12:54 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:27.911 16:12:54 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:27.911 16:12:54 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:27.911 16:12:54 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:27.911 16:12:54 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:27.911 16:12:54 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:27.911 16:12:54 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:27.911 16:12:54 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:27.911 16:12:54 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 
00:04:27.911 16:12:54 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:27.911 16:12:54 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:27.911 16:12:54 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:27.911 16:12:54 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:27.911 16:12:54 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:27.911 16:12:54 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:27.911 16:12:54 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:27.911 16:12:54 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:27.911 16:12:54 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:27.911 16:12:54 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:27.911 16:12:54 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:27.911 16:12:54 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:27.911 16:12:54 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:27.911 16:12:54 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:27.911 16:12:54 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:27.911 16:12:54 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:27.911 16:12:54 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:27.911 16:12:54 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:04:27.911 16:12:54 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:04:27.911 16:12:54 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:04:27.911 16:12:54 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:04:27.911 16:12:54 setup.sh.hugepages -- 
setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:04:27.911 16:12:54 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:04:27.911 16:12:54 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:04:27.911 16:12:54 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:04:27.911 16:12:54 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:04:27.911 16:12:54 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:04:27.911 16:12:54 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:04:27.911 16:12:54 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:27.911 16:12:54 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:04:27.911 16:12:54 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:27.911 16:12:54 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:27.911 16:12:54 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:27.912 16:12:54 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:27.912 16:12:54 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:04:27.912 16:12:54 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:04:27.912 16:12:54 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:27.912 16:12:54 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:27.912 16:12:54 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:27.912 16:12:54 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:27.912 16:12:54 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:27.912 16:12:54 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in 
"${!nodes_sys[@]}" 00:04:27.912 16:12:54 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:27.912 16:12:54 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:27.912 16:12:54 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:27.912 16:12:54 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:27.912 16:12:54 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:27.912 16:12:54 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:27.912 16:12:54 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:04:27.912 16:12:54 setup.sh.hugepages -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:04:27.912 16:12:54 setup.sh.hugepages -- common/autotest_common.sh@1106 -- # xtrace_disable 00:04:27.912 16:12:54 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:27.912 ************************************ 00:04:27.912 START TEST default_setup 00:04:27.912 ************************************ 00:04:27.912 16:12:54 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1124 -- # default_setup 00:04:27.912 16:12:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:04:27.912 16:12:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:04:27.912 16:12:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:27.912 16:12:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:04:27.912 16:12:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:27.912 16:12:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:04:27.912 16:12:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 
00:04:27.912 16:12:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:27.912 16:12:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:27.912 16:12:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:27.912 16:12:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:04:27.912 16:12:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:27.912 16:12:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:27.912 16:12:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:27.912 16:12:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:27.912 16:12:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:27.912 16:12:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:27.912 16:12:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:04:27.912 16:12:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:04:27.912 16:12:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:04:27.912 16:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:04:27.912 16:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:31.212 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:04:31.212 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:04:31.212 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:04:31.212 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:04:31.212 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:04:31.212 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:04:31.212 0000:80:01.0 
(8086 0b00): ioatdma -> vfio-pci 00:04:31.212 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:04:31.212 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:04:31.212 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:04:31.212 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:04:31.212 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:04:31.212 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:04:31.212 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:04:31.212 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:04:31.212 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:04:31.473 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:04:31.740 16:12:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:04:31.740 16:12:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:04:31.740 16:12:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:04:31.740 16:12:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:04:31.740 16:12:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:04:31.740 16:12:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:04:31.740 16:12:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:04:31.740 16:12:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:31.740 16:12:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:31.740 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:31.740 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:31.740 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:31.740 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:31.740 16:12:58 
setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:31.740 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:31.740 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:31.740 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:31.740 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:31.740 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.740 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.740 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 105424168 kB' 'MemAvailable: 108676084 kB' 'Buffers: 2704 kB' 'Cached: 14337128 kB' 'SwapCached: 0 kB' 'Active: 11383104 kB' 'Inactive: 3514408 kB' 'Active(anon): 10971460 kB' 'Inactive(anon): 0 kB' 'Active(file): 411644 kB' 'Inactive(file): 3514408 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 560544 kB' 'Mapped: 196652 kB' 'Shmem: 10413780 kB' 'KReclaimable: 300392 kB' 'Slab: 1135628 kB' 'SReclaimable: 300392 kB' 'SUnreclaim: 835236 kB' 'KernelStack: 27152 kB' 'PageTables: 8632 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 12421708 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235172 kB' 'VmallocChunk: 0 kB' 'Percpu: 111168 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 
kB' 'DirectMap4k: 4150644 kB' 'DirectMap2M: 29083648 kB' 'DirectMap1G: 102760448 kB' 00:04:31.740 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.740 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.740 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.740 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.740 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.740 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.740 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.740 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.740 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.740 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.740 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.740 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.740 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.740 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.740 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.740 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.740 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.740 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.740 16:12:58 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:04:31.740 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.740 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.740 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.740 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.740 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.740 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.740 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.740 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.740 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.740 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.740 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.740 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.740 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.740 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.740 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.740 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.740 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.740 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.740 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 
00:04:31.740 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.740 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.740 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.740 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.740 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.740 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.740 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.740 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.740 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.740 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.740 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.740 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.740 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.740 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.740 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.740 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.740 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.740 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.740 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.740 16:12:58 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.740 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.740 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.740 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.741 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.741 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.741 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.741 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.741 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.741 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.741 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.741 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.741 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.741 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.741 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.741 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.741 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.741 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.741 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.741 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.741 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.741 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.741 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.741 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.741 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.741 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.741 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.741 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.741 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.741 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.741 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.741 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.741 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.741 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.741 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.741 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.741 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.741 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.741 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.741 16:12:58 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.741 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.741 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.741 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.741 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.741 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.741 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.741 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.741 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.741 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.741 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.741 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.741 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.741 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.741 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.741 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.741 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.741 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.741 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.741 16:12:58 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:04:31.741 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.741 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.741 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.741 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.741 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.741 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.741 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.741 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.741 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.741 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.741 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.741 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.741 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.741 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.741 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.741 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.741 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.741 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.741 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': 
' 00:04:31.741 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.741 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.741 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.741 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.741 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.741 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.741 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.741 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.741 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.741 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.741 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.741 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.741 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.741 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.741 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.741 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.741 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.741 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.741 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.741 16:12:58 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.741 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.741 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.741 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.742 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.742 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.742 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.742 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:31.742 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:31.742 16:12:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0 00:04:31.742 16:12:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:31.742 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:31.742 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:31.742 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:31.742 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:31.742 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:31.742 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:31.742 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:31.742 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:31.742 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- 
# mem=("${mem[@]#Node +([0-9]) }") 00:04:31.742 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.742 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.742 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 105426020 kB' 'MemAvailable: 108677936 kB' 'Buffers: 2704 kB' 'Cached: 14337132 kB' 'SwapCached: 0 kB' 'Active: 11382652 kB' 'Inactive: 3514408 kB' 'Active(anon): 10971008 kB' 'Inactive(anon): 0 kB' 'Active(file): 411644 kB' 'Inactive(file): 3514408 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 560588 kB' 'Mapped: 196172 kB' 'Shmem: 10413784 kB' 'KReclaimable: 300392 kB' 'Slab: 1135604 kB' 'SReclaimable: 300392 kB' 'SUnreclaim: 835212 kB' 'KernelStack: 27120 kB' 'PageTables: 8476 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 12421860 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235140 kB' 'VmallocChunk: 0 kB' 'Percpu: 111168 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4150644 kB' 'DirectMap2M: 29083648 kB' 'DirectMap1G: 102760448 kB' 00:04:31.742 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.742 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.742 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.742 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # 
read -r var val _ 00:04:31.742 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.742 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.742 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.742 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.742 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.742 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.742 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.742 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.742 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.742 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.742 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.742 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.742 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.742 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.742 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.742 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.742 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.742 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.742 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.742 16:12:58 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.742 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.742 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.742 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.742 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.742 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.742 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.742 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.742 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.742 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.742 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.742 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.742 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.742 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.742 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.742 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.742 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.742 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.742 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.742 16:12:58 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.742 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.742 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.742 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.742 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.742 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.742 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.742 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.742 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.742 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.742 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.742 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.742 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.742 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.742 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.742 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.742 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.742 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.742 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.742 16:12:58 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.742 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.742 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.742 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.742 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.742 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.742 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.742 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.742 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.742 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.742 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.742 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.742 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.742 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.742 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.742 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.742 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.742 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.742 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.742 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.742 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.742 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.742 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.742 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.742 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.742 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.742 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.742 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.743 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.743 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.743 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.743 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.743 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.743 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.743 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.743 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.743 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.743 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.743 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.743 16:12:58 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.743 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.743 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.743 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.743 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.743 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.743 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.743 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.743 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.743 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.743 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.743 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.743 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.743 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.743 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.743 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.743 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.743 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.743 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.743 16:12:58 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:04:31.743 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # (xtrace condensed: keys NFS_Unstable through HugePages_Rsvd are tested against HugePages_Surp; none match, so each iteration takes `continue` and re-enters the IFS=': ' read loop)
00:04:31.743 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:04:31.743 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:04:31.743 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:31.743 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:04:31.743 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:04:31.744 16:12:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0
00:04:31.744 16:12:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:31.744 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:31.744 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:04:31.744 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:04:31.744 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:04:31.744 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:31.744 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
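Before each key scan, the trace shows common.sh choosing its data source (common.sh@22-29): a per-node meminfo file when a NUMA node is given (here `node` is empty, so the `/sys/devices/system/node/node/meminfo` test fails and it falls back to `/proc/meminfo`), then stripping the `Node <n> ` prefix that per-node files carry. A minimal standalone sketch of that selection step follows; the function name and sample file are illustrative, not the exact SPDK code:

```shell
#!/usr/bin/env bash
# Sketch of the source-selection step seen in the trace (common.sh@22-29):
# prefer /sys/devices/system/node/node<N>/meminfo when a node is given,
# else fall back to /proc/meminfo. Per-node files prefix each line with
# "Node <n> ", which is stripped so both sources parse identically.
# Illustrative only; an optional second argument lets us test with a file.
shopt -s extglob
read_meminfo_lines() {
    local node=$1 mem_f=${2:-} mem
    if [[ -z $mem_f ]]; then
        mem_f=/proc/meminfo
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    # Extglob pattern: "Node " + one-or-more digits + space; this is a
    # no-op for plain /proc/meminfo lines, which carry no such prefix.
    mem=("${mem[@]#Node +([0-9]) }")
    printf '%s\n' "${mem[@]}"
}

# Self-contained usage against a small per-node-style sample:
sample=$(mktemp)
printf '%s\n' 'Node 0 MemTotal: 126338848 kB' 'Node 0 HugePages_Total: 1024' > "$sample"
read_meminfo_lines 0 "$sample"   # prints the lines with "Node 0 " removed
rm -f "$sample"
```

The prefix strip is what lets the same IFS=': ' read loop handle both the global and the per-node meminfo formats.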
00:04:31.744 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:31.744 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:04:31.744 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:31.744 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:04:31.744 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:04:31.744 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 105426996 kB' 'MemAvailable: 108678912 kB' 'Buffers: 2704 kB' 'Cached: 14337148 kB' 'SwapCached: 0 kB' 'Active: 11382736 kB' 'Inactive: 3514408 kB' 'Active(anon): 10971092 kB' 'Inactive(anon): 0 kB' 'Active(file): 411644 kB' 'Inactive(file): 3514408 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 560636 kB' 'Mapped: 196172 kB' 'Shmem: 10413800 kB' 'KReclaimable: 300392 kB' 'Slab: 1135596 kB' 'SReclaimable: 300392 kB' 'SUnreclaim: 835204 kB' 'KernelStack: 27136 kB' 'PageTables: 8528 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 12421880 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235140 kB' 'VmallocChunk: 0 kB' 'Percpu: 111168 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4150644 kB' 'DirectMap2M: 29083648 kB' 'DirectMap1G: 102760448 kB'
00:04:31.744 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # (xtrace condensed: keys MemTotal through HugePages_Total are tested against HugePages_Rsvd; none match, so each iteration takes `continue` and re-enters the IFS=': ' read loop)
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:04:31.746 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:04:31.746 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:31.746 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
00:04:31.746 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:04:31.746 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:04:31.746 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:31.746 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:04:31.746 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:04:31.746 16:12:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0
00:04:31.746 16:12:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:04:31.746 nr_hugepages=1024
00:04:31.746 16:12:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:31.746 resv_hugepages=0
00:04:31.746 16:12:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:31.746 surplus_hugepages=0
00:04:31.746 16:12:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:31.746 anon_hugepages=0
00:04:31.746 16:12:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:31.746 16:12:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:04:31.746 16:12:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:31.746 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:31.746 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:04:31.746 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:04:31.746 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:04:31.746 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:31.746 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:31.746 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:31.746 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:04:31.746 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:31.746 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:04:31.746 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:04:31.746 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 105426996 kB' 'MemAvailable: 108678912 kB' 'Buffers: 2704 kB' 'Cached: 14337148 kB' 'SwapCached: 0 kB' 'Active: 11382736 kB' 'Inactive: 3514408 kB' 'Active(anon): 10971092 kB' 'Inactive(anon): 0 kB' 'Active(file): 411644 kB' 'Inactive(file): 3514408 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 560636 kB' 'Mapped: 196172 kB' 'Shmem: 10413800 kB' 'KReclaimable: 300392 kB' 'Slab: 1135596 kB' 'SReclaimable: 300392 kB' 'SUnreclaim: 835204 kB' 'KernelStack: 27136 kB' 'PageTables: 8528 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 12421900 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235140 kB' 'VmallocChunk: 0 kB' 'Percpu: 111168 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4150644 kB' 'DirectMap2M: 29083648 kB' 'DirectMap1G: 102760448 kB'
00:04:31.746 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # (xtrace condensed: keys MemTotal through Buffers are tested against HugePages_Total; none match, so each iteration takes `continue`)
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.746 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.746 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.746 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.746 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.746 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.746 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.746 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.746 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.746 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.746 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.746 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.746 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.746 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.746 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.746 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.746 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.746 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.746 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.746 16:12:58 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.746 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.746 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.746 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.746 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.746 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.746 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.746 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.746 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.746 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.746 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.746 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.746 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.746 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.746 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.746 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.746 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.746 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.746 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.746 16:12:58 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.746 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.746 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.746 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.746 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.746 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.746 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.746 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.746 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.746 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.746 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.746 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.746 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.746 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.746 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.746 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.746 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.746 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.746 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.746 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ 
Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.747 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.747 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.747 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.747 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.747 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.747 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.747 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.747 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.747 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.747 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.747 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.747 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.747 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.747 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.747 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.747 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.747 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.747 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.747 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.747 16:12:58 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.747 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.747 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.747 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.747 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.747 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.747 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.747 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.747 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.747 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.747 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.747 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.747 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.747 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.747 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.747 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.747 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.747 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.747 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.747 16:12:58 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.747 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.747 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.747 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.747 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.747 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.747 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.747 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.747 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.747 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.747 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.747 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.747 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.747 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.747 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.747 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.747 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.747 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.747 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.747 16:12:58 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.747 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.747 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.747 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.747 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.747 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.747 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.747 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.747 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.747 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.747 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.747 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.747 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.747 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.747 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.747 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.747 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.747 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.747 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.747 16:12:58 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.747 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.747 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.747 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.747 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.747 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.747 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.747 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.747 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.747 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.747 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.747 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.747 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.747 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.747 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.747 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.747 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.747 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.747 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.747 16:12:58 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.747 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.747 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.747 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.747 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.747 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.747 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.747 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.747 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.747 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.747 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.747 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.747 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.747 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.747 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.747 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.747 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.747 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.747 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.747 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 
-- # read -r var val _ 00:04:31.747 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.747 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.747 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.747 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.747 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.747 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024 00:04:31.748 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:31.748 16:12:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:31.748 16:12:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes 00:04:31.748 16:12:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node 00:04:31.748 16:12:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:31.748 16:12:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:31.748 16:12:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:31.748 16:12:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:31.748 16:12:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:31.748 16:12:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:31.748 16:12:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:31.748 16:12:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( 
nodes_test[node] += resv )) 00:04:31.748 16:12:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:31.748 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:31.748 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0 00:04:31.748 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:31.748 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:31.748 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:31.748 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:31.748 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:31.748 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:31.748 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:31.748 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.748 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.748 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 49409716 kB' 'MemUsed: 16249292 kB' 'SwapCached: 0 kB' 'Active: 8233216 kB' 'Inactive: 3323324 kB' 'Active(anon): 8084104 kB' 'Inactive(anon): 0 kB' 'Active(file): 149112 kB' 'Inactive(file): 3323324 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 11318608 kB' 'Mapped: 101240 kB' 'AnonPages: 241184 kB' 'Shmem: 7846172 kB' 'KernelStack: 12408 kB' 'PageTables: 4560 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 182912 kB' 'Slab: 675436 kB' 'SReclaimable: 182912 kB' 
'SUnreclaim: 492524 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:31.748 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.748 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.748 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.748 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.748 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.748 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.748 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.748 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.748 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.748 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.748 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.748 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.748 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.748 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.748 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.748 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.748 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:04:31.748 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.748 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.748 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.748 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.748 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.748 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.748 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.748 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.748 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.748 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.748 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.748 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.748 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.748 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.748 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.748 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.748 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.748 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.748 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.748 16:12:58 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.748 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.748 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.748 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.748 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.748 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.748 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.748 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.748 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.748 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.748 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.748 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.748 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.748 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.748 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.748 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.748 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.748 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.748 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.748 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var 
val _ 00:04:31.748 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.748 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.748 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.748 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.748 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.748 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.748 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.748 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.748 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.748 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.748 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.748 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.748 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.748 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.748 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.748 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.748 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.748 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.748 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.748 16:12:58 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:04:31.748 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:31.748 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
[... the same IFS=': ' / read -r var val _ / [[ field == HugePages_Surp ]] / continue cycle repeats for the remaining /proc/meminfo fields (SecPageTables through FilePmdMapped, Unaccepted, HugePages_Total, HugePages_Free) ...]
00:04:31.749 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:31.749 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:04:31.749 16:12:58 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:04:31.749 16:12:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:31.749 16:12:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:31.749 16:12:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:31.749 16:12:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:31.749 16:12:58
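The xtrace above is `setup/common.sh`'s `get_meminfo` walking `/proc/meminfo` one field at a time, hitting `continue` on every key until the requested one (`HugePages_Surp`) matches and its value is echoed. A minimal standalone sketch of that read/continue pattern follows; the function body is a hypothetical reconstruction from the trace, not the SPDK source itself:

```shell
#!/usr/bin/env bash
# Hypothetical rendition of the get_meminfo scan seen in the xtrace:
# split each "Key: value kB" line on ':' and spaces, skip non-matching
# fields with `continue`, and print the value of the requested key.
get_meminfo() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # every other field: continue
        echo "$val"
        return 0
    done
    return 1
}

# Example input mimicking the tail of /proc/meminfo:
surp=$(printf '%s\n' 'HugePages_Total: 1024' 'HugePages_Free: 1024' \
    'HugePages_Rsvd: 0' 'HugePages_Surp: 0' | get_meminfo HugePages_Surp)
echo "$surp"
```

In the real script the input comes from a `mapfile`-captured copy of `/proc/meminfo` (or a per-node `meminfo` under `/sys/devices/system/node/`), which is why the trace shows one `[[ ... ]] / continue` pair per field.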
setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:04:31.749 node0=1024 expecting 1024
00:04:31.749 16:12:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:04:31.749
00:04:31.749 real 0m3.885s
00:04:31.749 user 0m1.489s
00:04:31.749 sys 0m2.385s
00:04:31.749 16:12:58 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1125 -- # xtrace_disable
00:04:31.749 16:12:58 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x
00:04:31.749 ************************************
00:04:31.749 END TEST default_setup
00:04:31.749 ************************************
00:04:31.749 16:12:58 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc
00:04:31.749 16:12:58 setup.sh.hugepages -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']'
00:04:31.749 16:12:58 setup.sh.hugepages -- common/autotest_common.sh@1106 -- # xtrace_disable
00:04:31.749 16:12:58 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:04:32.010 ************************************
00:04:32.010 START TEST per_node_1G_alloc
00:04:32.010 ************************************
00:04:32.010 16:12:58 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1124 -- # per_node_1G_alloc
00:04:32.010 16:12:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=,
00:04:32.010 16:12:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1
00:04:32.010 16:12:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576
00:04:32.010 16:12:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 3 > 1 ))
00:04:32.010 16:12:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift
00:04:32.010 16:12:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0' '1')
00:04:32.010 16:12:58
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:04:32.010 16:12:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:32.010 16:12:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:04:32.010 16:12:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1 00:04:32.010 16:12:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0' '1') 00:04:32.010 16:12:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:32.010 16:12:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:32.010 16:12:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:32.010 16:12:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:32.011 16:12:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:32.011 16:12:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 2 > 0 )) 00:04:32.011 16:12:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:32.011 16:12:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:04:32.011 16:12:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:32.011 16:12:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:04:32.011 16:12:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0 00:04:32.011 16:12:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512 00:04:32.011 16:12:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0,1 00:04:32.011 16:12:58 
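The `get_test_nr_hugepages 1048576 0 1` trace above sizes the request (1 GiB, expressed in kB), arrives at `nr_hugepages=512`, and then assigns 512 pages to each node listed via `HUGENODE=0,1`. A hedged bash sketch of that arithmetic follows; the variable names mirror the trace, but the division by the default 2 MiB hugepage size is an assumption inferred from `nr_hugepages=512` and `Hugepagesize: 2048 kB`, not copied from the script:

```shell
#!/usr/bin/env bash
# Hypothetical sketch of the per-node hugepage sizing seen in the trace.
size=1048576            # requested size in kB (1 GiB), from get_test_nr_hugepages 1048576 0 1
default_hugepages=2048  # assumed default hugepage size in kB (2 MiB, per Hugepagesize)
nr_hugepages=$(( size / default_hugepages ))   # 512 pages

user_nodes=(0 1)        # HUGENODE=0,1
declare -A nodes_test
for node in "${user_nodes[@]}"; do
    nodes_test[$node]=$nr_hugepages            # 512 pages on each listed node
done
echo "NRHUGE=$nr_hugepages HUGENODE=${user_nodes[*]}"
```

This matches the trace's outcome: `NRHUGE=512` with `nodes_test[0]=512` and `nodes_test[1]=512` before `setup output` re-runs `scripts/setup.sh`.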
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output 00:04:32.011 16:12:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:32.011 16:12:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:35.348 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:04:35.348 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:04:35.348 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:04:35.348 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:04:35.348 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:04:35.348 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:04:35.348 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:04:35.348 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:04:35.348 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:04:35.348 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:04:35.348 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:04:35.348 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:04:35.348 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:04:35.348 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:04:35.348 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:04:35.348 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:04:35.349 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:04:35.615 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=1024 00:04:35.615 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:04:35.615 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 00:04:35.615 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local 
sorted_t 00:04:35.615 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:35.615 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:35.615 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:35.615 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:35.615 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:35.615 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:35.615 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:35.615 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:35.615 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:35.615 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:35.615 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:35.615 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:35.615 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:35.615 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:35.615 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:35.616 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.616 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.616 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 105480888 kB' 'MemAvailable: 
108732804 kB' 'Buffers: 2704 kB' 'Cached: 14337288 kB' 'SwapCached: 0 kB' 'Active: 11383540 kB' 'Inactive: 3514408 kB' 'Active(anon): 10971896 kB' 'Inactive(anon): 0 kB' 'Active(file): 411644 kB' 'Inactive(file): 3514408 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 560936 kB' 'Mapped: 195100 kB' 'Shmem: 10413940 kB' 'KReclaimable: 300392 kB' 'Slab: 1135468 kB' 'SReclaimable: 300392 kB' 'SUnreclaim: 835076 kB' 'KernelStack: 27232 kB' 'PageTables: 8520 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 12418312 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235380 kB' 'VmallocChunk: 0 kB' 'Percpu: 111168 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4150644 kB' 'DirectMap2M: 29083648 kB' 'DirectMap1G: 102760448 kB' 00:04:35.616 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.616 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.616 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.616 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.616 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.616 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.616 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.616 16:13:02 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
[... the same IFS=': ' / read -r var val _ / [[ field == AnonHugePages ]] / continue cycle repeats for every /proc/meminfo field from MemAvailable through Percpu ...]
00:04:35.617 16:13:02 setup.sh.hugepages.per_node_1G_alloc
-- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.617 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.617 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.617 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.617 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.617 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:35.617 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:35.617 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:35.617 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:35.617 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:35.617 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:35.617 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:35.617 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:35.617 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:35.617 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:35.617 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:35.617 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:35.617 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:35.617 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.617 16:13:02 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.617 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 105481216 kB' 'MemAvailable: 108733132 kB' 'Buffers: 2704 kB' 'Cached: 14337292 kB' 'SwapCached: 0 kB' 'Active: 11382308 kB' 'Inactive: 3514408 kB' 'Active(anon): 10970664 kB' 'Inactive(anon): 0 kB' 'Active(file): 411644 kB' 'Inactive(file): 3514408 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 559832 kB' 'Mapped: 194996 kB' 'Shmem: 10413944 kB' 'KReclaimable: 300392 kB' 'Slab: 1135356 kB' 'SReclaimable: 300392 kB' 'SUnreclaim: 834964 kB' 'KernelStack: 27152 kB' 'PageTables: 8020 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 12418332 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235348 kB' 'VmallocChunk: 0 kB' 'Percpu: 111168 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4150644 kB' 'DirectMap2M: 29083648 kB' 'DirectMap1G: 102760448 kB' 00:04:35.617 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.617 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.617 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.617 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.617 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.617 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.617 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.617 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.617 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.617 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.617 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.617 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.617 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.617 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.617 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.617 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.617 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.617 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.617 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.617 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.617 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.617 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.617 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.617 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:35.617 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.617 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.617 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.617 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.617 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.617 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.617 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.617 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.618 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.618 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.618 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.618 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.618 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.618 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.618 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.618 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.618 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.618 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
00:04:35.618 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.618 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.618 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.618 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.618 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.618 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.618 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.618 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.618 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.618 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.618 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.618 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.618 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.618 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.618 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.618 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.618 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.618 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.618 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- 
# [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.618 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.618 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.618 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.618 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.618 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.618 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.618 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.618 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.618 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.618 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.618 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.618 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.618 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.618 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.618 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.618 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.618 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.618 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.618 16:13:02 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.618 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.618 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.618 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.618 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.618 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.618 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.618 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.618 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.618 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.618 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.618 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.618 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.618 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.618 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.618 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.618 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.618 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.618 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 
-- # continue 00:04:35.618 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.618 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.618 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.618 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.618 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.618 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.618 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.618 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.618 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.618 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.618 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.618 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.618 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.618 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.618 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.618 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.618 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.618 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.618 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.618 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.618 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.618 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.618 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.618 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.618 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.618 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.618 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.618 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.618 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.618 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.618 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.618 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.618 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.618 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.618 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.618 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.618 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.618 
16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.618 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.618 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.618 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.618 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.618 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.618 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.618 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.618 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.618 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.618 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.618 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.618 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.618 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.618 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.618 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.618 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.619 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.619 16:13:02 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.619 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.619 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.619 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.619 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.619 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.619 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.619 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.619 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.619 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.619 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.619 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.619 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.619 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.619 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.619 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.619 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.619 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.619 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:04:35.619 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.619 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.619 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.619 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.619 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.619 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.619 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.619 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.619 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.619 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.619 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.619 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.619 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.619 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.619 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.619 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.619 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.619 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.619 16:13:02 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.619 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.619 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.619 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.619 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.619 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.619 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.619 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.619 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.619 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.619 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.619 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.619 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.619 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.619 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.619 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:35.619 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:35.619 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:35.619 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # 
get_meminfo HugePages_Rsvd 00:04:35.619 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:35.619 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:35.619 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:35.619 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:35.619 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:35.619 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:35.619 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:35.619 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:35.619 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:35.619 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.619 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.619 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 105484424 kB' 'MemAvailable: 108736340 kB' 'Buffers: 2704 kB' 'Cached: 14337308 kB' 'SwapCached: 0 kB' 'Active: 11383044 kB' 'Inactive: 3514408 kB' 'Active(anon): 10971400 kB' 'Inactive(anon): 0 kB' 'Active(file): 411644 kB' 'Inactive(file): 3514408 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 560692 kB' 'Mapped: 194996 kB' 'Shmem: 10413960 kB' 'KReclaimable: 300392 kB' 'Slab: 1135324 kB' 'SReclaimable: 300392 kB' 'SUnreclaim: 834932 kB' 'KernelStack: 27168 kB' 'PageTables: 8444 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 
'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 12418352 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235396 kB' 'VmallocChunk: 0 kB' 'Percpu: 111168 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4150644 kB' 'DirectMap2M: 29083648 kB' 'DirectMap1G: 102760448 kB' 00:04:35.619 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.619 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.619 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.619 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.619 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.619 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.619 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.619 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.619 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.619 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.619 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.619 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.619 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:04:35.619 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.619 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.619 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.619 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.619 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.619 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.619 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.619 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.619 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.619 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.619 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.619 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.619 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.619 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.619 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.619 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.619 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.619 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.619 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:35.619 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.619 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.619 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.619 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.619 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.620 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.620 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.620 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.620 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.620 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.620 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.620 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.620 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.620 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.620 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.620 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.620 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.620 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.620 16:13:02 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.620 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.620 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.620 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.620 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.620 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.620 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.620 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.620 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.620 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.620 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.620 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.620 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.620 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.620 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.620 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.620 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.620 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.620 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.620 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.620 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.620 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.620 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.620 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.620 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.620 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.620 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.620 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.620 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.620 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.620 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.620 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.620 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.620 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.620 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.620 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.620 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.620 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:35.620 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.620 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.620 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.620 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.620 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.620 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.620 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.620 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.620 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.620 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.620 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.620 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.620 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.620 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.620 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.620 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.620 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.620 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.620 
16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.620 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.620 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.620 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.620 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.620 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.620 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.620 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.620 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.620 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.620 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.620 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.620 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.620 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.620 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.620 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.620 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.620 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.620 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ 
Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.620 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.620 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.620 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.620 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.620 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.620 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.620 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.620 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.621 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.621 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.621 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.621 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.621 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.621 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.621 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.621 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.621 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.621 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.621 16:13:02 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.621 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.621 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.621 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.621 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.621 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.621 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.621 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.621 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.621 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.621 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.621 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.621 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.621 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.621 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.621 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.621 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.621 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.621 16:13:02 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.621 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.621 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.621 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.621 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.621 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.621 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.621 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.621 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.621 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.621 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.621 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.621 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.621 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.621 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.621 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.621 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.621 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.621 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:04:35.621 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.621 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.621 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.621 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.621 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.621 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.621 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.621 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.621 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.621 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.621 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.621 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.621 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.621 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.621 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.621 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.621 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.621 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.621 16:13:02 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.621 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.621 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.621 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:35.621 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:35.621 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:35.621 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:35.621 nr_hugepages=1024 00:04:35.621 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:35.621 resv_hugepages=0 00:04:35.621 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:35.621 surplus_hugepages=0 00:04:35.621 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:35.621 anon_hugepages=0 00:04:35.621 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:35.621 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:35.621 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:35.621 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:35.621 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:35.621 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:35.621 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:35.621 16:13:02 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:35.621 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:35.621 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:35.621 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:35.621 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:35.621 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.621 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.621 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 105483788 kB' 'MemAvailable: 108735704 kB' 'Buffers: 2704 kB' 'Cached: 14337332 kB' 'SwapCached: 0 kB' 'Active: 11382684 kB' 'Inactive: 3514408 kB' 'Active(anon): 10971040 kB' 'Inactive(anon): 0 kB' 'Active(file): 411644 kB' 'Inactive(file): 3514408 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 560328 kB' 'Mapped: 194996 kB' 'Shmem: 10413984 kB' 'KReclaimable: 300392 kB' 'Slab: 1135324 kB' 'SReclaimable: 300392 kB' 'SUnreclaim: 834932 kB' 'KernelStack: 27184 kB' 'PageTables: 8348 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 12418376 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235412 kB' 'VmallocChunk: 0 kB' 'Percpu: 111168 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 
'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4150644 kB' 'DirectMap2M: 29083648 kB' 'DirectMap1G: 102760448 kB' 00:04:35.621 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.621 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.621 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.621 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.621 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.621 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.621 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.621 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.621 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.621 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.621 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.621 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.621 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.621 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.621 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.622 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.622 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.622 16:13:02 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.622 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.622 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.622 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.622 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.622 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.622 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.622 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.622 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.622 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.622 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.622 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.622 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.622 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.622 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.622 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.622 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.622 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.622 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:35.622 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.622 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.622 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.622 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.622 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.622 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.622 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.622 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.622 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.622 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.622 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.622 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.622 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.622 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.622 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.622 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.622 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.622 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.622 16:13:02 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.622 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.622 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.622 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.622 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.622 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.622 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.622 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.622 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.622 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.622 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.622 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.622 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.622 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.622 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.622 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.622 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.622 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.622 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.622 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.622 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.622 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.622 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.622 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.622 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.622 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.622 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.622 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.622 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.622 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.622 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.622 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.622 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.622 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.622 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.622 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.622 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.622 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:35.622 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.622 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.622 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.622 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.622 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.622 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.622 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.622 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.622 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.622 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.622 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.622 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.622 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.622 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.622 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.622 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.622 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.622 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
00:04:35.622 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.622 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.622 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.622 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.622 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.622 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.622 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.622 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.622 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.622 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.622 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.622 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.622 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.622 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.622 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.622 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.622 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.622 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.622 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.622 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.622 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.622 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.622 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.622 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.623 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.623 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.623 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.623 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.623 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.623 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.623 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.623 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.623 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.623 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.623 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.623 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.623 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:35.623 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.623 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.623 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.623 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.623 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.623 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.623 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.623 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.623 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.623 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.623 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.623 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.623 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.623 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.623 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.623 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.623 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.623 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.623 16:13:02 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.623 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.623 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.623 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.623 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.623 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.623 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.623 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.623 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.623 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.623 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.623 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.623 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.623 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.623 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.623 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.623 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.623 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.623 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:04:35.623 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.623 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.623 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.623 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.623 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.623 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.623 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.623 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.623 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.623 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 1024 00:04:35.623 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:35.623 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:35.623 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:35.623 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:04:35.623 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:35.623 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:35.623 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:35.623 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- 
setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:35.623 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:35.623 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:35.623 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:35.623 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:35.623 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:35.623 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:35.623 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:04:35.623 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:35.623 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:35.623 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:35.623 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:35.623 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:35.623 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:35.623 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:35.623 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.623 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.623 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 50492180 kB' 'MemUsed: 15166828 kB' 
'SwapCached: 0 kB' 'Active: 8233308 kB' 'Inactive: 3323324 kB' 'Active(anon): 8084196 kB' 'Inactive(anon): 0 kB' 'Active(file): 149112 kB' 'Inactive(file): 3323324 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 11318680 kB' 'Mapped: 100468 kB' 'AnonPages: 241060 kB' 'Shmem: 7846244 kB' 'KernelStack: 12456 kB' 'PageTables: 4656 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 182912 kB' 'Slab: 675348 kB' 'SReclaimable: 182912 kB' 'SUnreclaim: 492436 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:35.623 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.623 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.623 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.623 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.623 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.623 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.623 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.623 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.623 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.623 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.623 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.623 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:04:35.623 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.623 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.623 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.623 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.623 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.623 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.623 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.623 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.623 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.623 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.623 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.623 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.623 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.623 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.624 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.624 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.624 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.624 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.624 16:13:02 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.624 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.624 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.624 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.624 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.624 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.624 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.624 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.624 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.624 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.624 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.624 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.624 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.624 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.624 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.624 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.624 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.624 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.624 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.624 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.624 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.624 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.624 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.624 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.624 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.624 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.624 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.624 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.624 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.624 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.624 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.624 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.624 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.624 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.624 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.624 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.624 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.624 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:35.624 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.624 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.624 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.624 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.624 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.624 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.624 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.624 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.624 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.624 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.624 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.624 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.624 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.624 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.624 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.624 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.624 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.624 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
00:04:35.624 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.624 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.624 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.624 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.624 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.624 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.624 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.624 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.624 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.624 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.624 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.624 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.624 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.624 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.624 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.624 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.624 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.624 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.624 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ 
SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.624 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.624 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.624 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.624 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.624 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.624 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.624 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.624 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.624 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.624 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.624 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.624 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.624 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.624 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.624 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.624 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.624 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.624 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.624 16:13:02 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.624 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.624 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.624 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.624 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.624 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.624 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.624 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.624 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.624 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.624 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.624 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.625 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.625 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.625 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.625 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.625 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.625 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.625 16:13:02 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.625 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.625 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.625 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.625 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:35.626 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:35.626 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:35.626 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:35.626 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:35.626 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:35.626 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:35.626 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=1 00:04:35.626 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:35.626 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:35.626 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:35.626 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:35.626 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:35.626 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:35.626 16:13:02 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:35.626 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.626 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.626 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60679840 kB' 'MemFree: 54992516 kB' 'MemUsed: 5687324 kB' 'SwapCached: 0 kB' 'Active: 3149940 kB' 'Inactive: 191084 kB' 'Active(anon): 2887408 kB' 'Inactive(anon): 0 kB' 'Active(file): 262532 kB' 'Inactive(file): 191084 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3021400 kB' 'Mapped: 94528 kB' 'AnonPages: 319732 kB' 'Shmem: 2567784 kB' 'KernelStack: 14744 kB' 'PageTables: 3980 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 117480 kB' 'Slab: 459976 kB' 'SReclaimable: 117480 kB' 'SUnreclaim: 342496 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:35.626 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.626 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.626 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.626 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.626 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.626 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.626 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.626 16:13:02 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.626 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.626 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.626 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.626 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.626 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.626 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.626 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.626 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.626 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.626 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.626 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.626 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.626 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.626 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.626 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.626 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.626 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.626 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:04:35.626 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.626 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.626 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.626 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.626 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.626 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.626 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.626 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.626 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.626 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.626 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.626 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.626 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.626 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.626 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.626 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.626 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.626 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.626 16:13:02 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:35.626 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue
00:04:35.626 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:35.626 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:35.626 [... identical non-matching checks of each remaining /proc/meminfo field (Dirty through HugePages_Free) against HugePages_Surp, each followed by continue / IFS=': ' / read -r var val _ ...]
00:04:35.627 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:35.627 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:04:35.627 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:04:35.627 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:35.627 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:35.627 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:35.627 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:35.627 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:04:35.627 node0=512 expecting 512
00:04:35.627 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:35.627 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:35.627 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:35.627 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512'
00:04:35.627 node1=512 expecting 512
00:04:35.627 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:04:35.627
00:04:35.627 real 0m3.811s
00:04:35.627 user 0m1.466s
00:04:35.627 sys 0m2.389s
00:04:35.627 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1125 -- # xtrace_disable
00:04:35.627 16:13:02 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x
00:04:35.627 ************************************
00:04:35.627 END TEST per_node_1G_alloc
00:04:35.627 ************************************
00:04:35.888 16:13:02 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc
00:04:35.888 16:13:02 setup.sh.hugepages -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']'
00:04:35.888 16:13:02 setup.sh.hugepages -- common/autotest_common.sh@1106 -- # xtrace_disable
00:04:35.888 16:13:02 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:04:35.888 ************************************
00:04:35.888 START TEST even_2G_alloc
00:04:35.888 ************************************
00:04:35.889 16:13:02 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1124 -- # even_2G_alloc
00:04:35.889 16:13:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152
00:04:35.889 16:13:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152
00:04:35.889 16:13:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:04:35.889 16:13:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:35.889 16:13:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:04:35.889 16:13:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:04:35.889 16:13:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:04:35.889 16:13:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:04:35.889 16:13:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:04:35.889 16:13:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:04:35.889 16:13:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:35.889 16:13:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:35.889 16:13:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:04:35.889 16:13:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:04:35.889 16:13:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:35.889 16:13:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:04:35.889 16:13:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 512
00:04:35.889 16:13:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 1
00:04:35.889 16:13:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:35.889 16:13:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:04:35.889 16:13:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0
00:04:35.889 16:13:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0
00:04:35.889 16:13:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:35.889 16:13:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024
00:04:35.889 16:13:02 setup.sh.hugepages.even_2G_alloc --
setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes
00:04:35.889 16:13:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output
00:04:35.889 16:13:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:04:35.889 16:13:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:04:39.189 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver
00:04:39.189 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver
00:04:39.189 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver
00:04:39.189 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver
00:04:39.189 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver
00:04:39.189 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver
00:04:39.189 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver
00:04:39.189 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver
00:04:39.189 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver
00:04:39.189 0000:65:00.0 (144d a80a): Already using the vfio-pci driver
00:04:39.189 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver
00:04:39.189 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver
00:04:39.189 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver
00:04:39.189 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver
00:04:39.189 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver
00:04:39.189 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver
00:04:39.189 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver
00:04:39.455 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages
00:04:39.456 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node
00:04:39.456 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:04:39.456 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:04:39.456 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp
00:04:39.456 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv
00:04:39.456 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon
00:04:39.456 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:39.456 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:39.456 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:39.456 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:04:39.456 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:04:39.456 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:39.456 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:39.456 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:39.456 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:39.456 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:39.456 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:39.456 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:39.456 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:39.456 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 105479824 kB' 'MemAvailable: 108731740 kB' 'Buffers: 2704 kB' 'Cached: 14337472 kB' 'SwapCached: 0 kB' 'Active: 11384996 kB' 'Inactive: 3514408 kB' 'Active(anon): 10973352 kB' 'Inactive(anon): 0 kB' 'Active(file): 411644 kB' 'Inactive(file): 3514408 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 562020 kB' 'Mapped: 195116 kB' 'Shmem: 10414124 kB' 'KReclaimable: 300392 kB' 'Slab: 1135468 kB' 'SReclaimable: 300392 kB' 'SUnreclaim: 835076 kB' 'KernelStack: 27248 kB' 'PageTables: 8872 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 12418772 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235492 kB' 'VmallocChunk: 0 kB' 'Percpu: 111168 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4150644 kB' 'DirectMap2M: 29083648 kB' 'DirectMap1G: 102760448 kB'
00:04:39.456 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:39.456 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue
00:04:39.456 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:39.456 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:39.456 [... identical non-matching checks of each remaining /proc/meminfo field (MemFree through HardwareCorrupted) against AnonHugePages, each followed by continue / IFS=': ' / read -r var val _ ...]
00:04:39.457 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:39.457 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:04:39.457 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:04:39.457 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0
00:04:39.457 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:39.457 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:39.457 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:39.457 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:39.457 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:39.457 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:39.457 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:39.457 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:39.457 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:39.457 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:39.457 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.457 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.457 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 105479456 kB' 'MemAvailable: 108731372 kB' 'Buffers: 2704 kB' 'Cached: 14337472 kB' 'SwapCached: 0 kB' 'Active: 11384728 kB' 'Inactive: 3514408 kB' 'Active(anon): 10973084 kB' 'Inactive(anon): 0 kB' 'Active(file): 411644 kB' 'Inactive(file): 3514408 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 561664 kB' 'Mapped: 195088 kB' 'Shmem: 10414124 kB' 'KReclaimable: 300392 kB' 'Slab: 1135468 kB' 'SReclaimable: 300392 kB' 'SUnreclaim: 835076 kB' 'KernelStack: 27232 kB' 'PageTables: 8464 kB' 'SecPageTables: 0 kB' 
'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 12418924 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235444 kB' 'VmallocChunk: 0 kB' 'Percpu: 111168 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4150644 kB' 'DirectMap2M: 29083648 kB' 'DirectMap1G: 102760448 kB' 00:04:39.457 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.457 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.457 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.457 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.457 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.457 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.457 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.457 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.457 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.457 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.457 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.457 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.457 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.457 
16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.457 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.457 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.457 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.457 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.457 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.457 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.457 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.458 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.458 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.458 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.458 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.458 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.458 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.458 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.458 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.458 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.458 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.458 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.458 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ 
Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.458 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.458 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.458 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.458 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.458 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.458 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.458 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.458 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.458 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.458 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.458 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.458 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.458 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.458 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.458 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.458 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.458 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.458 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.458 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:39.458 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.458 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.458 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.458 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.458 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.458 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.458 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.458 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.458 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.458 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.458 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.458 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.458 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.458 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.458 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.458 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.458 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.458 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.458 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.458 16:13:06 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.458 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.458 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.458 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.458 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.458 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.458 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.458 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.458 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.458 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.458 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.458 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.458 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.458 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.458 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.458 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.458 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.458 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.458 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.458 16:13:06 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:39.458 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.458 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.458 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.458 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.458 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.458 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.458 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.458 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.458 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.458 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.458 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.458 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.458 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.458 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.458 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.458 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.458 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.458 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.458 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # 
continue 00:04:39.458 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.458 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.458 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.458 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.458 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.458 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.458 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.458 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.458 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.458 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.458 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.458 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.458 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.458 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.458 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.458 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.458 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.458 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.458 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.458 
16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.458 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.458 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.458 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.458 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.458 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.458 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.458 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.458 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.458 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.458 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.458 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.458 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.458 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.459 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.459 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.459 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.459 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.459 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.459 16:13:06 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.459 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.459 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.459 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.459 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.459 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.459 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.459 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.459 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.459 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.459 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.459 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.459 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.459 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.459 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.459 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.459 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.459 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.459 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.459 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:04:39.459 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.459 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.459 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.459 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.459 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.459 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.459 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.459 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.459 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.459 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.459 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.459 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.459 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.459 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.459 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.459 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.459 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.459 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.459 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:39.459 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.459 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.459 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.459 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.459 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.459 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.459 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.459 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.459 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.459 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.459 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.459 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.459 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.459 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.459 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.459 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.459 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.459 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.459 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 
00:04:39.459 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:39.459 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:39.459 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:39.459 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:39.459 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:39.459 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:39.459 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:39.459 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:39.459 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:39.459 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:39.459 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:39.459 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:39.459 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.459 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.459 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 105478320 kB' 'MemAvailable: 108730236 kB' 'Buffers: 2704 kB' 'Cached: 14337488 kB' 'SwapCached: 0 kB' 'Active: 11383404 kB' 'Inactive: 3514408 kB' 'Active(anon): 10971760 kB' 'Inactive(anon): 0 kB' 'Active(file): 411644 kB' 'Inactive(file): 3514408 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 560800 kB' 'Mapped: 195012 kB' 
'Shmem: 10414140 kB' 'KReclaimable: 300392 kB' 'Slab: 1135488 kB' 'SReclaimable: 300392 kB' 'SUnreclaim: 835096 kB' 'KernelStack: 27168 kB' 'PageTables: 8464 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 12418948 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235476 kB' 'VmallocChunk: 0 kB' 'Percpu: 111168 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4150644 kB' 'DirectMap2M: 29083648 kB' 'DirectMap1G: 102760448 kB' 00:04:39.459 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.459 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.459 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.459 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.459 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.459 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.459 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.459 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.459 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.459 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.459 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.459 16:13:06 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:39.459 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.459 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.459 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.459 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.459 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.459 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.459 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.459 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.459 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.459 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.459 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.459 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.459 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.459 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.459 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.459 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.459 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.459 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.460 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:39.460 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.460 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.460 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.460 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.460 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.460 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.460 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.460 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.460 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.460 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.460 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.460 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.460 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.460 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.460 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.460 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.460 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.460 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.460 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.460 
16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.460 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.460 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.460 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.460 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.460 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.460 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.460 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.460 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.460 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.460 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.460 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.460 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.460 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.460 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.460 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.460 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.460 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.460 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.460 16:13:06 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.460 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.460 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.460 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.460 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.460 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.460 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.460 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.460 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.460 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.460 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.460 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.460 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.460 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.460 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.460 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.460 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.460 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.460 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.460 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.460 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.460 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.460 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.460 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.460 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.460 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.460 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.460 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.460 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.460 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.460 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.460 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.460 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.460 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.460 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.460 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.460 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.460 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.460 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.460 16:13:06 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.460 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.460 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.460 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.460 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.460 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.460 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.460 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.460 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.460 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.460 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.460 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.460 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.460 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.460 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.460 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.460 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.460 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.460 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.460 16:13:06 setup.sh.hugepages.even_2G_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:04:39.461 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.461 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.461 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.461 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.461 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.461 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.461 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.461 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.461 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.461 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.461 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.461 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.461 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.461 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.461 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.461 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.461 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.461 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.461 16:13:06 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:39.461 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.461 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.461 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.461 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.461 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.461 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.461 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.461 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.461 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.461 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.461 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.461 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.461 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.461 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.461 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.461 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.461 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.461 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.461 16:13:06 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # continue 00:04:39.461 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.461 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.461 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.461 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.461 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.461 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.461 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.461 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.461 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.461 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.461 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.461 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.461 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.461 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.461 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.461 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.461 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.461 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.461 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.461 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.461 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.461 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.461 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.461 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.461 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.461 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.461 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.461 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.461 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.461 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.461 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.461 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.461 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.461 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.461 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.461 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:39.461 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:39.461 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:39.461 16:13:06 
setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:39.461 nr_hugepages=1024 00:04:39.461 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:39.461 resv_hugepages=0 00:04:39.461 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:39.461 surplus_hugepages=0 00:04:39.461 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:39.461 anon_hugepages=0 00:04:39.461 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:39.461 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:39.461 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:39.461 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:39.461 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:39.461 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:39.461 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:39.461 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:39.461 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:39.461 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:39.461 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:39.461 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:39.461 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.461 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:04:39.461 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 105478120 kB' 'MemAvailable: 108730036 kB' 'Buffers: 2704 kB' 'Cached: 14337516 kB' 'SwapCached: 0 kB' 'Active: 11383348 kB' 'Inactive: 3514408 kB' 'Active(anon): 10971704 kB' 'Inactive(anon): 0 kB' 'Active(file): 411644 kB' 'Inactive(file): 3514408 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 560752 kB' 'Mapped: 195012 kB' 'Shmem: 10414168 kB' 'KReclaimable: 300392 kB' 'Slab: 1135488 kB' 'SReclaimable: 300392 kB' 'SUnreclaim: 835096 kB' 'KernelStack: 27184 kB' 'PageTables: 8364 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 12417368 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235476 kB' 'VmallocChunk: 0 kB' 'Percpu: 111168 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4150644 kB' 'DirectMap2M: 29083648 kB' 'DirectMap1G: 102760448 kB' 00:04:39.461 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.461 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.461 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.461 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.461 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.461 16:13:06 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # continue 00:04:39.461 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.461 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.461 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.461 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.462 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.462 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.462 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.462 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.462 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.462 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.462 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.462 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.462 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.462 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.462 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.462 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.462 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.462 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.462 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.462 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.462 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.462 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.462 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.462 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.462 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.462 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.462 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.462 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.462 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.462 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.462 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.462 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.462 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.462 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.462 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.462 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.462 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.462 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.462 
16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.462 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.462 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.462 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.462 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.462 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.462 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.462 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.462 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.462 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.462 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.462 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.462 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.462 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.462 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.462 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.462 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.462 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.462 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.462 16:13:06 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.462 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.462 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.462 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.462 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.462 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.462 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.462 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.462 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.462 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.462 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.462 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.462 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.462 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.462 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.462 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.462 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.462 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.462 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.462 16:13:06 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.462 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.462 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.462 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.462 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.462 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.462 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.462 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.462 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.462 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.462 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.462 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.462 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.462 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.462 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.462 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.462 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.462 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.462 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.462 16:13:06 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.462 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.462 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.462 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.462 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.462 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.462 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.462 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.462 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.462 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.462 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.462 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.462 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.462 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.462 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.462 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.462 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.462 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.462 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.462 16:13:06 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:39.462 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue
[... identical IFS=': ' / read -r var val _ / [[ key == HugePages_Total ]] / continue iterations for the remaining /proc/meminfo keys (Bounce through Unaccepted) elided ...]
00:04:39.463 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:39.463 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024
00:04:39.463 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:04:39.463 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:39.463 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:04:39.463 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node
00:04:39.463 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:39.463 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:04:39.463 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:39.463 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:04:39.463 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:04:39.463 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:39.463 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:39.463 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:39.463 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:39.463 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:39.463 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0
00:04:39.463 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:04:39.463 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:39.463 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:39.463 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:39.463 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:39.463 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:39.463 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:39.463 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:39.463 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:39.463 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 50496032 kB' 'MemUsed: 15162976 kB' 'SwapCached: 0 kB' 'Active: 8231932 kB' 'Inactive: 3323324 kB' 'Active(anon): 8082820 kB' 'Inactive(anon): 0 kB' 'Active(file): 149112 kB' 'Inactive(file): 3323324 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 11318724 kB' 'Mapped: 100484 kB' 'AnonPages: 239644 kB' 'Shmem: 7846288 kB' 'KernelStack: 12360 kB' 'PageTables: 4184 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 182912 kB' 'Slab: 675332 kB' 'SReclaimable: 182912 kB' 'SUnreclaim: 492420 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[... identical per-key scan for HugePages_Surp over the node0 fields (MemTotal through HugePages_Free) elided ...]
00:04:39.464 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:39.465 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:04:39.465 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:04:39.465 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:39.465 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:39.465 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:39.465 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:04:39.465 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:39.465 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1
00:04:39.465 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:04:39.465 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:39.465 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:39.465 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:04:39.465 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:04:39.465 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:39.465 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:39.465 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:39.465 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:39.465 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60679840 kB' 'MemFree: 54983564 kB' 'MemUsed: 5696276 kB' 'SwapCached: 0 kB' 'Active: 3151400 kB' 'Inactive: 191084 kB' 'Active(anon): 2888868 kB' 'Inactive(anon): 0 kB' 'Active(file): 262532 kB' 'Inactive(file): 191084 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3021532 kB' 'Mapped: 94528 kB' 'AnonPages: 321104 kB' 'Shmem: 2567916 kB' 'KernelStack: 14696 kB' 'PageTables: 3904 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 117480 kB' 'Slab: 460060 kB' 'SReclaimable: 117480 kB' 'SUnreclaim: 342580 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[... identical per-key scan for HugePages_Surp over the node1 fields (MemTotal through ShmemPmdMapped) elided ...]
00:04:39.466 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32
-- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.466 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.466 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.466 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.466 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.466 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.466 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.466 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.466 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.466 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.466 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.466 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.466 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.466 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.466 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.466 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.466 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.466 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.466 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.466 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:04:39.466 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.466 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:39.466 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:39.466 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:39.466 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:39.466 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:39.466 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:39.466 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:39.466 node0=512 expecting 512 00:04:39.466 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:39.466 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:39.466 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:39.466 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:04:39.466 node1=512 expecting 512 00:04:39.466 16:13:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:04:39.466 00:04:39.466 real 0m3.791s 00:04:39.466 user 0m1.533s 00:04:39.466 sys 0m2.323s 00:04:39.466 16:13:06 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:04:39.466 16:13:06 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:39.466 ************************************ 00:04:39.466 END TEST even_2G_alloc 00:04:39.466 ************************************ 00:04:39.727 
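The repeated `IFS=': '` / `read -r var val _` / `continue` trace above is setup/common.sh's get_meminfo helper scanning "Key: value kB" pairs one at a time until the requested key matches, then echoing its value. A minimal sketch of that same pattern (the function name and file argument here are illustrative, not the actual SPDK helper):

```shell
#!/usr/bin/env bash
# Sketch of a get_meminfo-style lookup: read "Key: value kB" pairs
# with IFS=': ' (split on colon and whitespace) and print the value
# for the requested key, mirroring the read -r var val _ loop in
# the trace. Names here are illustrative, not SPDK's actual code.
get_meminfo_value() {
    local want=$1 file=${2:-/proc/meminfo} var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$want" ]] || continue   # skip non-matching keys
        echo "$val"
        return 0
    done < "$file"
    return 1   # key not present
}
```

Called as e.g. `get_meminfo_value HugePages_Surp`, it returns just the numeric field, which is why the trace ends each lookup with `echo 0` / `return 0`.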
16:13:06 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:04:39.727 16:13:06 setup.sh.hugepages -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:04:39.727 16:13:06 setup.sh.hugepages -- common/autotest_common.sh@1106 -- # xtrace_disable 00:04:39.727 16:13:06 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:39.727 ************************************ 00:04:39.727 START TEST odd_alloc 00:04:39.727 ************************************ 00:04:39.727 16:13:06 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1124 -- # odd_alloc 00:04:39.727 16:13:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:04:39.727 16:13:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:04:39.727 16:13:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:39.727 16:13:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:39.727 16:13:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:04:39.727 16:13:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:39.727 16:13:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:39.727 16:13:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:39.727 16:13:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:04:39.727 16:13:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:39.727 16:13:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:39.728 16:13:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:39.728 16:13:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:39.728 16:13:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 
> 0 )) 00:04:39.728 16:13:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:39.728 16:13:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:04:39.728 16:13:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 513 00:04:39.728 16:13:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 1 00:04:39.728 16:13:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:39.728 16:13:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513 00:04:39.728 16:13:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:39.728 16:13:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:39.728 16:13:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:39.728 16:13:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:04:39.728 16:13:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:04:39.728 16:13:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:04:39.728 16:13:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:39.728 16:13:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:42.276 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:04:42.276 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:04:42.538 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:04:42.538 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:04:42.538 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:04:42.538 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:04:42.538 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:04:42.538 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 
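In the odd_alloc setup above, get_test_nr_hugepages_per_node spreads 1025 pages across two NUMA nodes; the trace assigns nodes_test[1]=512 and then nodes_test[0]=513, so the per-node counts differ by at most one. A hedged sketch of that split-with-remainder logic (function and variable names are illustrative, not the actual setup/hugepages.sh code):

```shell
#!/usr/bin/env bash
# Illustrative sketch (not the actual setup/hugepages.sh code) of
# distributing a hugepage total across NUMA nodes so that an odd
# total like 1025 splits into 513/512, as seen in the trace above.
split_hugepages() {
    local total=$1 nodes=$2 i base rem
    base=$((total / nodes))   # every node gets at least this many
    rem=$((total % nodes))    # leftover pages, one extra each
    for ((i = 0; i < nodes; i++)); do
        if ((i < rem)); then
            echo "node$i=$((base + 1))"   # absorbs a remainder page
        else
            echo "node$i=$base"
        fi
    done
}
```

With `split_hugepages 1025 2` this yields node0=513 and node1=512, matching the 513/512 assignment in the trace.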
00:04:42.538 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:04:42.538 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:04:42.538 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:04:42.538 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:04:42.538 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:04:42.538 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:04:42.538 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:04:42.538 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:04:42.538 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:04:42.805 16:13:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:04:42.805 16:13:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:04:42.805 16:13:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:42.805 16:13:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:42.805 16:13:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:42.805 16:13:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:42.805 16:13:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:42.805 16:13:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:42.805 16:13:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:42.805 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:42.805 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:42.805 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:42.805 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:42.805 16:13:09 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:42.806 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:42.806 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:42.806 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:42.806 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:42.806 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.806 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.806 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 105493568 kB' 'MemAvailable: 108745484 kB' 'Buffers: 2704 kB' 'Cached: 14337648 kB' 'SwapCached: 0 kB' 'Active: 11385364 kB' 'Inactive: 3514408 kB' 'Active(anon): 10973720 kB' 'Inactive(anon): 0 kB' 'Active(file): 411644 kB' 'Inactive(file): 3514408 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 562740 kB' 'Mapped: 195104 kB' 'Shmem: 10414300 kB' 'KReclaimable: 300392 kB' 'Slab: 1135944 kB' 'SReclaimable: 300392 kB' 'SUnreclaim: 835552 kB' 'KernelStack: 27264 kB' 'PageTables: 8708 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70508428 kB' 'Committed_AS: 12420412 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235460 kB' 'VmallocChunk: 0 kB' 'Percpu: 111168 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 4150644 kB' 'DirectMap2M: 29083648 kB' 
'DirectMap1G: 102760448 kB' 00:04:42.806 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31-32 -- # (IFS=': '; read -r var val _) scanned keys MemTotal MemFree MemAvailable Buffers Cached SwapCached Active Inactive Active(anon) Inactive(anon) Active(file) Inactive(file) Unevictable Mlocked SwapTotal SwapFree Zswap Zswapped Dirty Writeback AnonPages Mapped Shmem KReclaimable Slab SReclaimable SUnreclaim KernelStack PageTables SecPageTables NFS_Unstable Bounce WritebackTmp CommitLimit Committed_AS VmallocTotal VmallocUsed VmallocChunk Percpu HardwareCorrupted -- # continue (no match for AnonHugePages) 00:04:42.807 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.807 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:42.807 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:42.807 16:13:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:42.807 16:13:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:42.807 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:42.807 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:42.807 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:42.807 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:42.807 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:42.807 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:42.807 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:42.807 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:42.807 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:42.807 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.807 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.807 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 105492624 kB' 'MemAvailable: 108744540 kB' 'Buffers: 2704 kB' 'Cached: 14337648 kB' 'SwapCached: 0 kB' 'Active: 11385560 kB' 'Inactive: 3514408 kB' 'Active(anon): 10973916 kB' 'Inactive(anon): 0 kB' 'Active(file): 411644 kB' 'Inactive(file): 3514408 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 
'Writeback: 0 kB' 'AnonPages: 562876 kB' 'Mapped: 195104 kB' 'Shmem: 10414300 kB' 'KReclaimable: 300392 kB' 'Slab: 1135916 kB' 'SReclaimable: 300392 kB' 'SUnreclaim: 835524 kB' 'KernelStack: 27264 kB' 'PageTables: 8424 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70508428 kB' 'Committed_AS: 12420432 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235476 kB' 'VmallocChunk: 0 kB' 'Percpu: 111168 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 4150644 kB' 'DirectMap2M: 29083648 kB' 'DirectMap1G: 102760448 kB' 00:04:42.807 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.807 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.807 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.807 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.807 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.807 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.807 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.807 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.807 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.808 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.808 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.808 16:13:09 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.808 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.808 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.808 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.808 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.808 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.808 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.808 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.808 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.808 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.808 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.808 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.808 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.808 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.808 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.808 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.808 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.808 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.808 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.808 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.808 16:13:09 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.808 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.808 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.808 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.808 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.808 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.808 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.808 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.808 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.808 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.808 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.808 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.808 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.808 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.808 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.808 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.808 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.808 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.808 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.808 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.808 16:13:09 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.808 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.808 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.808 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.808 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.808 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.808 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.808 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.808 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.808 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.808 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.808 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.808 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.808 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.808 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.808 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.808 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.808 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.808 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.808 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.808 16:13:09 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.808 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.808 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.808 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.808 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.808 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.808 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.808 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.808 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.808 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.808 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.808 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.808 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.808 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.808 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.808 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.808 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.808 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.808 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.808 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.808 16:13:09 setup.sh.hugepages.odd_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:04:42.808 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.808 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.808 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.808 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.808 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.808 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.808 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.808 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.808 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.808 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.808 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.808 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.808 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.808 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.808 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.808 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.808 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.808 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.808 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.809 16:13:09 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:42.809 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.809 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.809 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.809 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.809 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.809 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.809 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.809 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.809 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.809 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.809 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.809 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.809 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.809 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.809 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.809 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.809 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.809 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.809 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.809 16:13:09 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:42.809 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.809 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.809 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.809 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.809 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.809 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.809 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.809 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.809 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.809 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.809 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.809 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.809 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.809 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.809 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.809 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.809 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.809 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.809 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.809 16:13:09 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:42.809 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.809 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.809 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.809 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.809 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.809 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.809 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.809 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.809 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.809 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.809 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.809 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.809 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.809 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.809 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.809 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.809 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.809 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.809 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.809 16:13:09 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:42.809 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.809 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.809 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.809 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.809 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.809 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.809 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.809 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.809 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.809 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.809 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.809 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.809 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.809 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.809 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.809 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.809 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.809 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.809 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.809 16:13:09 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:42.809 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.809 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.809 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.809 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.809 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.809 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.809 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.809 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.809 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.809 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.809 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.809 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.809 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.809 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:42.809 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:42.809 16:13:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:42.809 16:13:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:42.809 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:42.809 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:42.809 16:13:09 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@19 -- # local var val 00:04:42.809 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:42.809 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:42.809 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:42.809 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:42.809 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:42.809 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:42.810 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 105493824 kB' 'MemAvailable: 108745740 kB' 'Buffers: 2704 kB' 'Cached: 14337652 kB' 'SwapCached: 0 kB' 'Active: 11384056 kB' 'Inactive: 3514408 kB' 'Active(anon): 10972412 kB' 'Inactive(anon): 0 kB' 'Active(file): 411644 kB' 'Inactive(file): 3514408 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 561840 kB' 'Mapped: 195020 kB' 'Shmem: 10414304 kB' 'KReclaimable: 300392 kB' 'Slab: 1135908 kB' 'SReclaimable: 300392 kB' 'SUnreclaim: 835516 kB' 'KernelStack: 27184 kB' 'PageTables: 8300 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70508428 kB' 'Committed_AS: 12420452 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235492 kB' 'VmallocChunk: 0 kB' 'Percpu: 111168 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 4150644 kB' 'DirectMap2M: 29083648 kB' 
'DirectMap1G: 102760448 kB' 00:04:42.810 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.810 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.810 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.810 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.810 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.810 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.810 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.810 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.810 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.810 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.810 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.810 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.810 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.810 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.810 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.810 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.810 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.810 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.810 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.810 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # 
continue 00:04:42.810 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.810 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.810 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.810 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.810 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.810 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.810 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.810 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.810 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.810 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.810 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.810 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.810 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.810 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.810 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.810 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.810 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.810 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.810 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.810 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
00:04:42.810 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.810 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.810 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.810 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.810 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.810 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.810 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.810 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.810 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.810 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.810 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.810 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.810 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.810 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.810 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.810 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.810 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.810 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.810 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.810 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
00:04:42.810 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.810 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.810 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.810 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.810 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.810 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.810 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.810 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.810 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.810 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.810 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.810 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.810 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.810 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.810 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.810 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.810 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.810 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.810 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.810 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.810 16:13:09 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.810 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.810 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.810 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.810 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.810 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.810 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.810 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.810 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.810 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.810 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.810 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.810 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.810 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.811 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.811 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.811 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.811 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.811 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.811 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.811 16:13:09 setup.sh.hugepages.odd_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:04:42.811 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.811 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.811 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.811 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.811 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.811 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.811 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.811 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.812 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.812 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.812 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.812 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.812 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.812 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.812 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.812 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.812 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.812 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.812 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.812 16:13:09 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:42.812 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.812 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.812 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.812 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.812 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.812 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.812 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.812 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.812 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.812 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.812 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.812 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.812 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.812 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.812 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.812 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.813 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.813 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.813 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.813 16:13:09 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:42.813 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.813 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.813 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.813 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.813 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.813 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.813 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.813 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.813 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.813 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.813 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.813 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.813 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.813 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.813 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.813 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.813 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.813 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.813 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.813 16:13:09 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:42.813 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.813 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.813 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.813 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.813 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.813 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.813 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.813 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.813 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.813 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.813 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.813 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.813 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.813 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.813 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.813 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.813 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.813 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.813 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.813 16:13:09 setup.sh.hugepages.odd_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:04:42.813 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.813 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.813 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.813 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.813 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.813 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.813 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.813 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.813 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.813 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.813 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.813 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.813 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.813 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.813 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.813 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.813 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.813 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.813 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.813 16:13:09 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:42.813 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.813 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.813 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:42.813 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:42.813 16:13:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:42.813 16:13:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:04:42.813 nr_hugepages=1025 00:04:42.813 16:13:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:42.813 resv_hugepages=0 00:04:42.813 16:13:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:42.813 surplus_hugepages=0 00:04:42.813 16:13:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:42.813 anon_hugepages=0 00:04:42.813 16:13:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:42.813 16:13:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:04:42.813 16:13:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:42.813 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:42.813 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:42.813 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:42.813 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:42.813 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:42.813 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e 
/sys/devices/system/node/node/meminfo ]] 00:04:42.813 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:42.813 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:42.813 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:42.813 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.813 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.813 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 105493540 kB' 'MemAvailable: 108745456 kB' 'Buffers: 2704 kB' 'Cached: 14337688 kB' 'SwapCached: 0 kB' 'Active: 11384748 kB' 'Inactive: 3514408 kB' 'Active(anon): 10973104 kB' 'Inactive(anon): 0 kB' 'Active(file): 411644 kB' 'Inactive(file): 3514408 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 561972 kB' 'Mapped: 195028 kB' 'Shmem: 10414340 kB' 'KReclaimable: 300392 kB' 'Slab: 1135908 kB' 'SReclaimable: 300392 kB' 'SUnreclaim: 835516 kB' 'KernelStack: 27200 kB' 'PageTables: 8588 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70508428 kB' 'Committed_AS: 12420472 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235508 kB' 'VmallocChunk: 0 kB' 'Percpu: 111168 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 4150644 kB' 'DirectMap2M: 29083648 kB' 'DirectMap1G: 102760448 kB' 00:04:42.813 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.813 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.813 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.813 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.813 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.813 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.813 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.813 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.814 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.814 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.814 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.814 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.814 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.814 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.814 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.814 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.814 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.814 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.814 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.814 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.814 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.814 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.814 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.814 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.814 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.814 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.814 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.814 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.814 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.814 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.814 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.814 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.814 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.814 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.814 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.814 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.814 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.814 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.814 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.814 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.814 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.814 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.814 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.814 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.814 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.814 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.814 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.814 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.814 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.814 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.814 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.814 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.814 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.814 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.814 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.814 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.814 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.814 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.814 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.814 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.814 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.814 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.814 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.814 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.814 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.814 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.814 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.814 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.814 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.814 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.814 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.814 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.814 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.814 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.814 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.814 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.814 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.814 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.814 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.814 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.814 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.814 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.814 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.814 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.814 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.814 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.814 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.814 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.814 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.814 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.814 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.814 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.814 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.814 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.814 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.814 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.814 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.814 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.814 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.814 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.814 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.814 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.814 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.814 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.814 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.814 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.814 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.814 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.814 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.815 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.815 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.815 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.815 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.815 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.815 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.815 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.815 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.815 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.815 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.815 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.815 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable 
== \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.815 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.815 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.815 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.815 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.815 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.815 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.815 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.815 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.815 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.815 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.815 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.815 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.815 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.815 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.815 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.815 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.815 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.815 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.815 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.815 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal 
== \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.815 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.815 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.815 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.815 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.815 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.815 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.815 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.815 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.815 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.815 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.815 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.815 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.815 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.815 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.815 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.815 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.815 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.815 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.815 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.815 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ 
AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.815 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.815 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.815 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.815 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.815 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.815 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.815 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.815 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.815 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.815 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.815 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.815 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.815 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.815 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.815 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.815 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.815 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.815 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.815 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.815 16:13:09 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.815 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.815 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.815 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.815 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.815 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.815 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.815 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.815 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.815 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.815 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.815 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.815 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.815 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:04:42.815 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:42.815 16:13:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:42.815 16:13:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:42.815 16:13:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:04:42.815 16:13:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:42.815 16:13:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 
00:04:42.815 16:13:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:42.815 16:13:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:04:42.815 16:13:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:42.815 16:13:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:42.815 16:13:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:42.815 16:13:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:42.815 16:13:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:42.815 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:42.815 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:04:42.815 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:42.815 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:42.815 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:42.815 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:42.815 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:42.815 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:42.815 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:42.815 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.815 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.816 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 
kB' 'MemFree: 50505520 kB' 'MemUsed: 15153488 kB' 'SwapCached: 0 kB' 'Active: 8232896 kB' 'Inactive: 3323324 kB' 'Active(anon): 8083784 kB' 'Inactive(anon): 0 kB' 'Active(file): 149112 kB' 'Inactive(file): 3323324 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 11318800 kB' 'Mapped: 100500 kB' 'AnonPages: 240584 kB' 'Shmem: 7846364 kB' 'KernelStack: 12360 kB' 'PageTables: 4372 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 182912 kB' 'Slab: 676028 kB' 'SReclaimable: 182912 kB' 'SUnreclaim: 493116 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:42.816 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.816 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.816 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.816 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.816 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.816 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.816 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.816 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.816 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.816 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.816 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.816 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.816 16:13:09 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.816 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.816 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.816 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.816 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.816 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.816 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.816 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.816 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.816 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.816 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.816 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.816 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.816 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.816 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.816 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.816 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.816 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.816 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.816 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.816 16:13:09 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.816 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.816 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.816 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.816 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.816 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.816 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.816 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.816 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.816 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.816 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.816 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.816 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.816 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.816 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.816 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.816 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.816 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.816 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.816 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.816 16:13:09 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.816 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.816 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.816 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.816 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.816 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.816 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.816 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.816 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.816 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.816 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.816 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.816 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.816 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.816 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.816 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.816 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.816 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.816 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.816 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.816 16:13:09 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.816 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.816 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.816 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.816 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.816 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.816 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.816 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.816 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.816 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.816 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.816 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.816 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.816 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.816 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.816 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.816 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.816 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.816 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.816 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.816 16:13:09 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.816 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.816 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.816 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.816 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.816 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.816 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.816 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.816 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.816 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.816 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.817 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.817 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.817 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.817 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.817 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.817 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.817 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.817 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.817 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.817 16:13:09 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.817 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.817 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.817 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.817 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.817 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.817 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.817 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.817 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.817 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.817 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.817 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.817 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.817 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.817 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.817 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.817 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.817 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.817 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.817 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.817 
16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.817 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.817 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.817 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.817 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.817 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.817 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.817 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.817 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.817 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.817 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.817 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.817 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.817 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:42.817 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:42.817 16:13:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:42.817 16:13:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:42.817 16:13:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:42.817 16:13:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:42.817 16:13:09 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@17 -- # local get=HugePages_Surp 00:04:42.817 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1 00:04:42.817 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:42.817 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:43.081 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:43.081 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:43.081 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:43.081 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:43.081 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:43.081 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.082 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60679840 kB' 'MemFree: 54987420 kB' 'MemUsed: 5692420 kB' 'SwapCached: 0 kB' 'Active: 3152024 kB' 'Inactive: 191084 kB' 'Active(anon): 2889492 kB' 'Inactive(anon): 0 kB' 'Active(file): 262532 kB' 'Inactive(file): 191084 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3021612 kB' 'Mapped: 94528 kB' 'AnonPages: 321556 kB' 'Shmem: 2567996 kB' 'KernelStack: 14840 kB' 'PageTables: 4072 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 117480 kB' 'Slab: 459880 kB' 'SReclaimable: 117480 kB' 'SUnreclaim: 342400 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 00:04:43.082 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.082 16:13:09 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.082 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.082 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.082 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.082 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.082 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.082 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.082 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.082 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.082 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.082 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.082 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.082 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.082 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.082 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.082 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.082 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.082 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.082 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.082 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.082 16:13:09 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.082 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.082 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.082 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.082 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.082 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.082 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.082 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.082 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.082 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.082 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.082 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.082 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.082 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.082 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.082 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.082 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.082 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.082 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.082 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.082 16:13:09 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.082 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.082 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.082 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.082 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.082 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.082 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.082 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.082 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.082 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.082 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.082 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.082 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.082 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.082 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.082 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.082 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.082 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.082 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.082 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.082 16:13:09 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.082 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.082 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.082 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.082 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.082 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.082 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.082 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.082 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.082 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.082 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.082 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.082 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.082 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.082 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.082 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.082 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.082 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.082 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.082 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.082 16:13:09 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.082 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.082 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.082 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.082 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.082 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.082 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.082 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.082 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.082 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.082 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.082 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.082 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.082 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.082 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.082 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.082 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.082 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.082 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.082 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.082 16:13:09 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.082 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.082 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.082 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.082 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.083 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.083 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.083 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.083 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.083 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.083 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.083 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.083 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.083 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.083 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.083 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.083 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.083 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.083 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.083 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.083 16:13:09 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.083 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.083 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.083 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.083 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.083 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.083 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.083 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.083 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.083 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.083 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.083 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.083 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.083 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.083 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.083 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.083 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.083 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.083 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.083 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.083 16:13:09 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.083 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.083 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.083 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.083 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.083 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:43.083 16:13:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:43.083 16:13:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:43.083 16:13:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:43.083 16:13:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:43.083 16:13:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:43.083 16:13:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513' 00:04:43.083 node0=512 expecting 513 00:04:43.083 16:13:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:43.083 16:13:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:43.083 16:13:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:43.083 16:13:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512' 00:04:43.083 node1=513 expecting 512 00:04:43.083 16:13:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:04:43.083 00:04:43.083 real 0m3.314s 00:04:43.083 user 0m1.191s 00:04:43.083 sys 0m2.039s 00:04:43.083 16:13:09 
setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:04:43.083 16:13:09 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:43.083 ************************************ 00:04:43.083 END TEST odd_alloc 00:04:43.083 ************************************ 00:04:43.083 16:13:09 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:04:43.083 16:13:09 setup.sh.hugepages -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:04:43.083 16:13:09 setup.sh.hugepages -- common/autotest_common.sh@1106 -- # xtrace_disable 00:04:43.083 16:13:09 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:43.083 ************************************ 00:04:43.083 START TEST custom_alloc 00:04:43.083 ************************************ 00:04:43.083 16:13:09 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1124 -- # custom_alloc 00:04:43.083 16:13:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:04:43.083 16:13:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:04:43.083 16:13:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:04:43.083 16:13:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:04:43.083 16:13:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:04:43.083 16:13:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:04:43.083 16:13:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:04:43.083 16:13:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:43.083 16:13:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:43.083 16:13:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:04:43.083 
16:13:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:43.083 16:13:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:43.083 16:13:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:43.083 16:13:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:43.083 16:13:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:43.083 16:13:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:43.083 16:13:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:43.083 16:13:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:43.083 16:13:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:43.083 16:13:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:43.083 16:13:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:04:43.083 16:13:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 256 00:04:43.083 16:13:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 1 00:04:43.083 16:13:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:43.083 16:13:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:04:43.083 16:13:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:43.083 16:13:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:43.083 16:13:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:43.083 16:13:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:04:43.083 16:13:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 
00:04:43.083 16:13:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152 00:04:43.083 16:13:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:04:43.083 16:13:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:43.083 16:13:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:43.083 16:13:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:43.083 16:13:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:43.083 16:13:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:43.083 16:13:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:43.083 16:13:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:43.083 16:13:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:43.083 16:13:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:43.083 16:13:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:43.083 16:13:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:43.084 16:13:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:04:43.084 16:13:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:43.084 16:13:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:04:43.084 16:13:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:04:43.084 16:13:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024 00:04:43.084 16:13:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 
00:04:43.084 16:13:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:04:43.084 16:13:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:04:43.084 16:13:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:04:43.084 16:13:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:04:43.084 16:13:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:04:43.084 16:13:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:04:43.084 16:13:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:43.084 16:13:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:43.084 16:13:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:43.084 16:13:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:43.084 16:13:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:43.084 16:13:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:43.084 16:13:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:43.084 16:13:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:04:43.084 16:13:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:43.084 16:13:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:04:43.084 16:13:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:43.084 16:13:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # 
nodes_test[_no_nodes]=1024 00:04:43.084 16:13:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:04:43.084 16:13:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:04:43.084 16:13:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:04:43.084 16:13:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:43.084 16:13:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:46.395 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:04:46.395 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:04:46.395 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:04:46.395 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:04:46.395 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:04:46.395 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:04:46.395 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:04:46.395 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:04:46.395 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:04:46.395 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:04:46.395 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:04:46.395 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:04:46.395 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:04:46.395 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:04:46.395 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:04:46.395 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:04:46.395 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:04:46.395 16:13:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=1536 00:04:46.395 16:13:13 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@188 -- # verify_nr_hugepages 00:04:46.395 16:13:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:04:46.395 16:13:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:46.395 16:13:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:46.395 16:13:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:46.395 16:13:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:46.395 16:13:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:46.395 16:13:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:46.395 16:13:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:46.395 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:46.395 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:46.395 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:46.395 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:46.395 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:46.395 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:46.395 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:46.395 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:46.395 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:46.395 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.395 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.395 
16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 104453656 kB' 'MemAvailable: 107705572 kB' 'Buffers: 2704 kB' 'Cached: 14337820 kB' 'SwapCached: 0 kB' 'Active: 11383456 kB' 'Inactive: 3514408 kB' 'Active(anon): 10971812 kB' 'Inactive(anon): 0 kB' 'Active(file): 411644 kB' 'Inactive(file): 3514408 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 560676 kB' 'Mapped: 195076 kB' 'Shmem: 10414472 kB' 'KReclaimable: 300392 kB' 'Slab: 1135992 kB' 'SReclaimable: 300392 kB' 'SUnreclaim: 835600 kB' 'KernelStack: 27040 kB' 'PageTables: 8276 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69985164 kB' 'Committed_AS: 12418480 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235364 kB' 'VmallocChunk: 0 kB' 'Percpu: 111168 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 4150644 kB' 'DirectMap2M: 29083648 kB' 'DirectMap1G: 102760448 kB' 00:04:46.395 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.395 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.395 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.395 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.395 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.395 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.395 
16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.395 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.395 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.395 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.395 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.395 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.395 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.395 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.395 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.395 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.396 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.396 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.396 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.396 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.396 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.396 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.396 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.396 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.396 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.396 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 
-- # continue 00:04:46.396 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.396 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.396 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.396 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.396 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.396 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.396 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.396 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.396 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.396 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.396 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.396 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.396 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.396 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.396 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.396 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.396 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.396 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.396 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.396 16:13:13 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.396 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.396 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.396 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.396 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.396 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.396 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.396 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.396 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.396 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.396 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.396 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.396 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.396 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.396 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.396 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.396 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.396 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.396 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.396 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s 
]] 00:04:46.396 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.396 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.396 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.396 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.396 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.396 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.396 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.396 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.396 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.396 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.396 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.396 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.396 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.396 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.396 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.396 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.396 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.396 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.396 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.396 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.396 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.396 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.396 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.396 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.396 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.396 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.396 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.396 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.396 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.396 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.396 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.396 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.396 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.396 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.396 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.396 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.396 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.396 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.396 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.396 16:13:13 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.396 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.396 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.396 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.396 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.396 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.396 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.397 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.397 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.397 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.397 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.397 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.397 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.397 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.397 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.397 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.397 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.397 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.397 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.397 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.397 
16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.397 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.397 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.397 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.397 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.397 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.397 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.397 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.397 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.397 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.397 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.397 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.397 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.397 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.397 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.397 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.397 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.397 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.397 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.397 16:13:13 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:46.397 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.397 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.397 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.397 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.397 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.397 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.397 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.397 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.397 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.397 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.397 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.397 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.397 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.397 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.397 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.397 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.397 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.397 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:46.397 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:46.397 
16:13:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:46.397 16:13:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:46.397 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:46.397 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:46.397 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:46.397 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:46.397 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:46.397 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:46.397 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:46.397 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:46.397 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:46.397 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.397 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.397 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 104453864 kB' 'MemAvailable: 107705780 kB' 'Buffers: 2704 kB' 'Cached: 14337828 kB' 'SwapCached: 0 kB' 'Active: 11384268 kB' 'Inactive: 3514408 kB' 'Active(anon): 10972624 kB' 'Inactive(anon): 0 kB' 'Active(file): 411644 kB' 'Inactive(file): 3514408 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 561444 kB' 'Mapped: 195052 kB' 'Shmem: 10414480 kB' 'KReclaimable: 300392 kB' 'Slab: 1135984 kB' 'SReclaimable: 300392 kB' 'SUnreclaim: 835592 kB' 
'KernelStack: 27088 kB' 'PageTables: 8372 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69985164 kB' 'Committed_AS: 12418500 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235332 kB' 'VmallocChunk: 0 kB' 'Percpu: 111168 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 4150644 kB' 'DirectMap2M: 29083648 kB' 'DirectMap1G: 102760448 kB' 00:04:46.397 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.397 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.397 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.397 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.397 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.397 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.397 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.397 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.397 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.397 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.397 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.397 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.397 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers 
== \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.397 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.397 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.397 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.397 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.397 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.397 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.397 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.397 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.397 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.397 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.397 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.397 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.397 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.397 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.397 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.397 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.397 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.397 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.397 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.397 16:13:13 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.397 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.397 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.397 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.397 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.397 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.397 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.397 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.397 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.397 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.397 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.397 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.398 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.398 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.398 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.398 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.398 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.398 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.398 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.398 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:04:46.398 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.398 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.398 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.398 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.398 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.398 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.398 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.398 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.398 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.398 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.398 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.398 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.398 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.398 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.398 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.398 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.398 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.398 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.398 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.398 16:13:13 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:46.398 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.398 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.398 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.398 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.398 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.398 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.398 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.398 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.398 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.398 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.398 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.398 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.398 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.398 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.398 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.398 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.398 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.398 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.398 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.398 16:13:13 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.398 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.398 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.398 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.398 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.398 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.398 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.398 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.398 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.398 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.398 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.398 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.398 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.398 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.398 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.398 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.398 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.398 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.398 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.398 16:13:13 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:46.398 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.398 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.398 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.398 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.398 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.398 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.398 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.398 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.398 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.398 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.398 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.398 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.398 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.398 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.398 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.398 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.398 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.398 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.398 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 
00:04:46.398 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.398 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.398 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.398 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.398 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.398 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.398 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.398 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.398 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.398 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.398 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.398 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.398 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.398 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.398 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.398 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.398 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.398 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.398 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.398 16:13:13 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.398 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.398 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.398 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.398 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.398 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.398 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.398 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.398 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.398 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.398 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.398 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.398 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.398 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.398 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.399 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.399 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.399 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.399 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.399 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ 
ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.399 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.399 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.399 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.399 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.399 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.399 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.399 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.399 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.399 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.399 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.399 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.399 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.399 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.399 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.399 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.399 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.399 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.399 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.399 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.399 16:13:13 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.399 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.399 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.399 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.399 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.399 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.399 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.399 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.399 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.399 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.399 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.399 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.399 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.399 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.399 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.399 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.399 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.399 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:46.399 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:46.399 16:13:13 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@99 -- # surp=0 00:04:46.399 16:13:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:46.399 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:46.399 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:46.399 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:46.399 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:46.399 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:46.399 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:46.399 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:46.399 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:46.399 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:46.399 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.399 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.399 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 104454456 kB' 'MemAvailable: 107706372 kB' 'Buffers: 2704 kB' 'Cached: 14337840 kB' 'SwapCached: 0 kB' 'Active: 11383592 kB' 'Inactive: 3514408 kB' 'Active(anon): 10971948 kB' 'Inactive(anon): 0 kB' 'Active(file): 411644 kB' 'Inactive(file): 3514408 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 560724 kB' 'Mapped: 195052 kB' 'Shmem: 10414492 kB' 'KReclaimable: 300392 kB' 'Slab: 1136024 kB' 'SReclaimable: 300392 kB' 'SUnreclaim: 835632 kB' 'KernelStack: 27072 kB' 'PageTables: 8344 kB' 
'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69985164 kB' 'Committed_AS: 12418520 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235332 kB' 'VmallocChunk: 0 kB' 'Percpu: 111168 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 4150644 kB' 'DirectMap2M: 29083648 kB' 'DirectMap1G: 102760448 kB' 00:04:46.399 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.399 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.399 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.399 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.399 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.399 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.399 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.399 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.399 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.399 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.399 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.399 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.399 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:04:46.399 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.399 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.399 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.399 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.399 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.399 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.399 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.399 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.399 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.399 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.399 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.399 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.399 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.399 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.399 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.399 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.399 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.399 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.399 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.399 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ 
Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.399 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.399 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.399 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.399 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.399 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.399 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.399 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.399 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.399 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.399 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.399 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.399 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.399 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.399 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.399 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.399 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.399 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.399 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.399 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.400 16:13:13 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.400 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.400 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.400 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.400 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.400 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.400 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.400 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.400 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.400 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.400 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.400 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.400 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.400 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.400 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.400 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.400 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.400 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.400 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.400 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:04:46.400 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.400 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.400 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.400 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.400 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.400 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.400 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.400 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.400 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.400 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.400 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.400 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.400 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.400 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.400 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.400 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.400 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.400 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.400 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.400 16:13:13 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:46.400 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.400 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.400 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.400 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.400 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.400 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.400 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.400 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.400 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.400 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.400 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.400 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.400 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.400 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.400 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.400 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.400 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.400 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.400 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.400 
16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.400 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.400 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.400 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.400 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.400 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.400 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.400 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.400 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.400 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.400 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.400 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.400 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.400 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.400 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.400 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.400 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.400 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.400 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.400 16:13:13 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:04:46.400 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.400 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.400 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.400 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.400 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.400 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.400 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.400 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.400 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.400 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.400 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.400 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.400 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.400 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.400 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.400 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.400 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.400 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.400 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 
00:04:46.400 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.400 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.400 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.401 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.401 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.401 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.401 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.401 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.401 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.401 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.401 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.401 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.401 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.401 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.401 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.401 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.401 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.401 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.401 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.401 16:13:13 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.401 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.401 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.401 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.401 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.401 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.401 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.401 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.401 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.401 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.401 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.401 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.401 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.401 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.401 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.401 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.401 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.401 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.401 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.401 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.401 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.401 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.401 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.401 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.401 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.401 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.401 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.401 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.401 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.401 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.401 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.401 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.401 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:46.401 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:46.401 16:13:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:46.401 16:13:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536 00:04:46.401 nr_hugepages=1536 00:04:46.401 16:13:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:46.401 resv_hugepages=0 00:04:46.401 16:13:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:46.401 surplus_hugepages=0 00:04:46.401 16:13:13 
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:46.401 anon_hugepages=0 00:04:46.401 16:13:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv )) 00:04:46.401 16:13:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages )) 00:04:46.401 16:13:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:46.401 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:46.401 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:46.401 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:46.401 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:46.401 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:46.401 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:46.401 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:46.401 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:46.401 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:46.401 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.401 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.401 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 104453952 kB' 'MemAvailable: 107705868 kB' 'Buffers: 2704 kB' 'Cached: 14337840 kB' 'SwapCached: 0 kB' 'Active: 11384096 kB' 'Inactive: 3514408 kB' 'Active(anon): 10972452 kB' 'Inactive(anon): 0 kB' 'Active(file): 411644 kB' 'Inactive(file): 3514408 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 
kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 561228 kB' 'Mapped: 195052 kB' 'Shmem: 10414492 kB' 'KReclaimable: 300392 kB' 'Slab: 1136024 kB' 'SReclaimable: 300392 kB' 'SUnreclaim: 835632 kB' 'KernelStack: 27072 kB' 'PageTables: 8344 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69985164 kB' 'Committed_AS: 12418540 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235332 kB' 'VmallocChunk: 0 kB' 'Percpu: 111168 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 4150644 kB' 'DirectMap2M: 29083648 kB' 'DirectMap1G: 102760448 kB' 00:04:46.401 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.401 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.401 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.401 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.401 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.401 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.401 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.401 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.401 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.401 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 
00:04:46.401 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.401 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.401 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.401 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.401 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.401 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.401 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.401 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.401 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.401 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.401 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.401 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.401 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.401 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.401 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.401 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.401 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.401 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.401 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.401 16:13:13 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.401 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.401 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.401 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.401 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.401 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.401 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.401 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.401 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.402 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.402 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.402 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.402 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.402 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.402 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.402 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.402 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.402 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.402 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.402 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ 
Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.402 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.402 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.402 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.402 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.402 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.402 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.402 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.402 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.402 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.402 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.402 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.402 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.402 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.402 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.402 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.402 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.402 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.402 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.402 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.402 16:13:13 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.402 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.402 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.402 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.402 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.402 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.402 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.402 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.402 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.402 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.402 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.402 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.402 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.402 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.402 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.402 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.402 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.402 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.402 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.402 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:04:46.402 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.402 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.402 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.402 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.402 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.402 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.402 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.402 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.402 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.402 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.402 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.402 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.402 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.402 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.402 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.402 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.402 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.402 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.402 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.402 16:13:13 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.402 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.402 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.402 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.402 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.402 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.402 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.402 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.402 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.402 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.402 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.402 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.402 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.402 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.402 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.402 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.402 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.402 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.402 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.402 16:13:13 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:04:46.402 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.402 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.402 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.402 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.402 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.402 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.402 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.402 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.402 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.402 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.402 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.402 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.402 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.402 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.402 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.402 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.402 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.402 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.402 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # 
continue 00:04:46.402 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.402 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.402 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.402 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.402 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.402 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.402 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.402 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.402 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.402 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.402 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.402 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.403 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.403 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.403 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.403 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.403 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.403 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.403 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.403 
16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.403 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.403 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.403 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.403 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.403 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.403 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.403 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.403 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.403 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.403 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.403 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.403 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.403 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.403 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.403 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.403 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.403 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.403 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.403 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # 
[[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.403 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.403 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.403 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.403 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.403 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.403 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.403 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.403 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.403 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536 00:04:46.403 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:46.403 16:13:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv )) 00:04:46.403 16:13:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:46.403 16:13:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:04:46.403 16:13:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:46.403 16:13:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:46.403 16:13:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:46.403 16:13:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:46.403 16:13:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:46.403 16:13:13 
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:46.403 16:13:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:46.403 16:13:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:46.403 16:13:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:46.403 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:46.403 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:04:46.403 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:46.403 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:46.403 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:46.403 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:46.403 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:46.403 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:46.666 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:46.666 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.666 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.666 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 50514944 kB' 'MemUsed: 15144064 kB' 'SwapCached: 0 kB' 'Active: 8231840 kB' 'Inactive: 3323324 kB' 'Active(anon): 8082728 kB' 'Inactive(anon): 0 kB' 'Active(file): 149112 kB' 'Inactive(file): 3323324 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 11318860 
kB' 'Mapped: 100524 kB' 'AnonPages: 239484 kB' 'Shmem: 7846424 kB' 'KernelStack: 12360 kB' 'PageTables: 4320 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 182912 kB' 'Slab: 676128 kB' 'SReclaimable: 182912 kB' 'SUnreclaim: 493216 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:46.666 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.666 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.666 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.666 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.666 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.666 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.666 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.666 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.666 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.666 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.666 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.666 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.666 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.666 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.666 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- 
# IFS=': ' 00:04:46.666 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.666 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.666 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.666 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.666 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.666 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.666 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.666 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.666 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.666 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.666 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.666 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.666 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.666 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.666 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.666 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.666 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.666 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.666 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.666 16:13:13 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.666 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.666 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.666 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.666 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.666 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.666 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.666 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.666 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.666 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.666 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.666 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.666 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.666 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.666 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.666 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.666 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.666 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.666 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.666 16:13:13 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:04:46.666 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.666 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.666 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.666 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.666 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.666 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.666 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.666 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.666 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.666 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.666 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.666 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.666 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.666 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.666 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.666 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.666 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.666 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.666 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.666 16:13:13 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.666 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.666 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.666 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.666 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.666 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.666 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.666 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.666 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.666 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.666 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.666 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.667 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.667 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.667 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.667 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.667 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.667 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.667 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.667 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.667 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.667 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.667 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.667 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.667 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.667 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.667 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.667 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.667 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.667 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.667 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.667 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.667 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.667 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.667 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.667 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.667 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.667 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.667 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.667 16:13:13 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.667 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.667 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.667 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.667 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.667 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.667 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.667 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.667 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.667 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.667 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.667 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.667 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.667 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.667 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.667 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.667 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.667 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.667 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.667 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:04:46.667 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.667 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.667 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.667 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.667 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.667 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.667 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.667 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.667 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.667 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.667 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.667 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.667 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.667 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:46.667 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:46.667 16:13:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:46.667 16:13:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:46.667 16:13:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:46.667 16:13:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # 
get_meminfo HugePages_Surp 1 00:04:46.667 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:46.667 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1 00:04:46.667 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:46.667 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:46.667 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:46.667 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:46.667 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:46.667 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:46.667 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:46.667 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.667 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.667 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60679840 kB' 'MemFree: 53938628 kB' 'MemUsed: 6741212 kB' 'SwapCached: 0 kB' 'Active: 3152176 kB' 'Inactive: 191084 kB' 'Active(anon): 2889644 kB' 'Inactive(anon): 0 kB' 'Active(file): 262532 kB' 'Inactive(file): 191084 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3021748 kB' 'Mapped: 94528 kB' 'AnonPages: 321624 kB' 'Shmem: 2568132 kB' 'KernelStack: 14728 kB' 'PageTables: 4076 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 117480 kB' 'Slab: 459896 kB' 'SReclaimable: 117480 kB' 'SUnreclaim: 342416 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 
'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:46.667 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.667 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.667 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.667 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.667 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.667 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.667 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.667 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.667 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.667 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.667 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.667 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.667 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.667 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.667 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.667 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.667 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.667 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.667 16:13:13 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:46.667 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.667 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.667 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.667 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.667 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.667 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.667 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.667 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.667 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.667 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.667 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.667 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.667 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.667 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.667 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.667 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.667 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.668 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.668 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 
00:04:46.668 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.668 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.668 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.668 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.668 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.668 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.668 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.668 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.668 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.668 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.668 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.668 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.668 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.668 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.668 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.668 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.668 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.668 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.668 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.668 16:13:13 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@32 -- # continue 00:04:46.668 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.668 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.668 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.668 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.668 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.668 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.668 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.668 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.668 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.668 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.668 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.668 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.668 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.668 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.668 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.668 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.668 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.668 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.668 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.668 
16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.668 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.668 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.668 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.668 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.668 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.668 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.668 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.668 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.668 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.668 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.668 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.668 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.668 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.668 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.668 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.668 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.668 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.668 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.668 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ 
KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.668 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.668 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.668 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.668 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.668 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.668 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.668 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.668 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.668 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.668 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.668 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.668 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.668 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.668 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.668 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.668 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.668 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.668 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.668 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.668 16:13:13 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.668 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.668 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.668 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.668 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.668 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.668 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.668 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.668 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.668 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.668 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.668 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.668 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.668 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.668 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.668 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.668 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.668 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.668 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.668 16:13:13 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:46.668 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.668 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.668 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.668 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.668 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.668 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.668 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.668 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.668 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.668 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:46.668 16:13:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:46.668 16:13:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:46.668 16:13:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:46.668 16:13:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:46.668 16:13:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:46.668 16:13:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:46.668 node0=512 expecting 512 00:04:46.668 16:13:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:46.668 16:13:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # 
sorted_t[nodes_test[node]]=1 00:04:46.668 16:13:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:46.668 16:13:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024' 00:04:46.668 node1=1024 expecting 1024 00:04:46.668 16:13:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] 00:04:46.668 00:04:46.668 real 0m3.548s 00:04:46.668 user 0m1.302s 00:04:46.668 sys 0m2.235s 00:04:46.669 16:13:13 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:04:46.669 16:13:13 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:46.669 ************************************ 00:04:46.669 END TEST custom_alloc 00:04:46.669 ************************************ 00:04:46.669 16:13:13 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:04:46.669 16:13:13 setup.sh.hugepages -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:04:46.669 16:13:13 setup.sh.hugepages -- common/autotest_common.sh@1106 -- # xtrace_disable 00:04:46.669 16:13:13 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:46.669 ************************************ 00:04:46.669 START TEST no_shrink_alloc 00:04:46.669 ************************************ 00:04:46.669 16:13:13 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1124 -- # no_shrink_alloc 00:04:46.669 16:13:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:04:46.669 16:13:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:04:46.669 16:13:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:46.669 16:13:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:04:46.669 16:13:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # 
node_ids=('0') 00:04:46.669 16:13:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:04:46.669 16:13:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:46.669 16:13:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:46.669 16:13:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:46.669 16:13:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:46.669 16:13:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:46.669 16:13:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:46.669 16:13:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:46.669 16:13:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:46.669 16:13:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:46.669 16:13:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:46.669 16:13:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:46.669 16:13:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:04:46.669 16:13:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 00:04:46.669 16:13:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output 00:04:46.669 16:13:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:46.669 16:13:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:49.967 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:04:49.967 0000:80:01.7 (8086 0b00): 
Already using the vfio-pci driver 00:04:49.967 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:04:49.967 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:04:49.967 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:04:49.967 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:04:49.967 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:04:49.967 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:04:49.967 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:04:49.967 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:04:49.967 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:04:49.967 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:04:49.967 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:04:49.967 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:04:49.967 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:04:49.967 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:04:49.967 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:04:49.967 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:04:49.967 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:04:49.967 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:49.967 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:49.967 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:49.967 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:49.967 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:49.967 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:50.233 16:13:16 
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:50.233 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:50.233 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:50.233 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:50.233 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:50.233 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:50.233 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:50.233 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:50.233 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:50.233 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:50.233 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.233 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.233 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 105503196 kB' 'MemAvailable: 108755112 kB' 'Buffers: 2704 kB' 'Cached: 14337996 kB' 'SwapCached: 0 kB' 'Active: 11385292 kB' 'Inactive: 3514408 kB' 'Active(anon): 10973648 kB' 'Inactive(anon): 0 kB' 'Active(file): 411644 kB' 'Inactive(file): 3514408 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 561756 kB' 'Mapped: 195160 kB' 'Shmem: 10414648 kB' 'KReclaimable: 300392 kB' 'Slab: 1136656 kB' 'SReclaimable: 300392 kB' 'SUnreclaim: 836264 kB' 'KernelStack: 27120 kB' 'PageTables: 8408 kB' 'SecPageTables: 0 kB' 
'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 12419068 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235412 kB' 'VmallocChunk: 0 kB' 'Percpu: 111168 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4150644 kB' 'DirectMap2M: 29083648 kB' 'DirectMap1G: 102760448 kB' 00:04:50.233 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.233 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.233 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.233 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.233 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.233 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.233 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.233 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.233 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.233 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.233 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.233 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.233 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
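The trace above is `setup/common.sh`'s `get_meminfo` scanning `/proc/meminfo` one line at a time under `IFS=': '`, hitting `continue` on every key until the requested one matches. A minimal standalone sketch of that pattern (illustrative only — not SPDK's actual `setup/common.sh`; the optional second argument is an assumption added here so the function can be pointed at any meminfo-style file):

```shell
# Sketch of the get_meminfo pattern seen in the trace: split each line of a
# meminfo-style file on ':' and ' ', and print the value for one key.
# Illustrative only -- not the real SPDK setup/common.sh implementation.
get_meminfo() {
    local get=$1 mem_f=${2:-/proc/meminfo} var val _
    while IFS=': ' read -r var val _; do
        # e.g. "HugePages_Total:    1024" -> var=HugePages_Total val=1024
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done < "$mem_f"
    echo 0   # key not present: report 0, as the trace's fallback does
}
```

The `read -r var val _` form mirrors the trace: the key lands in `var`, the number in `val`, and a trailing unit such as `kB` is swallowed by `_`.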
00:04:50.233 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.233 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.233 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.233 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.233 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.233 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.233 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.233 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.233 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.233 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.233 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.233 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.233 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.233 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.233 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.233 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.233 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.233 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.233 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.233 16:13:16 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.233 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.233 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.233 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.233 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.233 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.233 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.233 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.233 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.233 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.233 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.233 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.233 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.234 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.234 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.234 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.234 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.234 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.234 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.234 
16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.234 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.234 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.234 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.234 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.234 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.234 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.234 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.234 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.234 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.234 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.234 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.234 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.234 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.234 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.234 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.234 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.234 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.234 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.234 16:13:16 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.234 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.234 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.234 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.234 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.234 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.234 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.234 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.234 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.234 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.234 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.234 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.234 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.234 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.234 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.234 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.234 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.234 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.234 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.234 16:13:16 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.234 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.234 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.234 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.234 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.234 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.234 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.234 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.234 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.234 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.234 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.234 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.234 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.234 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.234 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.234 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.234 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.234 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.234 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.234 16:13:16 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.234 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.234 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.234 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.234 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.234 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.234 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.234 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.234 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.234 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.234 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.234 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.234 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.234 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.234 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.234 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.234 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.234 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.234 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.234 16:13:16 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:50.234 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.234 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.234 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.234 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.234 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.234 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.234 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.234 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.234 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.234 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.234 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.234 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.234 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.234 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.234 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.234 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.234 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.234 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.234 16:13:16 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.234 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.234 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.234 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.234 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.234 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.234 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.234 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.234 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.234 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.234 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.234 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.234 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.234 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.235 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.235 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:50.235 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:50.235 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:50.235 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:50.235 16:13:16 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:50.235 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:50.235 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:50.235 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:50.235 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:50.235 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:50.235 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:50.235 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:50.235 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:50.235 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.235 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 105503796 kB' 'MemAvailable: 108755712 kB' 'Buffers: 2704 kB' 'Cached: 14338008 kB' 'SwapCached: 0 kB' 'Active: 11384804 kB' 'Inactive: 3514408 kB' 'Active(anon): 10973160 kB' 'Inactive(anon): 0 kB' 'Active(file): 411644 kB' 'Inactive(file): 3514408 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 561828 kB' 'Mapped: 195068 kB' 'Shmem: 10414660 kB' 'KReclaimable: 300392 kB' 'Slab: 1136628 kB' 'SReclaimable: 300392 kB' 'SUnreclaim: 836236 kB' 'KernelStack: 27072 kB' 'PageTables: 8348 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 12419088 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235396 kB' 'VmallocChunk: 0 kB' 'Percpu: 111168 kB' 
'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4150644 kB' 'DirectMap2M: 29083648 kB' 'DirectMap1G: 102760448 kB' 00:04:50.235 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.235 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.235 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.235 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.235 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.235 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.235 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.235 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.235 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.235 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.235 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.235 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.235 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.235 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.235 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
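The earlier `node0=512 expecting 512` / `node1=1024 expecting 1024` messages come from `hugepages.sh` comparing each NUMA node's allocated hugepages against the test's expectations. A hedged sketch of that per-node check via sysfs (the `expected` counts and the parameterized base path are assumptions for illustration, not SPDK's actual `verify_nr_hugepages`):

```shell
# Sketch of the per-node check behind the "nodeN=X expecting Y" output:
# read each NUMA node's 2 MiB hugepage count from sysfs and report it
# next to the expected value. Illustrative assumptions throughout.
declare -A expected=([0]=512 [1]=1024)
verify_nodes() {
    local base=${1:-/sys/devices/system/node} node path got
    for node in "${!expected[@]}"; do
        path=$base/node$node/hugepages/hugepages-2048kB/nr_hugepages
        [[ -r $path ]] || { echo "node$node: sysfs entry missing"; continue; }
        got=$(<"$path")
        echo "node$node=$got expecting ${expected[$node]}"
    done
}
```

On a real system the base path defaults to `/sys/devices/system/node`; passing a different base lets the same loop run against a fake sysfs tree for testing.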
00:04:50.235 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.235 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.235 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.235 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.235 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.235 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.235 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.235 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.235 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.235 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.235 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.235 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.235 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.235 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.235 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.235 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.235 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.235 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.235 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.235 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.235 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.235 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.235 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.235 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.235 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.235 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.235 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.235 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.235 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.235 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.235 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.235 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.235 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.235 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.235 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.235 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.235 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.235 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:04:50.235 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.235 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.235 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.235 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.235 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.235 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.235 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.235 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.235 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.235 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.235 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.235 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.235 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.235 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.235 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.235 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.235 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.235 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.235 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:50.235 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.235 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.235 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.235 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.235 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.235 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.235 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.235 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.235 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.235 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.235 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.235 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.235 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.235 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.235 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.235 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.235 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.235 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.236 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
continue 00:04:50.236 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.236 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.236 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.236 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.236 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.236 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.236 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.236 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.236 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.236 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.236 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.236 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.236 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.236 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.236 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.236 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.236 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.236 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.236 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.236 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.236 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.236 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.236 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.236 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.236 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.236 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.236 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.236 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.236 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.236 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.236 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.236 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.236 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.236 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.236 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.236 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.236 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.236 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:04:50.236 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.236 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.236 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.236 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.236 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.236 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.236 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.236 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.236 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.236 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.236 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.236 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.236 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.236 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.236 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.236 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.236 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.236 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.236 16:13:16 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:50.236 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.236 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.236 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.236 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.236 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.236 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.236 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.236 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.236 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.236 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.236 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.236 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.236 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.236 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.236 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.236 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.236 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.236 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.236 16:13:16 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.236 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.236 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.236 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.236 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.236 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.236 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.236 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.236 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.236 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.236 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.236 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.236 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.236 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.236 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.236 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.236 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.236 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.236 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.236 16:13:16 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.236 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.236 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.236 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.236 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.236 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.236 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.236 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.236 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.236 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.236 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.236 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.236 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.236 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.236 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.236 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.236 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.236 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.236 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:50.236 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.237 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.237 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:50.237 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:50.237 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:50.237 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:50.237 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:50.237 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:50.237 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:50.237 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:50.237 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:50.237 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:50.237 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:50.237 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:50.237 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:50.237 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.237 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.237 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 105504048 kB' 'MemAvailable: 108755964 kB' 'Buffers: 2704 kB' 'Cached: 14338036 kB' 
'SwapCached: 0 kB' 'Active: 11384904 kB' 'Inactive: 3514408 kB' 'Active(anon): 10973260 kB' 'Inactive(anon): 0 kB' 'Active(file): 411644 kB' 'Inactive(file): 3514408 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 561920 kB' 'Mapped: 195068 kB' 'Shmem: 10414688 kB' 'KReclaimable: 300392 kB' 'Slab: 1136628 kB' 'SReclaimable: 300392 kB' 'SUnreclaim: 836236 kB' 'KernelStack: 27072 kB' 'PageTables: 8396 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 12439584 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235412 kB' 'VmallocChunk: 0 kB' 'Percpu: 111168 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4150644 kB' 'DirectMap2M: 29083648 kB' 'DirectMap1G: 102760448 kB' 00:04:50.237 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.237 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.237 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.237 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.237 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.237 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.237 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.237 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:50.237 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.237 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.237 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.237 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.237 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.237 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.237 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.237 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.237 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.237 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.237 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.237 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.237 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.237 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.237 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.237 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.237 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.237 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.237 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:50.237 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.237 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.237 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.237 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.237 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.237 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.237 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.237 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.237 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.237 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.237 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.237 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.237 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.237 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.237 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.237 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.237 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.237 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.237 16:13:16 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:50.237 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.237 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.237 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.237 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.237 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.237 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.237 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.237 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.237 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.237 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.237 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.237 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.237 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.237 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.237 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.237 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.237 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.237 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.237 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap 
== \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.237 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.237 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.237 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.237 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.237 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.237 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.237 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.237 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.237 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.237 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.237 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.237 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.237 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.237 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.237 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.237 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.237 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.237 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.237 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:50.237 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.237 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.237 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.238 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.238 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.238 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.238 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.238 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.238 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.238 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.238 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.238 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.238 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.238 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.238 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.238 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.238 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.238 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.238 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:50.238 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.238 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.238 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.238 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.238 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.238 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.238 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.238 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.238 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.238 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.238 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.238 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.238 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.238 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.238 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.238 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.238 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.238 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.238 16:13:16 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:50.238 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.238 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.238 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.238 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.238 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.238 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.238 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.238 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.238 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.238 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.238 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.238 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.238 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.238 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.238 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.238 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.238 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.238 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.238 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.238 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.238 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.238 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.238 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.238 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.238 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.238 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.238 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.238 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.238 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.238 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.238 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.238 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.238 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.238 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.238 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.238 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.238 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.238 16:13:16 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:50.238 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.238 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.238 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.238 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.238 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.238 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.238 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.238 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.238 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.238 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.238 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.238 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.238 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.238 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.238 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.238 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.238 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.238 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.238 16:13:16 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.238 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.238 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.239 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.239 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.239 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.239 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.239 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.239 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.239 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.239 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.239 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.239 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.239 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.239 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.239 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.239 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.239 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.239 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:04:50.239 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.239 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.239 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.239 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.239 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:50.239 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:50.239 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:50.239 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:50.239 nr_hugepages=1024 00:04:50.239 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:50.239 resv_hugepages=0 00:04:50.239 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:50.239 surplus_hugepages=0 00:04:50.239 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:50.239 anon_hugepages=0 00:04:50.239 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:50.239 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:50.239 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:50.239 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:50.239 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:50.239 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:50.239 16:13:16 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@20 -- # local mem_f mem 00:04:50.239 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:50.239 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:50.239 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:50.239 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:50.239 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:50.239 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.239 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.239 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 105504088 kB' 'MemAvailable: 108756004 kB' 'Buffers: 2704 kB' 'Cached: 14338076 kB' 'SwapCached: 0 kB' 'Active: 11384680 kB' 'Inactive: 3514408 kB' 'Active(anon): 10973036 kB' 'Inactive(anon): 0 kB' 'Active(file): 411644 kB' 'Inactive(file): 3514408 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 561640 kB' 'Mapped: 195068 kB' 'Shmem: 10414728 kB' 'KReclaimable: 300392 kB' 'Slab: 1136628 kB' 'SReclaimable: 300392 kB' 'SUnreclaim: 836236 kB' 'KernelStack: 27072 kB' 'PageTables: 8344 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 12419636 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235412 kB' 'VmallocChunk: 0 kB' 'Percpu: 111168 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 
'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4150644 kB' 'DirectMap2M: 29083648 kB' 'DirectMap1G: 102760448 kB' 00:04:50.239 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.239 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.239 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.239 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.239 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.239 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.239 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.239 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.239 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.239 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.239 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.239 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.239 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.239 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.239 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.239 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.239 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.239 
16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.239 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.239 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.239 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.239 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.239 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.239 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.239 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.239 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.239 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.239 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.239 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.239 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.239 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.239 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.239 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.239 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.239 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.239 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.239 16:13:16 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.239 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.239 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.239 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.239 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.239 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.239 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.239 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.239 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.239 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.239 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.239 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.239 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.239 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.239 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.239 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.239 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.239 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.239 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:50.239 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.239 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.239 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.239 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.239 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.239 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.240 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.240 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.240 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.240 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.240 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.240 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.240 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.240 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.240 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.240 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.240 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.240 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.240 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
continue 00:04:50.240 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.240 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.240 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.240 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.240 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.240 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.240 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.240 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.240 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.240 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.240 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.240 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.240 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.240 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.240 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.240 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.240 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.240 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.240 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.240 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.240 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.240 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.240 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.240 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.240 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.240 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.240 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.240 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.240 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.240 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.240 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.240 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.240 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.240 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.240 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.240 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.240 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.240 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:04:50.240 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.240 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.240 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.240 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.240 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.240 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.240 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.240 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.240 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.240 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.240 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.240 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.240 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.240 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.240 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.240 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.240 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.240 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.240 16:13:16 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:50.240 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.240 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.240 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.240 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.240 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.240 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.240 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.240 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.240 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.240 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.240 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.240 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.240 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.240 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.240 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.240 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.240 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.240 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.240 16:13:16 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.240 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.240 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.240 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.240 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.240 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.240 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.240 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.240 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.240 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.240 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.240 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.240 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.240 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.240 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.240 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.240 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.240 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.240 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.240 16:13:16 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.240 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.240 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.240 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.240 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.240 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.240 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.240 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.241 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.241 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.241 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.241 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.241 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.241 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.241 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.241 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.241 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.241 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.241 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
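The long runs of `IFS=': ' read -r var val _` followed by `[[ $var == ... ]]` / `continue` in this trace are `setup/common.sh` scanning /proc/meminfo one key at a time until it reaches the requested counter, then echoing its value (`echo 0` / `return 0` for `HugePages_Rsvd` above). A minimal stand-alone reconstruction of that lookup; the function name and shape mirror what the trace implies, but the real script may differ:

```shell
# Hypothetical stand-alone version of the lookup the trace is exercising:
# split each "Key:   value kB" line on ':' and spaces, skip non-matching
# keys (the "continue" lines in the trace), echo the value on a match.
get_meminfo() {
    local get=$1 file=${2:-/proc/meminfo}
    local var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue
        echo "$val"
        return 0
    done < "$file"
    return 1
}

# Self-contained demo input echoing the values seen in the log above,
# so the sketch does not depend on the host's real /proc/meminfo.
sample=$(mktemp)
printf '%s\n' 'HugePages_Total:    1024' \
              'HugePages_Free:     1024' \
              'HugePages_Rsvd:        0' > "$sample"

get_meminfo HugePages_Rsvd "$sample"    # prints 0
```

With `IFS=': '`, `read` treats the colon as a single delimiter and collapses the surrounding spaces, so `var` gets the key and `val` the first numeric field, which is why the trace never needs to strip the `kB` suffix before comparing counts.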
00:04:50.241 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.241 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.241 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.241 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.241 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.241 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.241 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:04:50.241 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:50.241 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:50.241 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:50.241 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:04:50.241 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:50.241 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:50.241 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:50.241 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:50.241 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:50.241 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:50.241 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in 
"${!nodes_test[@]}" 00:04:50.241 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:50.241 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:50.241 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:50.241 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:04:50.241 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:50.241 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:50.241 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:50.241 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:50.241 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:50.241 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:50.241 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:50.241 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.241 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.241 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 49468284 kB' 'MemUsed: 16190724 kB' 'SwapCached: 0 kB' 'Active: 8234432 kB' 'Inactive: 3323324 kB' 'Active(anon): 8085320 kB' 'Inactive(anon): 0 kB' 'Active(file): 149112 kB' 'Inactive(file): 3323324 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 11318948 kB' 'Mapped: 100540 kB' 'AnonPages: 242076 kB' 'Shmem: 7846512 kB' 'KernelStack: 12360 kB' 'PageTables: 4372 kB' 'SecPageTables: 0 kB' 
'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 182912 kB' 'Slab: 676624 kB' 'SReclaimable: 182912 kB' 'SUnreclaim: 493712 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:50.241 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.241 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.241 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.241 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.241 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.241 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.241 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.241 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.241 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.241 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.241 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.241 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.241 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.241 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.241 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.241 16:13:16 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:50.241 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.241 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.241 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.241 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.241 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.241 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.241 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.241 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.241 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.241 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.241 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.241 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.241 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.241 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.241 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.241 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.241 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.241 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.241 16:13:16 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.241 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.241 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.241 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.241 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.241 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.241 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.241 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.241 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.241 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.241 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.241 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.241 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.241 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.241 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.241 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.241 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.241 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.241 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.241 
16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.241 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.241 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.241 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.241 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.241 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.241 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.241 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.241 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.241 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.241 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.241 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.241 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.241 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.241 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.241 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.242 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.242 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.242 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.242 16:13:16 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.242 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.242 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.242 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.242 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.242 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.242 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.242 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.242 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.242 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.242 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.242 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.242 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.242 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.242 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.242 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.242 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.242 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.242 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.242 
16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.242 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.242 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.242 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.242 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.242 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.242 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.242 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.242 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.242 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.242 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.242 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.242 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.242 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.242 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.242 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.242 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.242 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.242 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
00:04:50.242 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.242 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.242 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.242 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.242 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.242 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.242 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.242 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.242 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.242 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.242 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.242 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.242 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.242 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.242 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.242 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.242 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.242 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.242 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.242 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.242 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.242 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.242 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.242 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.242 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.242 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.242 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.242 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.242 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.242 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.242 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.242 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.242 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.242 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.242 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.242 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:50.242 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:50.242 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- 
# (( nodes_test[node] += 0 )) 00:04:50.242 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:50.242 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:50.242 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:50.242 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:50.242 node0=1024 expecting 1024 00:04:50.242 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:50.242 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:04:50.242 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512 00:04:50.242 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output 00:04:50.242 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:50.242 16:13:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:53.549 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:04:53.549 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:04:53.549 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:04:53.549 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:04:53.549 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:04:53.549 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:04:53.549 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:04:53.549 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:04:53.549 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:04:53.549 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:04:53.549 0000:00:01.7 (8086 0b00): Already using the 
vfio-pci driver 00:04:53.549 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:04:53.549 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:04:53.549 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:04:53.549 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:04:53.549 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:04:53.549 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:04:53.825 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:04:53.825 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:04:53.825 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:04:53.825 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:53.825 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:53.825 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:53.825 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:53.825 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:53.825 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:53.825 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:53.825 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:53.825 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:53.825 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:53.825 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:53.825 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 
00:04:53.825 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:53.825 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:53.825 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:53.825 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:53.825 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.825 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.826 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 105485324 kB' 'MemAvailable: 108737240 kB' 'Buffers: 2704 kB' 'Cached: 14338168 kB' 'SwapCached: 0 kB' 'Active: 11387080 kB' 'Inactive: 3514408 kB' 'Active(anon): 10975436 kB' 'Inactive(anon): 0 kB' 'Active(file): 411644 kB' 'Inactive(file): 3514408 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 563616 kB' 'Mapped: 195160 kB' 'Shmem: 10414820 kB' 'KReclaimable: 300392 kB' 'Slab: 1136644 kB' 'SReclaimable: 300392 kB' 'SUnreclaim: 836252 kB' 'KernelStack: 27120 kB' 'PageTables: 8508 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 12420572 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235284 kB' 'VmallocChunk: 0 kB' 'Percpu: 111168 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4150644 kB' 'DirectMap2M: 29083648 kB' 
'DirectMap1G: 102760448 kB' 00:04:53.826 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.826 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:53.826 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.826 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.826 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.826 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:53.826 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.826 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.826 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.826 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:53.826 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.826 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.826 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.826 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:53.826 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.826 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.826 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.826 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:53.826 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- 
# IFS=': ' 00:04:53.826 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.826 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.826 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:53.826 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.826 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.826 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.826 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:53.826 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.826 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.826 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.826 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:53.826 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.826 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.826 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.826 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:53.826 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.826 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.826 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.826 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 
-- # continue 00:04:53.826 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.826 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.826 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.826 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:53.826 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.826 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.826 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.826 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:53.826 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.826 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.826 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.826 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:53.826 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.826 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.826 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.826 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:53.826 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.826 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.826 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.826 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:53.826 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.826 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.826 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.826 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:53.826 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.826 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.826 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.826 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:53.826 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.826 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.826 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.826 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:53.826 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.826 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.826 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.826 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:53.826 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.826 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.826 
16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.826 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:53.826 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.826 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.826 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.826 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:53.826 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.826 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.826 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.826 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:53.826 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.826 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.826 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.826 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:53.826 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.826 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.826 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.826 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:53.826 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.826 16:13:20 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.826 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.826 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:53.826 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.826 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.826 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.826 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:53.826 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.826 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.826 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.826 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:53.826 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.826 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.826 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.826 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:53.826 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.827 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.827 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.827 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:53.827 16:13:20 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.827 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.827 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.827 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:53.827 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.827 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.827 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.827 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:53.827 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.827 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.827 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.827 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:53.827 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.827 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.827 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.827 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:53.827 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.827 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.827 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.827 
16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:53.827 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.827 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.827 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.827 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:53.827 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.827 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.827 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.827 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:53.827 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.827 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.827 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.827 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:53.827 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.827 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.827 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.827 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:53.827 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.827 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.827 16:13:20 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.827 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:53.827 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.827 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.827 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.827 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:53.827 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.827 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.827 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.827 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:53.827 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:53.827 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:53.827 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:53.827 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:53.827 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:53.827 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:53.827 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:53.827 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:53.827 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:53.827 
16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:53.827 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:53.827 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:53.827 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.827 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.827 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 105486092 kB' 'MemAvailable: 108738008 kB' 'Buffers: 2704 kB' 'Cached: 14338172 kB' 'SwapCached: 0 kB' 'Active: 11386736 kB' 'Inactive: 3514408 kB' 'Active(anon): 10975092 kB' 'Inactive(anon): 0 kB' 'Active(file): 411644 kB' 'Inactive(file): 3514408 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 563796 kB' 'Mapped: 195080 kB' 'Shmem: 10414824 kB' 'KReclaimable: 300392 kB' 'Slab: 1136652 kB' 'SReclaimable: 300392 kB' 'SUnreclaim: 836260 kB' 'KernelStack: 27104 kB' 'PageTables: 8444 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 12420588 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235268 kB' 'VmallocChunk: 0 kB' 'Percpu: 111168 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4150644 kB' 'DirectMap2M: 29083648 kB' 'DirectMap1G: 102760448 kB' 00:04:53.827 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.827 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:53.827 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.827 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.827 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.827 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:53.827 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.827 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.827 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.827 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:53.827 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.827 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.827 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.827 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:53.827 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.827 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.827 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.827 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:53.827 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.827 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:53.827 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.827 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:53.827 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.827 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.827 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.827 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:53.827 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.827 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.827 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.827 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:53.827 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.827 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.827 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.827 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:53.827 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.827 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.827 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.827 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:53.827 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:53.827 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.827 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.828 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:53.828 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.828 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.828 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.828 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:53.828 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.828 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.828 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.828 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:53.828 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.828 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.828 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.828 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:53.828 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.828 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.828 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.828 16:13:20 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:53.828 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.828 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.828 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.828 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:53.828 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.828 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.828 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.828 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:53.828 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.828 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.828 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.828 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:53.828 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.828 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.828 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.828 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:53.828 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.828 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.828 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.828 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:53.828 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.828 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.828 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.828 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:53.828 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.828 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.828 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.828 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:53.828 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.828 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.828 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.828 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:53.828 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.828 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.828 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.828 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:53.828 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.828 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:53.828 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.828 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:53.828 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.828 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.828 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.828 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:53.828 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.828 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.828 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.828 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:53.828 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.828 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.828 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.828 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:53.828 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.828 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.828 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.828 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:53.828 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:53.828 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.828 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.828 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:53.828 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.828 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.828 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.828 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:53.828 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.828 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.828 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.828 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:53.828 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.828 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.828 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.828 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:53.828 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.828 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.828 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.828 16:13:20 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:53.828 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.828 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.828 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.828 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:53.828 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.828 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.828 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.828 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:53.828 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.828 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.828 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.828 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:53.828 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.828 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.828 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.828 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:53.828 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.828 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.828 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 
-- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.828 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:53.828 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.828 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.828 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.828 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:53.828 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.828 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.828 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.828 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:53.829 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.829 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.829 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.829 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:53.829 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.829 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.829 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.829 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:53.829 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.829 16:13:20 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:53.829 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.829 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:53.829 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.829 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.829 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.829 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:53.829 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.829 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.829 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.829 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:53.829 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.829 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.829 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.829 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:53.829 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.829 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.829 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.829 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:53.829 16:13:20 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.829 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.829 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.829 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:53.829 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.829 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.829 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.829 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:53.829 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.829 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.829 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.829 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:53.829 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.829 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.829 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.829 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:53.829 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:53.829 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:53.829 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:53.829 
16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:53.829 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:53.829 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:53.829 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:53.829 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:53.829 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:53.829 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:53.829 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:53.829 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:53.829 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.829 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.829 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 105486784 kB' 'MemAvailable: 108738700 kB' 'Buffers: 2704 kB' 'Cached: 14338176 kB' 'SwapCached: 0 kB' 'Active: 11386396 kB' 'Inactive: 3514408 kB' 'Active(anon): 10974752 kB' 'Inactive(anon): 0 kB' 'Active(file): 411644 kB' 'Inactive(file): 3514408 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 563452 kB' 'Mapped: 195080 kB' 'Shmem: 10414828 kB' 'KReclaimable: 300392 kB' 'Slab: 1136652 kB' 'SReclaimable: 300392 kB' 'SUnreclaim: 836260 kB' 'KernelStack: 27088 kB' 'PageTables: 8392 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 
12420612 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235268 kB' 'VmallocChunk: 0 kB' 'Percpu: 111168 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4150644 kB' 'DirectMap2M: 29083648 kB' 'DirectMap1G: 102760448 kB' 00:04:53.829 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.829 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:53.829 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.829 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.829 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.829 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:53.829 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.829 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.829 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.829 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:53.829 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.829 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.829 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.829 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
00:04:53.829 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.829 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.829 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.829 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:53.829 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.829 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.829 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.829 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:53.829 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.829 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.829 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.829 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:53.829 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.829 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.829 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.829 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:53.829 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.829 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.829 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.829 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:53.829 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.829 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.829 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.829 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:53.829 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.829 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.829 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.829 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:53.829 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.829 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.829 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.829 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:53.829 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.829 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.829 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.830 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:53.830 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.830 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:04:53.830 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.830 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:53.830 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.830 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.830 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.830 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:53.830 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.830 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.830 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.830 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:53.830 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.830 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.830 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.830 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:53.830 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.830 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.830 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.830 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:53.830 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:53.830 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.830 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.830 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:53.830 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.830 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.830 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.830 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:53.830 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.830 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.830 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.830 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:53.830 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.830 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.830 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.830 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:53.830 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.830 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.830 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.830 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
continue 00:04:53.830 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.830 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.830 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.830 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:53.830 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.830 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.830 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.830 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:53.830 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.830 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.830 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.830 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:53.830 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.830 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.830 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.830 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:53.830 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.830 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.830 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.830 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:53.830 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.830 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.830 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.830 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:53.830 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.830 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.830 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.830 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:53.830 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.830 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.830 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.830 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:53.830 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.830 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.830 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.830 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:53.830 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.830 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:04:53.830 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.830 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:53.830 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.830 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.830 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.830 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:53.830 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.830 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.830 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.830 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:53.830 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.830 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.830 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.830 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:53.830 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.830 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.830 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.830 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:53.830 16:13:20 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:53.831 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.831 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.831 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:53.831 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.831 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.831 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.831 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:53.831 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.831 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.831 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.831 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:53.831 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.831 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.831 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.831 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:53.831 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.831 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.831 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.831 16:13:20 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:53.831 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.831 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.831 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.831 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:53.831 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.831 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.831 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.831 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:53.831 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.831 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.831 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.831 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:53.831 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.831 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.831 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.831 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:53.831 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.831 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.831 16:13:20 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.831 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:53.831 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.831 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.831 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.831 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:53.831 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.831 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.831 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.831 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:53.831 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.831 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.831 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.831 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:53.831 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.831 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.831 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.831 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:53.831 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 
00:04:53.831 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:53.831 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:53.831 nr_hugepages=1024 00:04:53.831 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:53.831 resv_hugepages=0 00:04:53.831 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:53.831 surplus_hugepages=0 00:04:53.831 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:53.831 anon_hugepages=0 00:04:53.831 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:53.831 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:53.831 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:53.831 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:53.831 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:53.831 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:53.831 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:53.831 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:53.831 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:53.831 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:53.831 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:53.831 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:53.831 16:13:20 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.831 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.831 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 105487140 kB' 'MemAvailable: 108739056 kB' 'Buffers: 2704 kB' 'Cached: 14338212 kB' 'SwapCached: 0 kB' 'Active: 11386776 kB' 'Inactive: 3514408 kB' 'Active(anon): 10975132 kB' 'Inactive(anon): 0 kB' 'Active(file): 411644 kB' 'Inactive(file): 3514408 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 563792 kB' 'Mapped: 195080 kB' 'Shmem: 10414864 kB' 'KReclaimable: 300392 kB' 'Slab: 1136652 kB' 'SReclaimable: 300392 kB' 'SUnreclaim: 836260 kB' 'KernelStack: 27104 kB' 'PageTables: 8444 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 12420632 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235268 kB' 'VmallocChunk: 0 kB' 'Percpu: 111168 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4150644 kB' 'DirectMap2M: 29083648 kB' 'DirectMap1G: 102760448 kB' 00:04:53.831 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.831 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:53.831 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.831 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.831 16:13:20 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.831 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:53.831 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.831 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.831 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.831 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:53.831 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.831 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.831 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.831 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:53.831 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.831 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.831 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.831 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:53.831 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.831 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.831 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.831 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:53.831 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.831 
16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.831 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.831 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:53.831 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.831 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.831 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.832 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:53.832 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.832 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.832 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.832 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:53.832 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.832 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.832 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.832 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:53.832 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.832 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.832 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.832 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- 
# continue 00:04:53.832 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.832 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.832 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.832 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:53.832 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.832 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.832 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.832 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:53.832 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.832 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.832 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.832 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:53.832 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.832 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.832 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.832 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:53.832 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.832 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.832 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.832 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:53.832 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.832 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.832 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.832 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:53.832 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.832 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.832 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.832 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:53.832 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.832 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.832 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.832 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:53.832 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.832 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.832 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.832 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:53.832 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.832 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:04:53.832 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.832 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:53.832 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.832 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.832 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.832 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:53.832 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.832 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.832 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.832 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:53.832 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.832 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.832 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.832 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:53.832 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.832 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.832 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.832 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:53.832 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': 
' 00:04:53.832 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.832 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.832 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:53.832 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.832 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.832 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.832 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:53.832 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.832 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.832 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.832 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:53.832 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.832 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.832 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.832 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:53.832 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.832 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.832 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.832 16:13:20 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:53.832 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.832 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.832 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.832 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:53.832 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.832 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.832 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.832 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:53.832 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.832 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.832 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.832 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:53.832 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.832 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.832 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.832 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:53.832 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.832 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.832 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 
-- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.832 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:53.832 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.832 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.832 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.832 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:53.832 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.832 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.832 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.832 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:53.832 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.832 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.832 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.832 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:53.832 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.833 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.833 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.833 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:53.833 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.833 16:13:20 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:53.833 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.833 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:53.833 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.833 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.833 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.833 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:53.833 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.833 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.096 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.096 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.096 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.096 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.096 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.096 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.096 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.096 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.096 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.096 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.096 
16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.096 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.096 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.096 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.096 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.096 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.096 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.096 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.096 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.096 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.096 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.096 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.096 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.096 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.096 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.096 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.096 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.096 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.096 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.096 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:04:54.096 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:54.096 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:54.096 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:54.096 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:04:54.096 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:54.096 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:54.096 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:54.096 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:54.096 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:54.096 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:54.096 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:54.096 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:54.096 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:54.096 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:54.096 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:04:54.096 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:54.096 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- 
# local mem_f mem 00:04:54.096 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:54.096 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:54.096 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:54.096 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:54.096 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:54.096 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.096 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.097 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 49450652 kB' 'MemUsed: 16208356 kB' 'SwapCached: 0 kB' 'Active: 8235656 kB' 'Inactive: 3323324 kB' 'Active(anon): 8086544 kB' 'Inactive(anon): 0 kB' 'Active(file): 149112 kB' 'Inactive(file): 3323324 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 11319000 kB' 'Mapped: 100552 kB' 'AnonPages: 243324 kB' 'Shmem: 7846564 kB' 'KernelStack: 12376 kB' 'PageTables: 4404 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 182912 kB' 'Slab: 676600 kB' 'SReclaimable: 182912 kB' 'SUnreclaim: 493688 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:54.097 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.097 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.097 16:13:20 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:54.097 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.097 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.097 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.097 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.097 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.097 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.097 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.097 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.097 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.097 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.097 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.097 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.097 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.097 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.097 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.097 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.097 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.097 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.097 16:13:20 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:54.097 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.097 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.097 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.097 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.097 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.097 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.097 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.097 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.097 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.097 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.097 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.097 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.097 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.097 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.097 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.097 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.097 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.097 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.097 16:13:20 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.097 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.097 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.097 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.097 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.097 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.097 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.097 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.097 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.097 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.097 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.097 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.097 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.097 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.097 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.097 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.097 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.097 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.097 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.097 16:13:20 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:54.097 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.097 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.097 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.097 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.097 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.097 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.097 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.097 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.097 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.097 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.097 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.097 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.097 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.097 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.097 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.097 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.097 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.097 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.097 16:13:20 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:04:54.097 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.097 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.097 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.097 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.097 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.097 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.097 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.097 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.097 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.097 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.097 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.097 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.097 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.097 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.097 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.097 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.097 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.097 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.097 16:13:20 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.097 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.097 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.097 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.097 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.097 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.097 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.097 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.097 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.097 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.097 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.097 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.097 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.097 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.097 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.097 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.097 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.097 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.097 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.097 16:13:20 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.097 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.097 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.097 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.098 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.098 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.098 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.098 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.098 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.098 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.098 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.098 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.098 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.098 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.098 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.098 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.098 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.098 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.098 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:54.098 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.098 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.098 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.098 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.098 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.098 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.098 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.098 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.098 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.098 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.098 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:54.098 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:54.098 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:54.098 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:54.098 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:54.098 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:54.098 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:54.098 node0=1024 expecting 1024 00:04:54.098 16:13:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 
== \1\0\2\4 ]] 00:04:54.098 00:04:54.098 real 0m7.340s 00:04:54.098 user 0m2.870s 00:04:54.098 sys 0m4.563s 00:04:54.098 16:13:20 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:04:54.098 16:13:20 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:54.098 ************************************ 00:04:54.098 END TEST no_shrink_alloc 00:04:54.098 ************************************ 00:04:54.098 16:13:20 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:04:54.098 16:13:20 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:04:54.098 16:13:20 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:54.098 16:13:20 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:54.098 16:13:20 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:54.098 16:13:20 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:54.098 16:13:20 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:54.098 16:13:20 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:54.098 16:13:20 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:54.098 16:13:20 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:54.098 16:13:20 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:54.098 16:13:20 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:54.098 16:13:20 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:54.098 16:13:20 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:54.098 00:04:54.098 real 0m26.275s 00:04:54.098 user 0m10.087s 00:04:54.098 sys 0m16.315s 00:04:54.098 16:13:20 
setup.sh.hugepages -- common/autotest_common.sh@1125 -- # xtrace_disable 00:04:54.098 16:13:20 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:54.098 ************************************ 00:04:54.098 END TEST hugepages 00:04:54.098 ************************************ 00:04:54.098 16:13:20 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:04:54.098 16:13:20 setup.sh -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:04:54.098 16:13:20 setup.sh -- common/autotest_common.sh@1106 -- # xtrace_disable 00:04:54.098 16:13:20 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:54.098 ************************************ 00:04:54.098 START TEST driver 00:04:54.098 ************************************ 00:04:54.098 16:13:20 setup.sh.driver -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:04:54.098 * Looking for test storage... 
00:04:54.098 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:54.098 16:13:20 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:04:54.098 16:13:20 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:54.098 16:13:20 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:59.387 16:13:25 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:04:59.387 16:13:25 setup.sh.driver -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:04:59.387 16:13:25 setup.sh.driver -- common/autotest_common.sh@1106 -- # xtrace_disable 00:04:59.387 16:13:25 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:59.387 ************************************ 00:04:59.387 START TEST guess_driver 00:04:59.387 ************************************ 00:04:59.387 16:13:25 setup.sh.driver.guess_driver -- common/autotest_common.sh@1124 -- # guess_driver 00:04:59.387 16:13:25 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:04:59.387 16:13:25 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:04:59.387 16:13:25 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:04:59.387 16:13:25 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:04:59.387 16:13:25 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:04:59.387 16:13:25 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:04:59.387 16:13:25 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:04:59.387 16:13:25 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N 00:04:59.387 16:13:25 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:04:59.387 16:13:25 setup.sh.driver.guess_driver -- setup/driver.sh@29 
-- # (( 314 > 0 )) 00:04:59.387 16:13:25 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci 00:04:59.387 16:13:25 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci 00:04:59.387 16:13:25 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci 00:04:59.387 16:13:25 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:04:59.387 16:13:25 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:04:59.387 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:04:59.387 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:04:59.387 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:04:59.387 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:04:59.387 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:04:59.387 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:04:59.387 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:04:59.387 16:13:25 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0 00:04:59.387 16:13:25 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci 00:04:59.387 16:13:25 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci 00:04:59.387 16:13:25 setup.sh.driver.guess_driver -- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:04:59.387 16:13:25 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:04:59.387 Looking for driver=vfio-pci 00:04:59.387 16:13:25 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:59.387 16:13:25 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup 
output config 00:04:59.387 16:13:25 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:04:59.387 16:13:25 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:02.708 16:13:29 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:02.708 16:13:29 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:02.708 16:13:29 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:02.708 16:13:29 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:02.708 16:13:29 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:02.708 16:13:29 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:02.708 16:13:29 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:02.708 16:13:29 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:02.708 16:13:29 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:02.708 16:13:29 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:02.708 16:13:29 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:02.708 16:13:29 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:02.708 16:13:29 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:02.708 16:13:29 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:02.708 16:13:29 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:02.708 16:13:29 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:02.708 16:13:29 setup.sh.driver.guess_driver -- 
setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:02.708 16:13:29 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:02.708 16:13:29 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:02.708 16:13:29 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:02.708 16:13:29 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:02.708 16:13:29 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:02.708 16:13:29 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:02.708 16:13:29 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:02.708 16:13:29 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:02.708 16:13:29 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:02.708 16:13:29 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:02.708 16:13:29 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:02.708 16:13:29 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:02.708 16:13:29 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:02.708 16:13:29 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:02.708 16:13:29 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:02.708 16:13:29 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:02.708 16:13:29 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:02.708 16:13:29 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:02.708 16:13:29 setup.sh.driver.guess_driver -- 
setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:02.708 16:13:29 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:02.708 16:13:29 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:02.708 16:13:29 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:02.708 16:13:29 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:02.708 16:13:29 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:02.708 16:13:29 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:02.708 16:13:29 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:02.708 16:13:29 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:02.708 16:13:29 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:02.708 16:13:29 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:02.708 16:13:29 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:02.708 16:13:29 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:02.708 16:13:29 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:02.708 16:13:29 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:02.708 16:13:29 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:02.974 16:13:29 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:05:02.974 16:13:29 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:05:02.974 16:13:29 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:02.974 16:13:29 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:08.265 00:05:08.265 real 0m8.672s 00:05:08.265 user 0m2.825s 00:05:08.265 sys 0m5.024s 00:05:08.265 16:13:34 setup.sh.driver.guess_driver -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:08.265 16:13:34 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:05:08.265 ************************************ 00:05:08.265 END TEST guess_driver 00:05:08.265 ************************************ 00:05:08.265 00:05:08.265 real 0m13.709s 00:05:08.265 user 0m4.312s 00:05:08.265 sys 0m7.787s 00:05:08.265 16:13:34 setup.sh.driver -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:08.265 16:13:34 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:05:08.266 ************************************ 00:05:08.266 END TEST driver 00:05:08.266 ************************************ 00:05:08.266 16:13:34 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:05:08.266 16:13:34 setup.sh -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:08.266 16:13:34 setup.sh -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:08.266 16:13:34 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:08.266 ************************************ 00:05:08.266 START TEST devices 00:05:08.266 ************************************ 00:05:08.266 16:13:34 setup.sh.devices -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:05:08.266 * Looking for test storage... 
00:05:08.266 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:05:08.266 16:13:34 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:05:08.266 16:13:34 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:05:08.266 16:13:34 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:08.266 16:13:34 setup.sh.devices -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:12.472 16:13:38 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:05:12.472 16:13:38 setup.sh.devices -- common/autotest_common.sh@1668 -- # zoned_devs=() 00:05:12.472 16:13:38 setup.sh.devices -- common/autotest_common.sh@1668 -- # local -gA zoned_devs 00:05:12.472 16:13:38 setup.sh.devices -- common/autotest_common.sh@1669 -- # local nvme bdf 00:05:12.472 16:13:38 setup.sh.devices -- common/autotest_common.sh@1671 -- # for nvme in /sys/block/nvme* 00:05:12.472 16:13:38 setup.sh.devices -- common/autotest_common.sh@1672 -- # is_block_zoned nvme0n1 00:05:12.472 16:13:38 setup.sh.devices -- common/autotest_common.sh@1661 -- # local device=nvme0n1 00:05:12.472 16:13:38 setup.sh.devices -- common/autotest_common.sh@1663 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:12.472 16:13:38 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ none != none ]] 00:05:12.472 16:13:38 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:05:12.472 16:13:38 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:05:12.472 16:13:38 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:05:12.472 16:13:38 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:05:12.473 16:13:38 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:05:12.473 16:13:38 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:12.473 16:13:38 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 
00:05:12.473 16:13:38 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:05:12.473 16:13:38 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:65:00.0 00:05:12.473 16:13:38 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\6\5\:\0\0\.\0* ]] 00:05:12.473 16:13:38 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:05:12.473 16:13:38 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:05:12.473 16:13:38 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:05:12.473 No valid GPT data, bailing 00:05:12.473 16:13:38 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:12.473 16:13:38 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:05:12.473 16:13:38 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:05:12.473 16:13:38 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:05:12.473 16:13:38 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:05:12.473 16:13:38 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:05:12.473 16:13:38 setup.sh.devices -- setup/common.sh@80 -- # echo 1920383410176 00:05:12.473 16:13:38 setup.sh.devices -- setup/devices.sh@204 -- # (( 1920383410176 >= min_disk_size )) 00:05:12.473 16:13:38 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:12.473 16:13:38 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:65:00.0 00:05:12.473 16:13:38 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:05:12.473 16:13:38 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:05:12.473 16:13:38 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:05:12.473 16:13:38 setup.sh.devices -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:12.473 16:13:38 setup.sh.devices -- 
common/autotest_common.sh@1106 -- # xtrace_disable 00:05:12.473 16:13:38 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:12.473 ************************************ 00:05:12.473 START TEST nvme_mount 00:05:12.473 ************************************ 00:05:12.473 16:13:38 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1124 -- # nvme_mount 00:05:12.473 16:13:38 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:05:12.473 16:13:38 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:05:12.473 16:13:38 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:12.473 16:13:38 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:12.473 16:13:38 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:05:12.473 16:13:38 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:05:12.473 16:13:38 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:05:12.473 16:13:38 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:05:12.473 16:13:38 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:12.473 16:13:38 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:05:12.473 16:13:38 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:05:12.473 16:13:38 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:05:12.473 16:13:38 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:12.473 16:13:38 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:12.473 16:13:38 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:05:12.473 16:13:38 
setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:12.473 16:13:38 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:05:12.473 16:13:38 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:05:12.473 16:13:38 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:05:13.045 Creating new GPT entries in memory. 00:05:13.045 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:13.045 other utilities. 00:05:13.045 16:13:39 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:05:13.045 16:13:39 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:13.045 16:13:39 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:13.045 16:13:39 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:13.045 16:13:39 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:05:13.987 Creating new GPT entries in memory. 00:05:13.987 The operation has completed successfully. 
00:05:13.987 16:13:40 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:13.987 16:13:40 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:13.987 16:13:40 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 2864043 00:05:13.987 16:13:40 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:13.987 16:13:40 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size= 00:05:13.987 16:13:40 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:13.987 16:13:40 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:05:13.987 16:13:40 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:05:13.987 16:13:40 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:13.987 16:13:40 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:65:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:13.987 16:13:40 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:05:13.987 16:13:40 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:05:13.987 16:13:40 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:13.987 16:13:40 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 
00:05:13.987 16:13:40 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:05:13.987 16:13:40 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:13.987 16:13:40 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:05:13.987 16:13:40 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:05:13.987 16:13:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:13.987 16:13:40 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:05:13.987 16:13:40 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:05:13.987 16:13:40 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:13.987 16:13:40 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:17.291 16:13:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:17.291 16:13:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:17.291 16:13:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:17.291 16:13:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:17.291 16:13:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:17.291 16:13:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:17.291 16:13:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:17.291 16:13:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:17.291 16:13:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == 
\0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:17.291 16:13:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:17.291 16:13:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:17.291 16:13:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:17.291 16:13:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:17.291 16:13:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:17.291 16:13:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:17.291 16:13:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:17.291 16:13:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:17.291 16:13:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:05:17.291 16:13:44 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:05:17.291 16:13:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:17.291 16:13:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:17.291 16:13:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:17.291 16:13:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:17.291 16:13:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:17.291 16:13:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:17.291 16:13:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read 
-r pci _ _ status 00:05:17.291 16:13:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:17.291 16:13:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:17.291 16:13:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:17.291 16:13:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:17.291 16:13:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:17.291 16:13:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:17.291 16:13:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:17.291 16:13:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:17.291 16:13:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:17.291 16:13:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:17.551 16:13:44 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:17.551 16:13:44 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:05:17.551 16:13:44 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:17.551 16:13:44 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:17.551 16:13:44 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:17.551 16:13:44 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 
00:05:17.551 16:13:44 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:17.551 16:13:44 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:17.551 16:13:44 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:17.551 16:13:44 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:17.810 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:17.810 16:13:44 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:17.810 16:13:44 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:18.069 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:05:18.069 /dev/nvme0n1: 8 bytes were erased at offset 0x1bf1fc55e00 (gpt): 45 46 49 20 50 41 52 54 00:05:18.069 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:18.069 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:18.069 16:13:44 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:05:18.070 16:13:44 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:05:18.070 16:13:44 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:18.070 16:13:44 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:05:18.070 16:13:44 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:05:18.070 16:13:44 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount 
/dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:18.070 16:13:44 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:65:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:18.070 16:13:44 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:05:18.070 16:13:44 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:05:18.070 16:13:44 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:18.070 16:13:44 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:18.070 16:13:44 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:05:18.070 16:13:44 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:18.070 16:13:44 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:05:18.070 16:13:44 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:05:18.070 16:13:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:18.070 16:13:44 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:05:18.070 16:13:44 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:05:18.070 16:13:44 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:18.070 16:13:44 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:20.678 16:13:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == 
\0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:20.678 16:13:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:20.678 16:13:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:20.678 16:13:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:20.678 16:13:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:20.678 16:13:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:20.678 16:13:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:20.678 16:13:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:20.678 16:13:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:20.678 16:13:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:20.678 16:13:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:20.678 16:13:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:20.678 16:13:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:20.678 16:13:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:20.678 16:13:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:20.678 16:13:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:20.678 16:13:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:20.678 16:13:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ 
\d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:05:20.678 16:13:47 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:05:20.678 16:13:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:20.678 16:13:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:20.678 16:13:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:20.678 16:13:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:20.678 16:13:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:20.678 16:13:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:20.678 16:13:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:20.678 16:13:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:20.678 16:13:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:20.678 16:13:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:20.678 16:13:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:20.678 16:13:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:20.678 16:13:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:20.678 16:13:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:20.678 16:13:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:20.678 16:13:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:20.678 16:13:47 setup.sh.devices.nvme_mount 
-- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:20.939 16:13:47 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:20.939 16:13:47 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:05:20.939 16:13:47 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:20.939 16:13:47 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:20.939 16:13:47 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:20.939 16:13:47 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:20.939 16:13:47 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:65:00.0 data@nvme0n1 '' '' 00:05:20.939 16:13:47 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:05:20.939 16:13:47 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:05:20.939 16:13:47 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:05:20.939 16:13:47 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:05:20.939 16:13:47 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:05:20.939 16:13:47 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:20.939 16:13:47 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:05:20.939 16:13:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:20.939 16:13:47 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:05:20.939 16:13:47 
setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:05:20.939 16:13:47 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:20.939 16:13:47 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:24.241 16:13:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:24.241 16:13:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:24.241 16:13:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:24.241 16:13:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:24.241 16:13:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:24.241 16:13:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:24.241 16:13:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:24.241 16:13:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:24.241 16:13:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:24.241 16:13:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:24.241 16:13:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:24.241 16:13:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:24.241 16:13:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:24.241 16:13:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:24.241 16:13:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 
0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:24.241 16:13:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:24.241 16:13:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:24.241 16:13:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:05:24.241 16:13:50 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:05:24.241 16:13:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:24.241 16:13:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:24.241 16:13:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:24.241 16:13:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:24.241 16:13:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:24.241 16:13:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:24.241 16:13:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:24.241 16:13:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:24.241 16:13:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:24.241 16:13:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:24.241 16:13:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:24.241 16:13:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:24.241 16:13:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r 
pci _ _ status 00:05:24.241 16:13:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:24.241 16:13:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:24.241 16:13:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:24.241 16:13:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:24.502 16:13:51 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:24.502 16:13:51 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:24.502 16:13:51 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:05:24.502 16:13:51 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:05:24.502 16:13:51 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:24.502 16:13:51 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:24.502 16:13:51 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:24.502 16:13:51 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:24.502 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:24.502 00:05:24.502 real 0m12.601s 00:05:24.502 user 0m3.575s 00:05:24.502 sys 0m6.718s 00:05:24.502 16:13:51 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:24.502 16:13:51 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:05:24.502 ************************************ 00:05:24.502 END TEST nvme_mount 00:05:24.502 ************************************ 00:05:24.502 16:13:51 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:05:24.502 16:13:51 setup.sh.devices -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 
00:05:24.502 16:13:51 setup.sh.devices -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:24.502 16:13:51 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:24.502 ************************************ 00:05:24.502 START TEST dm_mount 00:05:24.502 ************************************ 00:05:24.502 16:13:51 setup.sh.devices.dm_mount -- common/autotest_common.sh@1124 -- # dm_mount 00:05:24.502 16:13:51 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:05:24.502 16:13:51 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:05:24.502 16:13:51 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:05:24.502 16:13:51 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:05:24.502 16:13:51 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:05:24.503 16:13:51 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:05:24.503 16:13:51 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:05:24.503 16:13:51 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:24.503 16:13:51 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:05:24.503 16:13:51 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:05:24.503 16:13:51 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:05:24.503 16:13:51 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:24.503 16:13:51 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:24.503 16:13:51 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:05:24.503 16:13:51 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:24.503 16:13:51 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:24.503 16:13:51 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 
00:05:24.503 16:13:51 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:24.503 16:13:51 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:05:24.503 16:13:51 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:05:24.503 16:13:51 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:05:25.886 Creating new GPT entries in memory. 00:05:25.886 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:25.886 other utilities. 00:05:25.886 16:13:52 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:05:25.886 16:13:52 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:25.886 16:13:52 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:25.886 16:13:52 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:25.886 16:13:52 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:05:26.829 Creating new GPT entries in memory. 00:05:26.829 The operation has completed successfully. 00:05:26.829 16:13:53 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:26.829 16:13:53 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:26.829 16:13:53 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:26.829 16:13:53 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:26.829 16:13:53 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:05:27.771 The operation has completed successfully. 
00:05:27.771 16:13:54 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:27.771 16:13:54 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:27.771 16:13:54 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 2868899 00:05:27.771 16:13:54 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:05:27.771 16:13:54 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:27.771 16:13:54 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:27.771 16:13:54 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:05:27.771 16:13:54 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:05:27.771 16:13:54 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:27.771 16:13:54 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:05:27.771 16:13:54 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:27.771 16:13:54 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:05:27.771 16:13:54 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:05:27.771 16:13:54 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:05:27.771 16:13:54 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:05:27.771 16:13:54 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:05:27.771 16:13:54 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:27.771 16:13:54 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # 
local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount size= 00:05:27.771 16:13:54 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:27.771 16:13:54 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:27.772 16:13:54 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:05:27.772 16:13:54 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:27.772 16:13:54 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:65:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:27.772 16:13:54 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:05:27.772 16:13:54 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:05:27.772 16:13:54 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:27.772 16:13:54 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:27.772 16:13:54 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:05:27.772 16:13:54 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:05:27.772 16:13:54 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:05:27.772 16:13:54 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:05:27.772 16:13:54 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ 
status 00:05:27.772 16:13:54 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:05:27.772 16:13:54 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:05:27.772 16:13:54 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:27.772 16:13:54 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:31.075 16:13:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:31.075 16:13:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:31.075 16:13:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:31.075 16:13:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:31.075 16:13:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:31.075 16:13:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:31.075 16:13:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:31.075 16:13:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:31.075 16:13:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:31.075 16:13:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:31.075 16:13:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:31.075 16:13:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:31.075 16:13:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:31.075 16:13:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ 
status 00:05:31.075 16:13:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:31.075 16:13:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:31.075 16:13:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:31.075 16:13:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:05:31.075 16:13:57 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:05:31.075 16:13:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:31.075 16:13:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:31.075 16:13:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:31.075 16:13:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:31.075 16:13:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:31.075 16:13:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:31.075 16:13:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:31.075 16:13:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:31.075 16:13:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:31.075 16:13:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:31.075 16:13:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:31.075 16:13:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 
0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:31.075 16:13:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:31.075 16:13:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:31.075 16:13:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:31.075 16:13:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:31.075 16:13:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:31.338 16:13:58 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:31.338 16:13:58 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount ]] 00:05:31.338 16:13:58 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:31.338 16:13:58 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:05:31.338 16:13:58 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:31.338 16:13:58 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:31.338 16:13:58 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:65:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:05:31.338 16:13:58 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:05:31.338 16:13:58 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:05:31.338 16:13:58 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:05:31.338 16:13:58 setup.sh.devices.dm_mount -- 
setup/devices.sh@51 -- # local test_file= 00:05:31.338 16:13:58 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:05:31.338 16:13:58 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:31.338 16:13:58 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:05:31.338 16:13:58 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:31.338 16:13:58 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:05:31.338 16:13:58 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:05:31.338 16:13:58 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:31.338 16:13:58 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:34.642 16:14:01 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:34.642 16:14:01 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:34.643 16:14:01 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:34.643 16:14:01 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:34.643 16:14:01 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:34.643 16:14:01 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:34.643 16:14:01 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:34.643 16:14:01 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:34.643 16:14:01 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:34.643 16:14:01 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:34.643 16:14:01 
setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:34.643 16:14:01 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:34.643 16:14:01 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:34.643 16:14:01 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:34.643 16:14:01 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:34.643 16:14:01 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:34.643 16:14:01 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:34.643 16:14:01 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:05:34.643 16:14:01 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:05:34.643 16:14:01 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:34.643 16:14:01 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:34.643 16:14:01 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:34.643 16:14:01 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:34.643 16:14:01 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:34.643 16:14:01 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:34.643 16:14:01 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:34.643 16:14:01 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.5 
== \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:34.643 16:14:01 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:34.643 16:14:01 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:34.643 16:14:01 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:34.643 16:14:01 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:34.643 16:14:01 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:34.643 16:14:01 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:34.643 16:14:01 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:34.643 16:14:01 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:34.643 16:14:01 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:34.904 16:14:01 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:34.904 16:14:01 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:34.904 16:14:01 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:05:34.904 16:14:01 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:05:34.904 16:14:01 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:34.904 16:14:01 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:34.904 16:14:01 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:05:35.166 16:14:01 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:35.166 16:14:01 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:05:35.166 /dev/nvme0n1p1: 2 bytes were 
erased at offset 0x00000438 (ext4): 53 ef 00:05:35.166 16:14:01 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:35.166 16:14:01 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:05:35.166 00:05:35.166 real 0m10.451s 00:05:35.166 user 0m2.712s 00:05:35.166 sys 0m4.738s 00:05:35.166 16:14:01 setup.sh.devices.dm_mount -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:35.166 16:14:01 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:05:35.166 ************************************ 00:05:35.166 END TEST dm_mount 00:05:35.166 ************************************ 00:05:35.166 16:14:01 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:05:35.166 16:14:01 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:05:35.166 16:14:01 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:35.166 16:14:01 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:35.166 16:14:01 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:35.166 16:14:01 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:35.166 16:14:01 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:35.428 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:05:35.428 /dev/nvme0n1: 8 bytes were erased at offset 0x1bf1fc55e00 (gpt): 45 46 49 20 50 41 52 54 00:05:35.428 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:35.428 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:35.428 16:14:02 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:05:35.428 16:14:02 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:35.428 16:14:02 setup.sh.devices -- setup/devices.sh@36 -- # [[ 
-L /dev/mapper/nvme_dm_test ]] 00:05:35.428 16:14:02 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:35.428 16:14:02 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:35.428 16:14:02 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:05:35.428 16:14:02 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:05:35.428 00:05:35.428 real 0m27.494s 00:05:35.428 user 0m7.882s 00:05:35.428 sys 0m14.154s 00:05:35.428 16:14:02 setup.sh.devices -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:35.428 16:14:02 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:35.428 ************************************ 00:05:35.428 END TEST devices 00:05:35.428 ************************************ 00:05:35.428 00:05:35.428 real 1m33.278s 00:05:35.428 user 0m30.721s 00:05:35.428 sys 0m53.276s 00:05:35.428 16:14:02 setup.sh -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:35.428 16:14:02 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:35.428 ************************************ 00:05:35.428 END TEST setup.sh 00:05:35.428 ************************************ 00:05:35.428 16:14:02 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:05:38.732 Hugepages 00:05:38.732 node hugesize free / total 00:05:38.732 node0 1048576kB 0 / 0 00:05:38.732 node0 2048kB 2048 / 2048 00:05:38.732 node1 1048576kB 0 / 0 00:05:38.732 node1 2048kB 0 / 0 00:05:38.732 00:05:38.732 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:38.732 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - - 00:05:38.732 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - - 00:05:38.732 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - - 00:05:38.732 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - - 00:05:38.732 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - - 00:05:38.732 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - - 00:05:38.732 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - - 00:05:38.732 
I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - - 00:05:38.732 NVMe 0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1 00:05:38.732 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - - 00:05:38.732 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - - 00:05:38.732 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - - 00:05:38.732 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - - 00:05:38.732 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - - 00:05:38.732 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - - 00:05:38.732 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - - 00:05:38.732 I/OAT 0000:80:01.7 8086 0b00 1 ioatdma - - 00:05:38.732 16:14:05 -- spdk/autotest.sh@130 -- # uname -s 00:05:38.732 16:14:05 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:05:38.732 16:14:05 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:05:38.732 16:14:05 -- common/autotest_common.sh@1530 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:42.044 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:05:42.044 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:05:42.044 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:05:42.044 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:05:42.044 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:05:42.044 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:05:42.044 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:05:42.044 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:05:42.044 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:05:42.044 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:05:42.044 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:05:42.044 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:05:42.044 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:05:42.044 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:05:42.044 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:05:42.044 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:05:44.016 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:05:44.016 16:14:10 -- common/autotest_common.sh@1531 -- # sleep 1 00:05:45.402 16:14:11 -- 
common/autotest_common.sh@1532 -- # bdfs=() 00:05:45.402 16:14:11 -- common/autotest_common.sh@1532 -- # local bdfs 00:05:45.402 16:14:11 -- common/autotest_common.sh@1533 -- # bdfs=($(get_nvme_bdfs)) 00:05:45.402 16:14:11 -- common/autotest_common.sh@1533 -- # get_nvme_bdfs 00:05:45.402 16:14:11 -- common/autotest_common.sh@1512 -- # bdfs=() 00:05:45.402 16:14:11 -- common/autotest_common.sh@1512 -- # local bdfs 00:05:45.402 16:14:11 -- common/autotest_common.sh@1513 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:45.402 16:14:11 -- common/autotest_common.sh@1513 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:45.402 16:14:11 -- common/autotest_common.sh@1513 -- # jq -r '.config[].params.traddr' 00:05:45.402 16:14:11 -- common/autotest_common.sh@1514 -- # (( 1 == 0 )) 00:05:45.402 16:14:11 -- common/autotest_common.sh@1518 -- # printf '%s\n' 0000:65:00.0 00:05:45.402 16:14:11 -- common/autotest_common.sh@1535 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:48.703 Waiting for block devices as requested 00:05:48.703 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:05:48.703 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:05:48.703 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:05:48.703 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:05:48.703 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:05:48.703 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:05:48.964 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:05:48.964 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:05:48.964 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:05:49.225 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:05:49.225 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:05:49.225 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:05:49.485 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:05:49.485 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:05:49.485 0000:00:01.3 (8086 0b00): vfio-pci -> 
ioatdma 00:05:49.745 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:05:49.745 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:05:50.006 16:14:16 -- common/autotest_common.sh@1537 -- # for bdf in "${bdfs[@]}" 00:05:50.006 16:14:16 -- common/autotest_common.sh@1538 -- # get_nvme_ctrlr_from_bdf 0000:65:00.0 00:05:50.006 16:14:16 -- common/autotest_common.sh@1501 -- # readlink -f /sys/class/nvme/nvme0 00:05:50.006 16:14:16 -- common/autotest_common.sh@1501 -- # grep 0000:65:00.0/nvme/nvme 00:05:50.006 16:14:16 -- common/autotest_common.sh@1501 -- # bdf_sysfs_path=/sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:05:50.006 16:14:16 -- common/autotest_common.sh@1502 -- # [[ -z /sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 ]] 00:05:50.006 16:14:16 -- common/autotest_common.sh@1506 -- # basename /sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:05:50.006 16:14:16 -- common/autotest_common.sh@1506 -- # printf '%s\n' nvme0 00:05:50.006 16:14:16 -- common/autotest_common.sh@1538 -- # nvme_ctrlr=/dev/nvme0 00:05:50.006 16:14:16 -- common/autotest_common.sh@1539 -- # [[ -z /dev/nvme0 ]] 00:05:50.006 16:14:16 -- common/autotest_common.sh@1544 -- # nvme id-ctrl /dev/nvme0 00:05:50.006 16:14:16 -- common/autotest_common.sh@1544 -- # grep oacs 00:05:50.006 16:14:16 -- common/autotest_common.sh@1544 -- # cut -d: -f2 00:05:50.006 16:14:16 -- common/autotest_common.sh@1544 -- # oacs=' 0x5f' 00:05:50.006 16:14:16 -- common/autotest_common.sh@1545 -- # oacs_ns_manage=8 00:05:50.006 16:14:16 -- common/autotest_common.sh@1547 -- # [[ 8 -ne 0 ]] 00:05:50.006 16:14:16 -- common/autotest_common.sh@1553 -- # nvme id-ctrl /dev/nvme0 00:05:50.006 16:14:16 -- common/autotest_common.sh@1553 -- # grep unvmcap 00:05:50.006 16:14:16 -- common/autotest_common.sh@1553 -- # cut -d: -f2 00:05:50.006 16:14:16 -- common/autotest_common.sh@1553 -- # unvmcap=' 0' 00:05:50.006 16:14:16 -- common/autotest_common.sh@1554 -- # [[ 0 -eq 0 ]] 00:05:50.006 16:14:16 -- 
common/autotest_common.sh@1556 -- # continue 00:05:50.006 16:14:16 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:05:50.006 16:14:16 -- common/autotest_common.sh@729 -- # xtrace_disable 00:05:50.006 16:14:16 -- common/autotest_common.sh@10 -- # set +x 00:05:50.006 16:14:16 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:05:50.006 16:14:16 -- common/autotest_common.sh@723 -- # xtrace_disable 00:05:50.006 16:14:16 -- common/autotest_common.sh@10 -- # set +x 00:05:50.006 16:14:16 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:53.306 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:05:53.306 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:05:53.306 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:05:53.306 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:05:53.306 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:05:53.306 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:05:53.307 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:05:53.307 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:05:53.307 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:05:53.566 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:05:53.566 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:05:53.566 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:05:53.566 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:05:53.566 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:05:53.566 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:05:53.566 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:05:53.566 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:05:53.827 16:14:20 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:05:53.827 16:14:20 -- common/autotest_common.sh@729 -- # xtrace_disable 00:05:53.827 16:14:20 -- common/autotest_common.sh@10 -- # set +x 00:05:53.827 16:14:20 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:05:53.827 16:14:20 -- common/autotest_common.sh@1590 -- # mapfile -t bdfs 00:05:53.827 16:14:20 -- 
common/autotest_common.sh@1590 -- # get_nvme_bdfs_by_id 0x0a54 00:05:53.827 16:14:20 -- common/autotest_common.sh@1576 -- # bdfs=() 00:05:53.827 16:14:20 -- common/autotest_common.sh@1576 -- # local bdfs 00:05:53.827 16:14:20 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs 00:05:53.827 16:14:20 -- common/autotest_common.sh@1512 -- # bdfs=() 00:05:53.827 16:14:20 -- common/autotest_common.sh@1512 -- # local bdfs 00:05:53.827 16:14:20 -- common/autotest_common.sh@1513 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:53.827 16:14:20 -- common/autotest_common.sh@1513 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:53.827 16:14:20 -- common/autotest_common.sh@1513 -- # jq -r '.config[].params.traddr' 00:05:54.088 16:14:20 -- common/autotest_common.sh@1514 -- # (( 1 == 0 )) 00:05:54.088 16:14:20 -- common/autotest_common.sh@1518 -- # printf '%s\n' 0000:65:00.0 00:05:54.088 16:14:20 -- common/autotest_common.sh@1578 -- # for bdf in $(get_nvme_bdfs) 00:05:54.088 16:14:20 -- common/autotest_common.sh@1579 -- # cat /sys/bus/pci/devices/0000:65:00.0/device 00:05:54.088 16:14:20 -- common/autotest_common.sh@1579 -- # device=0xa80a 00:05:54.088 16:14:20 -- common/autotest_common.sh@1580 -- # [[ 0xa80a == \0\x\0\a\5\4 ]] 00:05:54.088 16:14:20 -- common/autotest_common.sh@1585 -- # printf '%s\n' 00:05:54.088 16:14:20 -- common/autotest_common.sh@1591 -- # [[ -z '' ]] 00:05:54.088 16:14:20 -- common/autotest_common.sh@1592 -- # return 0 00:05:54.088 16:14:20 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:05:54.088 16:14:20 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:05:54.088 16:14:20 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:05:54.088 16:14:20 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:05:54.088 16:14:20 -- spdk/autotest.sh@162 -- # timing_enter lib 00:05:54.088 16:14:20 -- common/autotest_common.sh@723 -- # xtrace_disable 00:05:54.088 16:14:20 -- common/autotest_common.sh@10 -- # set +x 
00:05:54.088 16:14:20 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:05:54.088 16:14:20 -- spdk/autotest.sh@168 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:05:54.088 16:14:20 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:54.088 16:14:20 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:54.088 16:14:20 -- common/autotest_common.sh@10 -- # set +x 00:05:54.088 ************************************ 00:05:54.088 START TEST env 00:05:54.088 ************************************ 00:05:54.088 16:14:20 env -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:05:54.088 * Looking for test storage... 00:05:54.088 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:05:54.088 16:14:20 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:05:54.088 16:14:20 env -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:54.088 16:14:20 env -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:54.088 16:14:20 env -- common/autotest_common.sh@10 -- # set +x 00:05:54.088 ************************************ 00:05:54.088 START TEST env_memory 00:05:54.088 ************************************ 00:05:54.088 16:14:20 env.env_memory -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:05:54.088 00:05:54.088 00:05:54.088 CUnit - A unit testing framework for C - Version 2.1-3 00:05:54.088 http://cunit.sourceforge.net/ 00:05:54.088 00:05:54.088 00:05:54.088 Suite: memory 00:05:54.350 Test: alloc and free memory map ...[2024-06-07 16:14:20.946129] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:54.350 passed 00:05:54.350 Test: mem map translation ...[2024-06-07 16:14:20.971513] 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:54.350 [2024-06-07 16:14:20.971532] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:54.350 [2024-06-07 16:14:20.971577] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:54.350 [2024-06-07 16:14:20.971584] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:54.350 passed 00:05:54.350 Test: mem map registration ...[2024-06-07 16:14:21.026669] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:05:54.350 [2024-06-07 16:14:21.026684] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:05:54.350 passed 00:05:54.350 Test: mem map adjacent registrations ...passed 00:05:54.350 00:05:54.350 Run Summary: Type Total Ran Passed Failed Inactive 00:05:54.350 suites 1 1 n/a 0 0 00:05:54.350 tests 4 4 4 0 0 00:05:54.350 asserts 152 152 152 0 n/a 00:05:54.350 00:05:54.350 Elapsed time = 0.194 seconds 00:05:54.350 00:05:54.350 real 0m0.208s 00:05:54.350 user 0m0.195s 00:05:54.350 sys 0m0.012s 00:05:54.350 16:14:21 env.env_memory -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:54.350 16:14:21 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:05:54.350 ************************************ 00:05:54.350 END TEST env_memory 00:05:54.350 ************************************ 
00:05:54.350 16:14:21 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:54.350 16:14:21 env -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:54.350 16:14:21 env -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:54.350 16:14:21 env -- common/autotest_common.sh@10 -- # set +x 00:05:54.350 ************************************ 00:05:54.350 START TEST env_vtophys 00:05:54.350 ************************************ 00:05:54.350 16:14:21 env.env_vtophys -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:54.611 EAL: lib.eal log level changed from notice to debug 00:05:54.611 EAL: Detected lcore 0 as core 0 on socket 0 00:05:54.611 EAL: Detected lcore 1 as core 1 on socket 0 00:05:54.611 EAL: Detected lcore 2 as core 2 on socket 0 00:05:54.611 EAL: Detected lcore 3 as core 3 on socket 0 00:05:54.611 EAL: Detected lcore 4 as core 4 on socket 0 00:05:54.611 EAL: Detected lcore 5 as core 5 on socket 0 00:05:54.611 EAL: Detected lcore 6 as core 6 on socket 0 00:05:54.612 EAL: Detected lcore 7 as core 7 on socket 0 00:05:54.612 EAL: Detected lcore 8 as core 8 on socket 0 00:05:54.612 EAL: Detected lcore 9 as core 9 on socket 0 00:05:54.612 EAL: Detected lcore 10 as core 10 on socket 0 00:05:54.612 EAL: Detected lcore 11 as core 11 on socket 0 00:05:54.612 EAL: Detected lcore 12 as core 12 on socket 0 00:05:54.612 EAL: Detected lcore 13 as core 13 on socket 0 00:05:54.612 EAL: Detected lcore 14 as core 14 on socket 0 00:05:54.612 EAL: Detected lcore 15 as core 15 on socket 0 00:05:54.612 EAL: Detected lcore 16 as core 16 on socket 0 00:05:54.612 EAL: Detected lcore 17 as core 17 on socket 0 00:05:54.612 EAL: Detected lcore 18 as core 18 on socket 0 00:05:54.612 EAL: Detected lcore 19 as core 19 on socket 0 00:05:54.612 EAL: Detected lcore 20 as core 20 on socket 0 00:05:54.612 EAL: Detected lcore 21 as core 21 on 
socket 0 00:05:54.612 EAL: Detected lcore 22 as core 22 on socket 0 00:05:54.612 EAL: Detected lcore 23 as core 23 on socket 0 00:05:54.612 EAL: Detected lcore 24 as core 24 on socket 0 00:05:54.612 EAL: Detected lcore 25 as core 25 on socket 0 00:05:54.612 EAL: Detected lcore 26 as core 26 on socket 0 00:05:54.612 EAL: Detected lcore 27 as core 27 on socket 0 00:05:54.612 EAL: Detected lcore 28 as core 28 on socket 0 00:05:54.612 EAL: Detected lcore 29 as core 29 on socket 0 00:05:54.612 EAL: Detected lcore 30 as core 30 on socket 0 00:05:54.612 EAL: Detected lcore 31 as core 31 on socket 0 00:05:54.612 EAL: Detected lcore 32 as core 32 on socket 0 00:05:54.612 EAL: Detected lcore 33 as core 33 on socket 0 00:05:54.612 EAL: Detected lcore 34 as core 34 on socket 0 00:05:54.612 EAL: Detected lcore 35 as core 35 on socket 0 00:05:54.612 EAL: Detected lcore 36 as core 0 on socket 1 00:05:54.612 EAL: Detected lcore 37 as core 1 on socket 1 00:05:54.612 EAL: Detected lcore 38 as core 2 on socket 1 00:05:54.612 EAL: Detected lcore 39 as core 3 on socket 1 00:05:54.612 EAL: Detected lcore 40 as core 4 on socket 1 00:05:54.612 EAL: Detected lcore 41 as core 5 on socket 1 00:05:54.612 EAL: Detected lcore 42 as core 6 on socket 1 00:05:54.612 EAL: Detected lcore 43 as core 7 on socket 1 00:05:54.612 EAL: Detected lcore 44 as core 8 on socket 1 00:05:54.612 EAL: Detected lcore 45 as core 9 on socket 1 00:05:54.612 EAL: Detected lcore 46 as core 10 on socket 1 00:05:54.612 EAL: Detected lcore 47 as core 11 on socket 1 00:05:54.612 EAL: Detected lcore 48 as core 12 on socket 1 00:05:54.612 EAL: Detected lcore 49 as core 13 on socket 1 00:05:54.612 EAL: Detected lcore 50 as core 14 on socket 1 00:05:54.612 EAL: Detected lcore 51 as core 15 on socket 1 00:05:54.612 EAL: Detected lcore 52 as core 16 on socket 1 00:05:54.612 EAL: Detected lcore 53 as core 17 on socket 1 00:05:54.612 EAL: Detected lcore 54 as core 18 on socket 1 00:05:54.612 EAL: Detected lcore 55 as core 19 on 
socket 1 00:05:54.612 EAL: Detected lcore 56 as core 20 on socket 1 00:05:54.612 EAL: Detected lcore 57 as core 21 on socket 1 00:05:54.612 EAL: Detected lcore 58 as core 22 on socket 1 00:05:54.612 EAL: Detected lcore 59 as core 23 on socket 1 00:05:54.612 EAL: Detected lcore 60 as core 24 on socket 1 00:05:54.612 EAL: Detected lcore 61 as core 25 on socket 1 00:05:54.612 EAL: Detected lcore 62 as core 26 on socket 1 00:05:54.612 EAL: Detected lcore 63 as core 27 on socket 1 00:05:54.612 EAL: Detected lcore 64 as core 28 on socket 1 00:05:54.612 EAL: Detected lcore 65 as core 29 on socket 1 00:05:54.612 EAL: Detected lcore 66 as core 30 on socket 1 00:05:54.612 EAL: Detected lcore 67 as core 31 on socket 1 00:05:54.612 EAL: Detected lcore 68 as core 32 on socket 1 00:05:54.612 EAL: Detected lcore 69 as core 33 on socket 1 00:05:54.612 EAL: Detected lcore 70 as core 34 on socket 1 00:05:54.612 EAL: Detected lcore 71 as core 35 on socket 1 00:05:54.612 EAL: Detected lcore 72 as core 0 on socket 0 00:05:54.612 EAL: Detected lcore 73 as core 1 on socket 0 00:05:54.612 EAL: Detected lcore 74 as core 2 on socket 0 00:05:54.612 EAL: Detected lcore 75 as core 3 on socket 0 00:05:54.612 EAL: Detected lcore 76 as core 4 on socket 0 00:05:54.612 EAL: Detected lcore 77 as core 5 on socket 0 00:05:54.612 EAL: Detected lcore 78 as core 6 on socket 0 00:05:54.612 EAL: Detected lcore 79 as core 7 on socket 0 00:05:54.612 EAL: Detected lcore 80 as core 8 on socket 0 00:05:54.612 EAL: Detected lcore 81 as core 9 on socket 0 00:05:54.612 EAL: Detected lcore 82 as core 10 on socket 0 00:05:54.612 EAL: Detected lcore 83 as core 11 on socket 0 00:05:54.612 EAL: Detected lcore 84 as core 12 on socket 0 00:05:54.612 EAL: Detected lcore 85 as core 13 on socket 0 00:05:54.612 EAL: Detected lcore 86 as core 14 on socket 0 00:05:54.612 EAL: Detected lcore 87 as core 15 on socket 0 00:05:54.612 EAL: Detected lcore 88 as core 16 on socket 0 00:05:54.612 EAL: Detected lcore 89 as core 17 on 
socket 0 00:05:54.612 EAL: Detected lcore 90 as core 18 on socket 0 00:05:54.612 EAL: Detected lcore 91 as core 19 on socket 0 00:05:54.612 EAL: Detected lcore 92 as core 20 on socket 0 00:05:54.612 EAL: Detected lcore 93 as core 21 on socket 0 00:05:54.612 EAL: Detected lcore 94 as core 22 on socket 0 00:05:54.612 EAL: Detected lcore 95 as core 23 on socket 0 00:05:54.612 EAL: Detected lcore 96 as core 24 on socket 0 00:05:54.612 EAL: Detected lcore 97 as core 25 on socket 0 00:05:54.612 EAL: Detected lcore 98 as core 26 on socket 0 00:05:54.612 EAL: Detected lcore 99 as core 27 on socket 0 00:05:54.612 EAL: Detected lcore 100 as core 28 on socket 0 00:05:54.612 EAL: Detected lcore 101 as core 29 on socket 0 00:05:54.612 EAL: Detected lcore 102 as core 30 on socket 0 00:05:54.612 EAL: Detected lcore 103 as core 31 on socket 0 00:05:54.612 EAL: Detected lcore 104 as core 32 on socket 0 00:05:54.612 EAL: Detected lcore 105 as core 33 on socket 0 00:05:54.612 EAL: Detected lcore 106 as core 34 on socket 0 00:05:54.612 EAL: Detected lcore 107 as core 35 on socket 0 00:05:54.612 EAL: Detected lcore 108 as core 0 on socket 1 00:05:54.612 EAL: Detected lcore 109 as core 1 on socket 1 00:05:54.612 EAL: Detected lcore 110 as core 2 on socket 1 00:05:54.612 EAL: Detected lcore 111 as core 3 on socket 1 00:05:54.612 EAL: Detected lcore 112 as core 4 on socket 1 00:05:54.612 EAL: Detected lcore 113 as core 5 on socket 1 00:05:54.612 EAL: Detected lcore 114 as core 6 on socket 1 00:05:54.612 EAL: Detected lcore 115 as core 7 on socket 1 00:05:54.612 EAL: Detected lcore 116 as core 8 on socket 1 00:05:54.612 EAL: Detected lcore 117 as core 9 on socket 1 00:05:54.612 EAL: Detected lcore 118 as core 10 on socket 1 00:05:54.612 EAL: Detected lcore 119 as core 11 on socket 1 00:05:54.612 EAL: Detected lcore 120 as core 12 on socket 1 00:05:54.612 EAL: Detected lcore 121 as core 13 on socket 1 00:05:54.612 EAL: Detected lcore 122 as core 14 on socket 1 00:05:54.612 EAL: Detected 
lcore 123 as core 15 on socket 1 00:05:54.612 EAL: Detected lcore 124 as core 16 on socket 1 00:05:54.612 EAL: Detected lcore 125 as core 17 on socket 1 00:05:54.612 EAL: Detected lcore 126 as core 18 on socket 1 00:05:54.612 EAL: Detected lcore 127 as core 19 on socket 1 00:05:54.612 EAL: Skipped lcore 128 as core 20 on socket 1 00:05:54.612 EAL: Skipped lcore 129 as core 21 on socket 1 00:05:54.612 EAL: Skipped lcore 130 as core 22 on socket 1 00:05:54.612 EAL: Skipped lcore 131 as core 23 on socket 1 00:05:54.612 EAL: Skipped lcore 132 as core 24 on socket 1 00:05:54.612 EAL: Skipped lcore 133 as core 25 on socket 1 00:05:54.612 EAL: Skipped lcore 134 as core 26 on socket 1 00:05:54.612 EAL: Skipped lcore 135 as core 27 on socket 1 00:05:54.612 EAL: Skipped lcore 136 as core 28 on socket 1 00:05:54.612 EAL: Skipped lcore 137 as core 29 on socket 1 00:05:54.612 EAL: Skipped lcore 138 as core 30 on socket 1 00:05:54.612 EAL: Skipped lcore 139 as core 31 on socket 1 00:05:54.612 EAL: Skipped lcore 140 as core 32 on socket 1 00:05:54.612 EAL: Skipped lcore 141 as core 33 on socket 1 00:05:54.612 EAL: Skipped lcore 142 as core 34 on socket 1 00:05:54.612 EAL: Skipped lcore 143 as core 35 on socket 1 00:05:54.612 EAL: Maximum logical cores by configuration: 128 00:05:54.612 EAL: Detected CPU lcores: 128 00:05:54.612 EAL: Detected NUMA nodes: 2 00:05:54.612 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:05:54.612 EAL: Detected shared linkage of DPDK 00:05:54.612 EAL: No shared files mode enabled, IPC will be disabled 00:05:54.612 EAL: Bus pci wants IOVA as 'DC' 00:05:54.612 EAL: Buses did not request a specific IOVA mode. 00:05:54.612 EAL: IOMMU is available, selecting IOVA as VA mode. 00:05:54.612 EAL: Selected IOVA mode 'VA' 00:05:54.612 EAL: No free 2048 kB hugepages reported on node 1 00:05:54.612 EAL: Probing VFIO support... 
00:05:54.612 EAL: IOMMU type 1 (Type 1) is supported 00:05:54.612 EAL: IOMMU type 7 (sPAPR) is not supported 00:05:54.612 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:05:54.612 EAL: VFIO support initialized 00:05:54.612 EAL: Ask a virtual area of 0x2e000 bytes 00:05:54.612 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:54.612 EAL: Setting up physically contiguous memory... 00:05:54.612 EAL: Setting maximum number of open files to 524288 00:05:54.612 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:54.612 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:05:54.612 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:54.612 EAL: Ask a virtual area of 0x61000 bytes 00:05:54.612 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:54.612 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:54.612 EAL: Ask a virtual area of 0x400000000 bytes 00:05:54.612 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:54.612 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:54.612 EAL: Ask a virtual area of 0x61000 bytes 00:05:54.612 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:54.612 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:54.612 EAL: Ask a virtual area of 0x400000000 bytes 00:05:54.612 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:54.612 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:54.612 EAL: Ask a virtual area of 0x61000 bytes 00:05:54.612 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:54.612 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:54.612 EAL: Ask a virtual area of 0x400000000 bytes 00:05:54.612 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:54.612 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:54.612 EAL: Ask a virtual area of 0x61000 bytes 00:05:54.612 EAL: 
Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:54.612 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:54.612 EAL: Ask a virtual area of 0x400000000 bytes 00:05:54.612 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:54.612 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:54.612 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:05:54.612 EAL: Ask a virtual area of 0x61000 bytes 00:05:54.613 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:05:54.613 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:54.613 EAL: Ask a virtual area of 0x400000000 bytes 00:05:54.613 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:05:54.613 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:05:54.613 EAL: Ask a virtual area of 0x61000 bytes 00:05:54.613 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:05:54.613 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:54.613 EAL: Ask a virtual area of 0x400000000 bytes 00:05:54.613 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:05:54.613 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:05:54.613 EAL: Ask a virtual area of 0x61000 bytes 00:05:54.613 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:05:54.613 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:54.613 EAL: Ask a virtual area of 0x400000000 bytes 00:05:54.613 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:05:54.613 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:05:54.613 EAL: Ask a virtual area of 0x61000 bytes 00:05:54.613 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:05:54.613 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:54.613 EAL: Ask a virtual area of 0x400000000 bytes 00:05:54.613 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 
00:05:54.613 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000
00:05:54.613 EAL: Hugepages will be freed exactly as allocated.
00:05:54.613 EAL: No shared files mode enabled, IPC is disabled
00:05:54.613 EAL: No shared files mode enabled, IPC is disabled
00:05:54.613 EAL: TSC frequency is ~2400000 KHz
00:05:54.613 EAL: Main lcore 0 is ready (tid=7f5573a76a00;cpuset=[0])
00:05:54.613 EAL: Trying to obtain current memory policy.
00:05:54.613 EAL: Setting policy MPOL_PREFERRED for socket 0
00:05:54.613 EAL: Restoring previous memory policy: 0
00:05:54.613 EAL: request: mp_malloc_sync
00:05:54.613 EAL: No shared files mode enabled, IPC is disabled
00:05:54.613 EAL: Heap on socket 0 was expanded by 2MB
00:05:54.613 EAL: No shared files mode enabled, IPC is disabled
00:05:54.613 EAL: No PCI address specified using 'addr=' in: bus=pci
00:05:54.613 EAL: Mem event callback 'spdk:(nil)' registered
00:05:54.613
00:05:54.613
00:05:54.613 CUnit - A unit testing framework for C - Version 2.1-3
00:05:54.613 http://cunit.sourceforge.net/
00:05:54.613
00:05:54.613
00:05:54.613 Suite: components_suite
00:05:54.613 Test: vtophys_malloc_test ...passed
00:05:54.613 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy.
00:05:54.613 EAL: Setting policy MPOL_PREFERRED for socket 0
00:05:54.613 EAL: Restoring previous memory policy: 4
00:05:54.613 EAL: Calling mem event callback 'spdk:(nil)'
00:05:54.613 EAL: request: mp_malloc_sync
00:05:54.613 EAL: No shared files mode enabled, IPC is disabled
00:05:54.613 EAL: Heap on socket 0 was expanded by 4MB
00:05:54.613 EAL: Calling mem event callback 'spdk:(nil)'
00:05:54.613 EAL: request: mp_malloc_sync
00:05:54.613 EAL: No shared files mode enabled, IPC is disabled
00:05:54.613 EAL: Heap on socket 0 was shrunk by 4MB
00:05:54.613 EAL: Trying to obtain current memory policy.
00:05:54.613 EAL: Setting policy MPOL_PREFERRED for socket 0
00:05:54.613 EAL: Restoring previous memory policy: 4
00:05:54.613 EAL: Calling mem event callback 'spdk:(nil)'
00:05:54.613 EAL: request: mp_malloc_sync
00:05:54.613 EAL: No shared files mode enabled, IPC is disabled
00:05:54.613 EAL: Heap on socket 0 was expanded by 6MB
00:05:54.613 EAL: Calling mem event callback 'spdk:(nil)'
00:05:54.613 EAL: request: mp_malloc_sync
00:05:54.613 EAL: No shared files mode enabled, IPC is disabled
00:05:54.613 EAL: Heap on socket 0 was shrunk by 6MB
00:05:54.613 EAL: Trying to obtain current memory policy.
00:05:54.613 EAL: Setting policy MPOL_PREFERRED for socket 0
00:05:54.613 EAL: Restoring previous memory policy: 4
00:05:54.613 EAL: Calling mem event callback 'spdk:(nil)'
00:05:54.613 EAL: request: mp_malloc_sync
00:05:54.613 EAL: No shared files mode enabled, IPC is disabled
00:05:54.613 EAL: Heap on socket 0 was expanded by 10MB
00:05:54.613 EAL: Calling mem event callback 'spdk:(nil)'
00:05:54.613 EAL: request: mp_malloc_sync
00:05:54.613 EAL: No shared files mode enabled, IPC is disabled
00:05:54.613 EAL: Heap on socket 0 was shrunk by 10MB
00:05:54.613 EAL: Trying to obtain current memory policy.
00:05:54.613 EAL: Setting policy MPOL_PREFERRED for socket 0
00:05:54.613 EAL: Restoring previous memory policy: 4
00:05:54.613 EAL: Calling mem event callback 'spdk:(nil)'
00:05:54.613 EAL: request: mp_malloc_sync
00:05:54.613 EAL: No shared files mode enabled, IPC is disabled
00:05:54.613 EAL: Heap on socket 0 was expanded by 18MB
00:05:54.613 EAL: Calling mem event callback 'spdk:(nil)'
00:05:54.613 EAL: request: mp_malloc_sync
00:05:54.613 EAL: No shared files mode enabled, IPC is disabled
00:05:54.613 EAL: Heap on socket 0 was shrunk by 18MB
00:05:54.613 EAL: Trying to obtain current memory policy.
00:05:54.613 EAL: Setting policy MPOL_PREFERRED for socket 0
00:05:54.613 EAL: Restoring previous memory policy: 4
00:05:54.613 EAL: Calling mem event callback 'spdk:(nil)'
00:05:54.613 EAL: request: mp_malloc_sync
00:05:54.613 EAL: No shared files mode enabled, IPC is disabled
00:05:54.613 EAL: Heap on socket 0 was expanded by 34MB
00:05:54.613 EAL: Calling mem event callback 'spdk:(nil)'
00:05:54.613 EAL: request: mp_malloc_sync
00:05:54.613 EAL: No shared files mode enabled, IPC is disabled
00:05:54.613 EAL: Heap on socket 0 was shrunk by 34MB
00:05:54.613 EAL: Trying to obtain current memory policy.
00:05:54.613 EAL: Setting policy MPOL_PREFERRED for socket 0
00:05:54.613 EAL: Restoring previous memory policy: 4
00:05:54.613 EAL: Calling mem event callback 'spdk:(nil)'
00:05:54.613 EAL: request: mp_malloc_sync
00:05:54.613 EAL: No shared files mode enabled, IPC is disabled
00:05:54.613 EAL: Heap on socket 0 was expanded by 66MB
00:05:54.613 EAL: Calling mem event callback 'spdk:(nil)'
00:05:54.613 EAL: request: mp_malloc_sync
00:05:54.613 EAL: No shared files mode enabled, IPC is disabled
00:05:54.613 EAL: Heap on socket 0 was shrunk by 66MB
00:05:54.613 EAL: Trying to obtain current memory policy.
00:05:54.613 EAL: Setting policy MPOL_PREFERRED for socket 0
00:05:54.613 EAL: Restoring previous memory policy: 4
00:05:54.613 EAL: Calling mem event callback 'spdk:(nil)'
00:05:54.613 EAL: request: mp_malloc_sync
00:05:54.613 EAL: No shared files mode enabled, IPC is disabled
00:05:54.613 EAL: Heap on socket 0 was expanded by 130MB
00:05:54.613 EAL: Calling mem event callback 'spdk:(nil)'
00:05:54.613 EAL: request: mp_malloc_sync
00:05:54.613 EAL: No shared files mode enabled, IPC is disabled
00:05:54.613 EAL: Heap on socket 0 was shrunk by 130MB
00:05:54.613 EAL: Trying to obtain current memory policy.
00:05:54.613 EAL: Setting policy MPOL_PREFERRED for socket 0
00:05:54.613 EAL: Restoring previous memory policy: 4
00:05:54.613 EAL: Calling mem event callback 'spdk:(nil)'
00:05:54.613 EAL: request: mp_malloc_sync
00:05:54.613 EAL: No shared files mode enabled, IPC is disabled
00:05:54.613 EAL: Heap on socket 0 was expanded by 258MB
00:05:54.613 EAL: Calling mem event callback 'spdk:(nil)'
00:05:54.613 EAL: request: mp_malloc_sync
00:05:54.613 EAL: No shared files mode enabled, IPC is disabled
00:05:54.613 EAL: Heap on socket 0 was shrunk by 258MB
00:05:54.613 EAL: Trying to obtain current memory policy.
00:05:54.613 EAL: Setting policy MPOL_PREFERRED for socket 0
00:05:54.874 EAL: Restoring previous memory policy: 4
00:05:54.874 EAL: Calling mem event callback 'spdk:(nil)'
00:05:54.874 EAL: request: mp_malloc_sync
00:05:54.874 EAL: No shared files mode enabled, IPC is disabled
00:05:54.874 EAL: Heap on socket 0 was expanded by 514MB
00:05:54.874 EAL: Calling mem event callback 'spdk:(nil)'
00:05:54.874 EAL: request: mp_malloc_sync
00:05:54.874 EAL: No shared files mode enabled, IPC is disabled
00:05:54.874 EAL: Heap on socket 0 was shrunk by 514MB
00:05:54.874 EAL: Trying to obtain current memory policy.
00:05:54.874 EAL: Setting policy MPOL_PREFERRED for socket 0
00:05:55.135 EAL: Restoring previous memory policy: 4
00:05:55.135 EAL: Calling mem event callback 'spdk:(nil)'
00:05:55.135 EAL: request: mp_malloc_sync
00:05:55.135 EAL: No shared files mode enabled, IPC is disabled
00:05:55.135 EAL: Heap on socket 0 was expanded by 1026MB
00:05:55.135 EAL: Calling mem event callback 'spdk:(nil)'
00:05:55.135 EAL: request: mp_malloc_sync
00:05:55.135 EAL: No shared files mode enabled, IPC is disabled
00:05:55.135 EAL: Heap on socket 0 was shrunk by 1026MB
00:05:55.135 passed
00:05:55.135
00:05:55.135 Run Summary: Type Total Ran Passed Failed Inactive
00:05:55.135 suites 1 1 n/a 0 0
00:05:55.135 tests 2 2 2 0 0
00:05:55.135 asserts 497 497 497 0 n/a
00:05:55.135
00:05:55.135 Elapsed time = 0.658 seconds
00:05:55.135 EAL: Calling mem event callback 'spdk:(nil)'
00:05:55.135 EAL: request: mp_malloc_sync
00:05:55.135 EAL: No shared files mode enabled, IPC is disabled
00:05:55.135 EAL: Heap on socket 0 was shrunk by 2MB
00:05:55.135 EAL: No shared files mode enabled, IPC is disabled
00:05:55.135 EAL: No shared files mode enabled, IPC is disabled
00:05:55.135 EAL: No shared files mode enabled, IPC is disabled
00:05:55.135
00:05:55.135 real 0m0.782s
00:05:55.135 user 0m0.405s
00:05:55.135 sys 0m0.350s
00:05:55.135 16:14:21 env.env_vtophys -- common/autotest_common.sh@1125 -- # xtrace_disable
00:05:55.135 16:14:21 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x
00:05:55.135 ************************************
00:05:55.135 END TEST env_vtophys
00:05:55.135 ************************************
00:05:55.396 16:14:21 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut
00:05:55.396 16:14:21 env -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']'
00:05:55.396 16:14:21 env -- common/autotest_common.sh@1106 -- # xtrace_disable
00:05:55.396 16:14:21 env -- common/autotest_common.sh@10 -- # set +x
00:05:55.396 ************************************
00:05:55.396 START TEST env_pci
00:05:55.396 ************************************
00:05:55.396 16:14:22 env.env_pci -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut
00:05:55.396
00:05:55.396
00:05:55.396 CUnit - A unit testing framework for C - Version 2.1-3
00:05:55.396 http://cunit.sourceforge.net/
00:05:55.396
00:05:55.396
00:05:55.396 Suite: pci
00:05:55.396 Test: pci_hook ...[2024-06-07 16:14:22.052304] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 2879945 has claimed it
00:05:55.396 EAL: Cannot find device (10000:00:01.0)
00:05:55.396 EAL: Failed to attach device on primary process
00:05:55.396 passed
00:05:55.396
00:05:55.396 Run Summary: Type Total Ran Passed Failed Inactive
00:05:55.396 suites 1 1 n/a 0 0
00:05:55.396 tests 1 1 1 0 0
00:05:55.396 asserts 25 25 25 0 n/a
00:05:55.396
00:05:55.396 Elapsed time = 0.029 seconds
00:05:55.396
00:05:55.396 real 0m0.049s
00:05:55.396 user 0m0.015s
00:05:55.396 sys 0m0.034s
00:05:55.396 16:14:22 env.env_pci -- common/autotest_common.sh@1125 -- # xtrace_disable
00:05:55.396 16:14:22 env.env_pci -- common/autotest_common.sh@10 -- # set +x
00:05:55.396 ************************************
00:05:55.396 END TEST env_pci
00:05:55.396 ************************************
00:05:55.396 16:14:22 env -- env/env.sh@14 -- # argv='-c 0x1 '
00:05:55.396 16:14:22 env -- env/env.sh@15 -- # uname
00:05:55.396 16:14:22 env -- env/env.sh@15 -- # '[' Linux = Linux ']'
00:05:55.396 16:14:22 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000
00:05:55.396 16:14:22 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
00:05:55.396 16:14:22 env -- common/autotest_common.sh@1100 -- # '[' 5 -le 1 ']'
00:05:55.396 16:14:22 env -- common/autotest_common.sh@1106 -- # xtrace_disable
00:05:55.396 16:14:22 env -- common/autotest_common.sh@10 -- # set +x
00:05:55.396 ************************************
00:05:55.396 START TEST env_dpdk_post_init
00:05:55.396 ************************************
00:05:55.396 16:14:22 env.env_dpdk_post_init -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
00:05:55.396 EAL: Detected CPU lcores: 128
00:05:55.396 EAL: Detected NUMA nodes: 2
00:05:55.396 EAL: Detected shared linkage of DPDK
00:05:55.396 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
00:05:55.396 EAL: Selected IOVA mode 'VA'
00:05:55.396 EAL: No free 2048 kB hugepages reported on node 1
00:05:55.396 EAL: VFIO support initialized
00:05:55.396 TELEMETRY: No legacy callbacks, legacy socket not created
00:05:55.656 EAL: Using IOMMU type 1 (Type 1)
00:05:55.656 EAL: Ignore mapping IO port bar(1)
00:05:55.656 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.0 (socket 0)
00:05:55.917 EAL: Ignore mapping IO port bar(1)
00:05:55.917 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.1 (socket 0)
00:05:56.177 EAL: Ignore mapping IO port bar(1)
00:05:56.177 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.2 (socket 0)
00:05:56.438 EAL: Ignore mapping IO port bar(1)
00:05:56.438 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.3 (socket 0)
00:05:56.438 EAL: Ignore mapping IO port bar(1)
00:05:56.697 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.4 (socket 0)
00:05:56.697 EAL: Ignore mapping IO port bar(1)
00:05:56.957 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.5 (socket 0)
00:05:56.957 EAL: Ignore mapping IO port bar(1)
00:05:57.218 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.6 (socket 0)
00:05:57.218 EAL: Ignore mapping IO port bar(1)
00:05:57.218 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.7 (socket 0)
00:05:57.478 EAL: Probe PCI driver: spdk_nvme (144d:a80a) device: 0000:65:00.0 (socket 0)
00:05:57.739 EAL: Ignore mapping IO port bar(1)
00:05:57.739 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.0 (socket 1)
00:05:58.000 EAL: Ignore mapping IO port bar(1)
00:05:58.000 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.1 (socket 1)
00:05:58.261 EAL: Ignore mapping IO port bar(1)
00:05:58.261 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.2 (socket 1)
00:05:58.261 EAL: Ignore mapping IO port bar(1)
00:05:58.521 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.3 (socket 1)
00:05:58.521 EAL: Ignore mapping IO port bar(1)
00:05:58.781 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.4 (socket 1)
00:05:58.781 EAL: Ignore mapping IO port bar(1)
00:05:58.781 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.5 (socket 1)
00:05:59.041 EAL: Ignore mapping IO port bar(1)
00:05:59.041 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.6 (socket 1)
00:05:59.302 EAL: Ignore mapping IO port bar(1)
00:05:59.302 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.7 (socket 1)
00:05:59.302 EAL: Releasing PCI mapped resource for 0000:65:00.0
00:05:59.302 EAL: Calling pci_unmap_resource for 0000:65:00.0 at 0x202001020000
00:05:59.563 Starting DPDK initialization...
00:05:59.563 Starting SPDK post initialization...
00:05:59.563 SPDK NVMe probe
00:05:59.563 Attaching to 0000:65:00.0
00:05:59.563 Attached to 0000:65:00.0
00:05:59.563 Cleaning up...
00:06:01.548
00:06:01.548 real 0m5.711s
00:06:01.548 user 0m0.177s
00:06:01.548 sys 0m0.077s
00:06:01.548 16:14:27 env.env_dpdk_post_init -- common/autotest_common.sh@1125 -- # xtrace_disable
00:06:01.548 16:14:27 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x
00:06:01.548 ************************************
00:06:01.548 END TEST env_dpdk_post_init
00:06:01.548 ************************************
00:06:01.548 16:14:27 env -- env/env.sh@26 -- # uname
00:06:01.548 16:14:27 env -- env/env.sh@26 -- # '[' Linux = Linux ']'
00:06:01.548 16:14:27 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks
00:06:01.548 16:14:27 env -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']'
00:06:01.548 16:14:27 env -- common/autotest_common.sh@1106 -- # xtrace_disable
00:06:01.548 16:14:27 env -- common/autotest_common.sh@10 -- # set +x
00:06:01.548 ************************************
00:06:01.548 START TEST env_mem_callbacks
00:06:01.548 ************************************
00:06:01.548 16:14:27 env.env_mem_callbacks -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks
00:06:01.548 EAL: Detected CPU lcores: 128
00:06:01.548 EAL: Detected NUMA nodes: 2
00:06:01.548 EAL: Detected shared linkage of DPDK
00:06:01.548 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
00:06:01.548 EAL: Selected IOVA mode 'VA'
00:06:01.548 EAL: No free 2048 kB hugepages reported on node 1
00:06:01.548 EAL: VFIO support initialized
00:06:01.548 TELEMETRY: No legacy callbacks, legacy socket not created
00:06:01.548
00:06:01.548
00:06:01.548 CUnit - A unit testing framework for C - Version 2.1-3
00:06:01.548 http://cunit.sourceforge.net/
00:06:01.548
00:06:01.548
00:06:01.548 Suite: memory
00:06:01.548 Test: test ...
00:06:01.548 register 0x200000200000 2097152
00:06:01.548 malloc 3145728
00:06:01.548 register 0x200000400000 4194304
00:06:01.548 buf 0x200000500000 len 3145728 PASSED
00:06:01.548 malloc 64
00:06:01.548 buf 0x2000004fff40 len 64 PASSED
00:06:01.548 malloc 4194304
00:06:01.548 register 0x200000800000 6291456
00:06:01.548 buf 0x200000a00000 len 4194304 PASSED
00:06:01.548 free 0x200000500000 3145728
00:06:01.548 free 0x2000004fff40 64
00:06:01.548 unregister 0x200000400000 4194304 PASSED
00:06:01.548 free 0x200000a00000 4194304
00:06:01.548 unregister 0x200000800000 6291456 PASSED
00:06:01.548 malloc 8388608
00:06:01.548 register 0x200000400000 10485760
00:06:01.548 buf 0x200000600000 len 8388608 PASSED
00:06:01.548 free 0x200000600000 8388608
00:06:01.548 unregister 0x200000400000 10485760 PASSED
00:06:01.548 passed
00:06:01.548
00:06:01.548 Run Summary: Type Total Ran Passed Failed Inactive
00:06:01.548 suites 1 1 n/a 0 0
00:06:01.548 tests 1 1 1 0 0
00:06:01.548 asserts 15 15 15 0 n/a
00:06:01.548
00:06:01.548 Elapsed time = 0.006 seconds
00:06:01.548
00:06:01.548 real 0m0.058s
00:06:01.548 user 0m0.018s
00:06:01.548 sys 0m0.039s
00:06:01.548 16:14:28 env.env_mem_callbacks -- common/autotest_common.sh@1125 -- # xtrace_disable
00:06:01.548 16:14:28 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x
00:06:01.548 ************************************
00:06:01.548 END TEST env_mem_callbacks
00:06:01.548 ************************************
00:06:01.548
00:06:01.548 real 0m7.289s
00:06:01.548 user 0m0.998s
00:06:01.548 sys 0m0.831s
00:06:01.548 16:14:28 env -- common/autotest_common.sh@1125 -- # xtrace_disable
00:06:01.548 16:14:28 env -- common/autotest_common.sh@10 -- # set +x
00:06:01.548 ************************************
00:06:01.548 END TEST env
00:06:01.548 ************************************
00:06:01.548 16:14:28 -- spdk/autotest.sh@169 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh
00:06:01.548 16:14:28 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']'
00:06:01.548 16:14:28 -- common/autotest_common.sh@1106 -- # xtrace_disable
00:06:01.548 16:14:28 -- common/autotest_common.sh@10 -- # set +x
00:06:01.548 ************************************
00:06:01.548 START TEST rpc
00:06:01.548 ************************************
00:06:01.548 16:14:28 rpc -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh
00:06:01.548 * Looking for test storage...
00:06:01.548 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc
00:06:01.548 16:14:28 rpc -- rpc/rpc.sh@65 -- # spdk_pid=2881340
00:06:01.548 16:14:28 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT
00:06:01.548 16:14:28 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev
00:06:01.548 16:14:28 rpc -- rpc/rpc.sh@67 -- # waitforlisten 2881340
00:06:01.548 16:14:28 rpc -- common/autotest_common.sh@830 -- # '[' -z 2881340 ']'
00:06:01.548 16:14:28 rpc -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:01.548 16:14:28 rpc -- common/autotest_common.sh@835 -- # local max_retries=100
00:06:01.548 16:14:28 rpc -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:01.548 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:01.548 16:14:28 rpc -- common/autotest_common.sh@839 -- # xtrace_disable
00:06:01.548 16:14:28 rpc -- common/autotest_common.sh@10 -- # set +x
00:06:01.548 [2024-06-07 16:14:28.284768] Starting SPDK v24.09-pre git sha1 5a57befde / DPDK 24.03.0 initialization...
00:06:01.548 [2024-06-07 16:14:28.284841] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2881340 ]
00:06:01.548 EAL: No free 2048 kB hugepages reported on node 1
00:06:01.548 [2024-06-07 16:14:28.348523] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:01.809 [2024-06-07 16:14:28.422799] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified.
00:06:01.809 [2024-06-07 16:14:28.422839] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 2881340' to capture a snapshot of events at runtime.
00:06:01.809 [2024-06-07 16:14:28.422847] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:06:01.809 [2024-06-07 16:14:28.422853] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running.
00:06:01.809 [2024-06-07 16:14:28.422859] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid2881340 for offline analysis/debug.
00:06:01.809 [2024-06-07 16:14:28.422879] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0
00:06:02.380 16:14:29 rpc -- common/autotest_common.sh@859 -- # (( i == 0 ))
00:06:02.380 16:14:29 rpc -- common/autotest_common.sh@863 -- # return 0
00:06:02.381 16:14:29 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc
00:06:02.381 16:14:29 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc
00:06:02.381 16:14:29 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd
00:06:02.381 16:14:29 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity
00:06:02.381 16:14:29 rpc -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']'
00:06:02.381 16:14:29 rpc -- common/autotest_common.sh@1106 -- # xtrace_disable
00:06:02.381 16:14:29 rpc -- common/autotest_common.sh@10 -- # set +x
00:06:02.381 ************************************
00:06:02.381 START TEST rpc_integrity
00:06:02.381 ************************************
00:06:02.381 16:14:29 rpc.rpc_integrity -- common/autotest_common.sh@1124 -- # rpc_integrity
00:06:02.381 16:14:29 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs
00:06:02.381 16:14:29 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable
00:06:02.381 16:14:29 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:06:02.381 16:14:29 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:06:02.381 16:14:29 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]'
00:06:02.381 16:14:29 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length
00:06:02.381 16:14:29 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']'
00:06:02.381 16:14:29 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512
00:06:02.381 16:14:29 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable
00:06:02.381 16:14:29 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:06:02.381 16:14:29 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:06:02.381 16:14:29 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0
00:06:02.381 16:14:29 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs
00:06:02.381 16:14:29 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable
00:06:02.381 16:14:29 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:06:02.381 16:14:29 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:06:02.381 16:14:29 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[
00:06:02.381 {
00:06:02.381 "name": "Malloc0",
00:06:02.381 "aliases": [
00:06:02.381 "20e463a8-e2eb-43f3-b575-5ebb09afd691"
00:06:02.381 ],
00:06:02.381 "product_name": "Malloc disk",
00:06:02.381 "block_size": 512,
00:06:02.381 "num_blocks": 16384,
00:06:02.381 "uuid": "20e463a8-e2eb-43f3-b575-5ebb09afd691",
00:06:02.381 "assigned_rate_limits": {
00:06:02.381 "rw_ios_per_sec": 0,
00:06:02.381 "rw_mbytes_per_sec": 0,
00:06:02.381 "r_mbytes_per_sec": 0,
00:06:02.381 "w_mbytes_per_sec": 0
00:06:02.381 },
00:06:02.381 "claimed": false,
00:06:02.381 "zoned": false,
00:06:02.381 "supported_io_types": {
00:06:02.381 "read": true,
00:06:02.381 "write": true,
00:06:02.381 "unmap": true,
00:06:02.381 "write_zeroes": true,
00:06:02.381 "flush": true,
00:06:02.381 "reset": true,
00:06:02.381 "compare": false,
00:06:02.381 "compare_and_write": false,
00:06:02.381 "abort": true,
00:06:02.381 "nvme_admin": false,
00:06:02.381 "nvme_io": false
00:06:02.381 },
00:06:02.381 "memory_domains": [
00:06:02.381 {
00:06:02.381 "dma_device_id": "system",
00:06:02.381 "dma_device_type": 1
00:06:02.381 },
00:06:02.381 {
00:06:02.381 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:06:02.381 "dma_device_type": 2
00:06:02.381 }
00:06:02.381 ],
00:06:02.381 "driver_specific": {}
00:06:02.381 }
00:06:02.381 ]'
00:06:02.381 16:14:29 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length
00:06:02.381 16:14:29 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']'
00:06:02.381 16:14:29 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0
00:06:02.381 16:14:29 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable
00:06:02.381 16:14:29 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:06:02.381 [2024-06-07 16:14:29.199639] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0
00:06:02.381 [2024-06-07 16:14:29.199670] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:06:02.381 [2024-06-07 16:14:29.199682] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1c8dbe0
00:06:02.381 [2024-06-07 16:14:29.199689] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed
00:06:02.381 [2024-06-07 16:14:29.201008] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:06:02.381 [2024-06-07 16:14:29.201028] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0
00:06:02.381 Passthru0
00:06:02.381 16:14:29 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:06:02.381 16:14:29 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs
00:06:02.381 16:14:29 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable
00:06:02.381 16:14:29 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:06:02.381 16:14:29 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:06:02.381 16:14:29 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[
00:06:02.381 {
00:06:02.381 "name": "Malloc0",
00:06:02.381 "aliases": [
00:06:02.381 "20e463a8-e2eb-43f3-b575-5ebb09afd691"
00:06:02.381 ],
00:06:02.381 "product_name": "Malloc disk",
00:06:02.381 "block_size": 512,
00:06:02.381 "num_blocks": 16384,
00:06:02.381 "uuid": "20e463a8-e2eb-43f3-b575-5ebb09afd691",
00:06:02.381 "assigned_rate_limits": {
00:06:02.381 "rw_ios_per_sec": 0,
00:06:02.381 "rw_mbytes_per_sec": 0,
00:06:02.381 "r_mbytes_per_sec": 0,
00:06:02.381 "w_mbytes_per_sec": 0
00:06:02.381 },
00:06:02.381 "claimed": true,
00:06:02.381 "claim_type": "exclusive_write",
00:06:02.381 "zoned": false,
00:06:02.381 "supported_io_types": {
00:06:02.381 "read": true,
00:06:02.381 "write": true,
00:06:02.381 "unmap": true,
00:06:02.381 "write_zeroes": true,
00:06:02.381 "flush": true,
00:06:02.381 "reset": true,
00:06:02.381 "compare": false,
00:06:02.381 "compare_and_write": false,
00:06:02.381 "abort": true,
00:06:02.381 "nvme_admin": false,
00:06:02.381 "nvme_io": false
00:06:02.381 },
00:06:02.381 "memory_domains": [
00:06:02.381 {
00:06:02.381 "dma_device_id": "system",
00:06:02.381 "dma_device_type": 1
00:06:02.381 },
00:06:02.381 {
00:06:02.381 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:06:02.381 "dma_device_type": 2
00:06:02.381 }
00:06:02.381 ],
00:06:02.381 "driver_specific": {}
00:06:02.381 },
00:06:02.381 {
00:06:02.381 "name": "Passthru0",
00:06:02.381 "aliases": [
00:06:02.381 "011fd69c-3a62-5e26-b1c1-3e70a25d96d1"
00:06:02.381 ],
00:06:02.381 "product_name": "passthru",
00:06:02.381 "block_size": 512,
00:06:02.381 "num_blocks": 16384,
00:06:02.381 "uuid": "011fd69c-3a62-5e26-b1c1-3e70a25d96d1",
00:06:02.381 "assigned_rate_limits": {
00:06:02.381 "rw_ios_per_sec": 0,
00:06:02.381 "rw_mbytes_per_sec": 0,
00:06:02.381 "r_mbytes_per_sec": 0,
00:06:02.381 "w_mbytes_per_sec": 0
00:06:02.381 },
00:06:02.381 "claimed": false,
00:06:02.381 "zoned": false,
00:06:02.381 "supported_io_types": {
00:06:02.381 "read": true,
00:06:02.381 "write": true,
00:06:02.381 "unmap": true,
00:06:02.381 "write_zeroes": true,
00:06:02.381 "flush": true,
00:06:02.381 "reset": true,
00:06:02.381 "compare": false,
00:06:02.381 "compare_and_write": false,
00:06:02.381 "abort": true,
00:06:02.381 "nvme_admin": false,
00:06:02.381 "nvme_io": false
00:06:02.381 },
00:06:02.381 "memory_domains": [
00:06:02.381 {
00:06:02.381 "dma_device_id": "system",
00:06:02.381 "dma_device_type": 1
00:06:02.381 },
00:06:02.381 {
00:06:02.381 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:06:02.381 "dma_device_type": 2
00:06:02.381 }
00:06:02.381 ],
00:06:02.381 "driver_specific": {
00:06:02.381 "passthru": {
00:06:02.381 "name": "Passthru0",
00:06:02.381 "base_bdev_name": "Malloc0"
00:06:02.381 }
00:06:02.381 }
00:06:02.381 }
00:06:02.381 ]'
00:06:02.381 16:14:29 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length
00:06:02.642 16:14:29 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']'
00:06:02.642 16:14:29 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0
00:06:02.642 16:14:29 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable
00:06:02.642 16:14:29 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:06:02.642 16:14:29 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:06:02.642 16:14:29 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0
00:06:02.642 16:14:29 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable
00:06:02.642 16:14:29 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:06:02.643 16:14:29 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:06:02.643 16:14:29 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs
00:06:02.643 16:14:29 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable
00:06:02.643 16:14:29 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:06:02.643 16:14:29 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:06:02.643 16:14:29 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]'
00:06:02.643 16:14:29 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length
00:06:02.643 16:14:29 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']'
00:06:02.643
00:06:02.643 real 0m0.286s
00:06:02.643 user 0m0.186s
00:06:02.643 sys 0m0.037s
00:06:02.643 16:14:29 rpc.rpc_integrity -- common/autotest_common.sh@1125 -- # xtrace_disable
00:06:02.643 16:14:29 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:06:02.643 ************************************
00:06:02.643 END TEST rpc_integrity
00:06:02.643 ************************************
00:06:02.643 16:14:29 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins
00:06:02.643 16:14:29 rpc -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']'
00:06:02.643 16:14:29 rpc -- common/autotest_common.sh@1106 -- # xtrace_disable
00:06:02.643 16:14:29 rpc -- common/autotest_common.sh@10 -- # set +x
00:06:02.643 ************************************
00:06:02.643 START TEST rpc_plugins
00:06:02.643 ************************************
00:06:02.643 16:14:29 rpc.rpc_plugins -- common/autotest_common.sh@1124 -- # rpc_plugins
00:06:02.643 16:14:29 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc
00:06:02.643 16:14:29 rpc.rpc_plugins -- common/autotest_common.sh@560 -- # xtrace_disable
00:06:02.643 16:14:29 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:06:02.643 16:14:29 rpc.rpc_plugins -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:06:02.643 16:14:29 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1
00:06:02.643 16:14:29 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs
00:06:02.643 16:14:29 rpc.rpc_plugins -- common/autotest_common.sh@560 -- # xtrace_disable
00:06:02.643 16:14:29 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:06:02.643 16:14:29 rpc.rpc_plugins -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:06:02.643 16:14:29 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[
00:06:02.643 {
00:06:02.643 "name": "Malloc1",
00:06:02.643 "aliases": [
00:06:02.643 "41392f39-13ee-4eb8-9e9b-9eb4e52e3762"
00:06:02.643 ],
00:06:02.643 "product_name": "Malloc disk",
00:06:02.643 "block_size": 4096,
00:06:02.643 "num_blocks": 256,
00:06:02.643 "uuid": "41392f39-13ee-4eb8-9e9b-9eb4e52e3762",
00:06:02.643 "assigned_rate_limits": {
00:06:02.643 "rw_ios_per_sec": 0,
00:06:02.643 "rw_mbytes_per_sec": 0,
00:06:02.643 "r_mbytes_per_sec": 0,
00:06:02.643 "w_mbytes_per_sec": 0
00:06:02.643 },
00:06:02.643 "claimed": false,
00:06:02.643 "zoned": false,
00:06:02.643 "supported_io_types": {
00:06:02.643 "read": true,
00:06:02.643 "write": true,
00:06:02.643 "unmap": true,
00:06:02.643 "write_zeroes": true,
00:06:02.643 "flush": true,
00:06:02.643 "reset": true,
00:06:02.643 "compare": false,
00:06:02.643 "compare_and_write": false,
00:06:02.643 "abort": true,
00:06:02.643 "nvme_admin": false,
00:06:02.643 "nvme_io": false
00:06:02.643 },
00:06:02.643 "memory_domains": [
00:06:02.643 {
00:06:02.643 "dma_device_id": "system",
00:06:02.643 "dma_device_type": 1
00:06:02.643 },
00:06:02.643 {
00:06:02.643 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:06:02.643 "dma_device_type": 2
00:06:02.643 }
00:06:02.643 ],
00:06:02.643 "driver_specific": {}
00:06:02.643 }
00:06:02.643 ]'
00:06:02.643 16:14:29 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length
00:06:02.643 16:14:29 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']'
00:06:02.643 16:14:29 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1
00:06:02.643 16:14:29 rpc.rpc_plugins -- common/autotest_common.sh@560 -- # xtrace_disable
00:06:02.643 16:14:29 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:06:02.904 16:14:29 rpc.rpc_plugins -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:06:02.904 16:14:29 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs
00:06:02.904 16:14:29 rpc.rpc_plugins -- common/autotest_common.sh@560 -- # xtrace_disable
00:06:02.904 16:14:29 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:06:02.904 16:14:29 rpc.rpc_plugins -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:06:02.904 16:14:29 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]'
00:06:02.904 16:14:29 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length
00:06:02.904 16:14:29 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']'
00:06:02.904
00:06:02.904 real 0m0.146s
00:06:02.904 user 0m0.096s
00:06:02.904 sys 0m0.015s
00:06:02.904 16:14:29 rpc.rpc_plugins -- common/autotest_common.sh@1125 -- # xtrace_disable
00:06:02.904 16:14:29 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:06:02.904 ************************************
00:06:02.904 END TEST rpc_plugins
00:06:02.904 ************************************
00:06:02.904 16:14:29 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test
00:06:02.904 16:14:29 rpc -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']'
00:06:02.904 16:14:29 rpc -- common/autotest_common.sh@1106 -- # xtrace_disable
00:06:02.904 16:14:29 rpc -- common/autotest_common.sh@10 -- # set +x
00:06:02.904 ************************************
00:06:02.904 START TEST rpc_trace_cmd_test
00:06:02.904 ************************************
00:06:02.904 16:14:29 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1124 -- # rpc_trace_cmd_test
00:06:02.904 16:14:29 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info
00:06:02.904 16:14:29 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info
00:06:02.904 16:14:29 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@560 -- # xtrace_disable
00:06:02.904 16:14:29 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x
00:06:02.904 16:14:29 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:06:02.904 16:14:29 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{
00:06:02.904 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid2881340",
00:06:02.904 "tpoint_group_mask": "0x8",
00:06:02.904 "iscsi_conn": {
00:06:02.904 "mask": "0x2",
00:06:02.904 "tpoint_mask": "0x0"
00:06:02.904 },
00:06:02.904 "scsi": {
00:06:02.904 "mask": "0x4",
00:06:02.904 "tpoint_mask": "0x0"
00:06:02.904 },
00:06:02.904 "bdev": {
00:06:02.904 "mask": "0x8",
00:06:02.904 "tpoint_mask": "0xffffffffffffffff"
00:06:02.904 },
00:06:02.904 "nvmf_rdma": {
00:06:02.904 "mask": "0x10",
00:06:02.904 "tpoint_mask": "0x0"
00:06:02.904 },
00:06:02.904 "nvmf_tcp": {
00:06:02.904 "mask": "0x20",
00:06:02.904 "tpoint_mask": "0x0"
00:06:02.904 },
00:06:02.904 "ftl": {
00:06:02.904 "mask": "0x40",
00:06:02.904 "tpoint_mask": "0x0"
00:06:02.904 },
00:06:02.904 "blobfs": {
00:06:02.904 "mask": "0x80",
00:06:02.904 "tpoint_mask": "0x0"
00:06:02.904 },
00:06:02.904 "dsa": {
00:06:02.904 "mask": "0x200",
00:06:02.904 "tpoint_mask": "0x0"
00:06:02.904 },
00:06:02.904 "thread": {
00:06:02.904 "mask": "0x400",
00:06:02.904 "tpoint_mask": "0x0"
00:06:02.904 },
00:06:02.904 "nvme_pcie": {
00:06:02.904 "mask": "0x800",
00:06:02.904 "tpoint_mask": "0x0"
00:06:02.904 },
00:06:02.904 "iaa": {
00:06:02.904 "mask": "0x1000",
00:06:02.904 "tpoint_mask": "0x0"
00:06:02.904 },
00:06:02.904 "nvme_tcp": {
00:06:02.904 "mask": "0x2000",
00:06:02.904 "tpoint_mask": "0x0"
00:06:02.904 },
00:06:02.904 "bdev_nvme": {
00:06:02.904 "mask": "0x4000",
00:06:02.904 "tpoint_mask": "0x0"
00:06:02.904 },
00:06:02.904 "sock": {
00:06:02.904 "mask": "0x8000",
00:06:02.904 "tpoint_mask": "0x0"
00:06:02.904 }
00:06:02.904 }'
00:06:02.904 16:14:29 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length
00:06:02.905 16:14:29 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']'
00:06:02.905 16:14:29 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")'
00:06:02.905 16:14:29 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']'
00:06:02.905 16:14:29 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")'
00:06:03.165 16:14:29 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']'
00:06:03.165 16:14:29 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")'
00:06:03.165 16:14:29 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']'
00:06:03.165 16:14:29 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask
00:06:03.165 16:14:29 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']'
00:06:03.165
00:06:03.165 real 0m0.246s
00:06:03.165 user 0m0.213s
00:06:03.165 sys 0m0.025s
00:06:03.165 16:14:29 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1125 -- # xtrace_disable
00:06:03.165 16:14:29 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x
00:06:03.165 ************************************
00:06:03.165 END TEST rpc_trace_cmd_test
00:06:03.165 ************************************
00:06:03.165 16:14:29 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]]
00:06:03.165 16:14:29 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd
00:06:03.165 16:14:29 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity
00:06:03.165 16:14:29 rpc -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']'
00:06:03.165 16:14:29 rpc -- common/autotest_common.sh@1106 -- # xtrace_disable
00:06:03.165 16:14:29 rpc -- common/autotest_common.sh@10 -- # set +x
00:06:03.165 ************************************
00:06:03.165 START TEST rpc_daemon_integrity
00:06:03.165 ************************************
00:06:03.165 16:14:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1124 -- # rpc_integrity
00:06:03.165 16:14:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs
00:06:03.165 16:14:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable
00:06:03.165 16:14:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:06:03.165 16:14:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:06:03.165 16:14:29
rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:06:03.165 16:14:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:06:03.165 16:14:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:06:03.427 16:14:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:06:03.427 16:14:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:03.427 16:14:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:03.427 16:14:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:03.427 16:14:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:06:03.427 16:14:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:06:03.427 16:14:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:03.427 16:14:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:03.427 16:14:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:03.427 16:14:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:03.427 { 00:06:03.427 "name": "Malloc2", 00:06:03.427 "aliases": [ 00:06:03.427 "ff4e8239-8c6e-45c7-9102-c6e69fcbf55f" 00:06:03.427 ], 00:06:03.427 "product_name": "Malloc disk", 00:06:03.427 "block_size": 512, 00:06:03.427 "num_blocks": 16384, 00:06:03.427 "uuid": "ff4e8239-8c6e-45c7-9102-c6e69fcbf55f", 00:06:03.427 "assigned_rate_limits": { 00:06:03.427 "rw_ios_per_sec": 0, 00:06:03.427 "rw_mbytes_per_sec": 0, 00:06:03.427 "r_mbytes_per_sec": 0, 00:06:03.427 "w_mbytes_per_sec": 0 00:06:03.427 }, 00:06:03.427 "claimed": false, 00:06:03.427 "zoned": false, 00:06:03.427 "supported_io_types": { 00:06:03.427 "read": true, 00:06:03.427 "write": true, 00:06:03.427 "unmap": true, 00:06:03.427 "write_zeroes": true, 00:06:03.427 "flush": true, 00:06:03.427 "reset": true, 00:06:03.427 "compare": false, 00:06:03.427 "compare_and_write": 
false, 00:06:03.427 "abort": true, 00:06:03.427 "nvme_admin": false, 00:06:03.427 "nvme_io": false 00:06:03.427 }, 00:06:03.427 "memory_domains": [ 00:06:03.427 { 00:06:03.427 "dma_device_id": "system", 00:06:03.427 "dma_device_type": 1 00:06:03.427 }, 00:06:03.427 { 00:06:03.427 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:03.427 "dma_device_type": 2 00:06:03.427 } 00:06:03.427 ], 00:06:03.427 "driver_specific": {} 00:06:03.427 } 00:06:03.427 ]' 00:06:03.427 16:14:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:06:03.427 16:14:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:03.427 16:14:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:06:03.427 16:14:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:03.427 16:14:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:03.427 [2024-06-07 16:14:30.098057] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:06:03.427 [2024-06-07 16:14:30.098087] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:03.427 [2024-06-07 16:14:30.098101] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1c854b0 00:06:03.427 [2024-06-07 16:14:30.098109] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:03.427 [2024-06-07 16:14:30.099323] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:03.427 [2024-06-07 16:14:30.099342] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:06:03.427 Passthru0 00:06:03.427 16:14:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:03.427 16:14:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:06:03.427 16:14:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:03.427 16:14:30 
rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:03.427 16:14:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:03.427 16:14:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:03.427 { 00:06:03.427 "name": "Malloc2", 00:06:03.427 "aliases": [ 00:06:03.427 "ff4e8239-8c6e-45c7-9102-c6e69fcbf55f" 00:06:03.427 ], 00:06:03.427 "product_name": "Malloc disk", 00:06:03.427 "block_size": 512, 00:06:03.427 "num_blocks": 16384, 00:06:03.427 "uuid": "ff4e8239-8c6e-45c7-9102-c6e69fcbf55f", 00:06:03.427 "assigned_rate_limits": { 00:06:03.427 "rw_ios_per_sec": 0, 00:06:03.427 "rw_mbytes_per_sec": 0, 00:06:03.427 "r_mbytes_per_sec": 0, 00:06:03.427 "w_mbytes_per_sec": 0 00:06:03.427 }, 00:06:03.427 "claimed": true, 00:06:03.427 "claim_type": "exclusive_write", 00:06:03.427 "zoned": false, 00:06:03.427 "supported_io_types": { 00:06:03.427 "read": true, 00:06:03.427 "write": true, 00:06:03.427 "unmap": true, 00:06:03.427 "write_zeroes": true, 00:06:03.427 "flush": true, 00:06:03.427 "reset": true, 00:06:03.427 "compare": false, 00:06:03.427 "compare_and_write": false, 00:06:03.427 "abort": true, 00:06:03.427 "nvme_admin": false, 00:06:03.427 "nvme_io": false 00:06:03.427 }, 00:06:03.427 "memory_domains": [ 00:06:03.427 { 00:06:03.427 "dma_device_id": "system", 00:06:03.427 "dma_device_type": 1 00:06:03.427 }, 00:06:03.427 { 00:06:03.427 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:03.427 "dma_device_type": 2 00:06:03.427 } 00:06:03.427 ], 00:06:03.427 "driver_specific": {} 00:06:03.427 }, 00:06:03.427 { 00:06:03.427 "name": "Passthru0", 00:06:03.427 "aliases": [ 00:06:03.427 "3fc15729-a378-5e7d-868a-2abc007aeff6" 00:06:03.427 ], 00:06:03.427 "product_name": "passthru", 00:06:03.427 "block_size": 512, 00:06:03.427 "num_blocks": 16384, 00:06:03.427 "uuid": "3fc15729-a378-5e7d-868a-2abc007aeff6", 00:06:03.427 "assigned_rate_limits": { 00:06:03.427 "rw_ios_per_sec": 0, 00:06:03.427 "rw_mbytes_per_sec": 0, 
00:06:03.427 "r_mbytes_per_sec": 0, 00:06:03.427 "w_mbytes_per_sec": 0 00:06:03.427 }, 00:06:03.427 "claimed": false, 00:06:03.427 "zoned": false, 00:06:03.427 "supported_io_types": { 00:06:03.427 "read": true, 00:06:03.427 "write": true, 00:06:03.427 "unmap": true, 00:06:03.427 "write_zeroes": true, 00:06:03.427 "flush": true, 00:06:03.427 "reset": true, 00:06:03.427 "compare": false, 00:06:03.427 "compare_and_write": false, 00:06:03.427 "abort": true, 00:06:03.428 "nvme_admin": false, 00:06:03.428 "nvme_io": false 00:06:03.428 }, 00:06:03.428 "memory_domains": [ 00:06:03.428 { 00:06:03.428 "dma_device_id": "system", 00:06:03.428 "dma_device_type": 1 00:06:03.428 }, 00:06:03.428 { 00:06:03.428 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:03.428 "dma_device_type": 2 00:06:03.428 } 00:06:03.428 ], 00:06:03.428 "driver_specific": { 00:06:03.428 "passthru": { 00:06:03.428 "name": "Passthru0", 00:06:03.428 "base_bdev_name": "Malloc2" 00:06:03.428 } 00:06:03.428 } 00:06:03.428 } 00:06:03.428 ]' 00:06:03.428 16:14:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:06:03.428 16:14:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:03.428 16:14:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:03.428 16:14:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:03.428 16:14:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:03.428 16:14:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:03.428 16:14:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:06:03.428 16:14:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:03.428 16:14:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:03.428 16:14:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:03.428 16:14:30 
rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:06:03.428 16:14:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:03.428 16:14:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:03.428 16:14:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:03.428 16:14:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:06:03.428 16:14:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:06:03.428 16:14:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:03.428 00:06:03.428 real 0m0.292s 00:06:03.428 user 0m0.190s 00:06:03.428 sys 0m0.037s 00:06:03.428 16:14:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:03.428 16:14:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:03.428 ************************************ 00:06:03.428 END TEST rpc_daemon_integrity 00:06:03.428 ************************************ 00:06:03.689 16:14:30 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:06:03.689 16:14:30 rpc -- rpc/rpc.sh@84 -- # killprocess 2881340 00:06:03.689 16:14:30 rpc -- common/autotest_common.sh@949 -- # '[' -z 2881340 ']' 00:06:03.689 16:14:30 rpc -- common/autotest_common.sh@953 -- # kill -0 2881340 00:06:03.689 16:14:30 rpc -- common/autotest_common.sh@954 -- # uname 00:06:03.689 16:14:30 rpc -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:06:03.689 16:14:30 rpc -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2881340 00:06:03.689 16:14:30 rpc -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:06:03.689 16:14:30 rpc -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:06:03.689 16:14:30 rpc -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2881340' 00:06:03.689 killing process with pid 2881340 00:06:03.689 16:14:30 rpc -- common/autotest_common.sh@968 -- # kill 2881340 
00:06:03.689 16:14:30 rpc -- common/autotest_common.sh@973 -- # wait 2881340 00:06:03.949 00:06:03.949 real 0m2.419s 00:06:03.949 user 0m3.202s 00:06:03.949 sys 0m0.656s 00:06:03.949 16:14:30 rpc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:03.949 16:14:30 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:03.949 ************************************ 00:06:03.949 END TEST rpc 00:06:03.949 ************************************ 00:06:03.949 16:14:30 -- spdk/autotest.sh@170 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:06:03.949 16:14:30 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:06:03.949 16:14:30 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:03.949 16:14:30 -- common/autotest_common.sh@10 -- # set +x 00:06:03.949 ************************************ 00:06:03.949 START TEST skip_rpc 00:06:03.949 ************************************ 00:06:03.949 16:14:30 skip_rpc -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:06:03.949 * Looking for test storage... 
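The rpc_daemon_integrity run above created its bdev with `rpc_cmd bdev_malloc_create 8 512`, and the subsequent `bdev_get_bdevs` dump reports `"block_size": 512` and `"num_blocks": 16384`. A minimal sketch of the arithmetic connecting the two (the variable names are illustrative, not taken from rpc.sh):

```shell
#!/usr/bin/env bash
# bdev_malloc_create 8 512 asks for an 8 MiB malloc bdev with 512-byte
# blocks; bdev_get_bdevs then reports block_size=512 and num_blocks=16384.
block_size=512
num_blocks=16384
size_mib=$(( block_size * num_blocks / 1024 / 1024 ))
echo "Malloc2: ${num_blocks} blocks x ${block_size} B = ${size_mib} MiB"
[ "$size_mib" -eq 8 ] && echo "matches bdev_malloc_create 8 512"
```

The same relationship holds for the plugin-created Malloc1 earlier in the log (4096-byte blocks, 256 blocks = 1 MiB).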
00:06:03.949 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:06:03.949 16:14:30 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:06:03.949 16:14:30 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:06:03.949 16:14:30 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:06:03.949 16:14:30 skip_rpc -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:06:03.949 16:14:30 skip_rpc -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:03.949 16:14:30 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:03.949 ************************************ 00:06:03.949 START TEST skip_rpc 00:06:03.949 ************************************ 00:06:03.949 16:14:30 skip_rpc.skip_rpc -- common/autotest_common.sh@1124 -- # test_skip_rpc 00:06:03.949 16:14:30 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=2881918 00:06:03.949 16:14:30 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:03.949 16:14:30 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:06:03.949 16:14:30 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:06:04.211 [2024-06-07 16:14:30.811446] Starting SPDK v24.09-pre git sha1 5a57befde / DPDK 24.03.0 initialization... 
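The rpc_trace_cmd_test earlier in this log started the target with tpoint_group_mask 0x8 and then used `trace_get_info` to confirm that the bdev group (whose `"mask"` is `"0x8"` in the dump) is the one with a non-zero `tpoint_mask`. That check reduces to a bitwise test, sketched here with the values copied from that dump:

```shell
#!/usr/bin/env bash
# Values from the trace_get_info output above: the target was started with
# tpoint_group_mask 0x8, and "bdev" is the group whose mask is 0x8.
tpoint_group_mask=0x8
bdev_group_mask=0x8
bdev_tpoint_mask=0xffffffffffffffff
# the bdev group is enabled iff its bit is set in the group mask
if (( tpoint_group_mask & bdev_group_mask )); then
  echo "bdev tracepoints enabled"
fi
# rpc.sh@47 additionally asserts the per-tpoint mask is non-zero
[ "$bdev_tpoint_mask" != 0x0 ] && echo "bdev tpoint_mask is non-zero"
```

All other groups in the dump (iscsi_conn 0x2, scsi 0x4, nvmf_tcp 0x20, ...) fail the same bit test against 0x8, which is why only bdev shows 0xffffffffffffffff.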
00:06:04.211 [2024-06-07 16:14:30.811498] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2881918 ] 00:06:04.211 EAL: No free 2048 kB hugepages reported on node 1 00:06:04.211 [2024-06-07 16:14:30.873789] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:04.211 [2024-06-07 16:14:30.947714] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.499 16:14:35 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:06:09.499 16:14:35 skip_rpc.skip_rpc -- common/autotest_common.sh@649 -- # local es=0 00:06:09.499 16:14:35 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd spdk_get_version 00:06:09.499 16:14:35 skip_rpc.skip_rpc -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:06:09.499 16:14:35 skip_rpc.skip_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:09.499 16:14:35 skip_rpc.skip_rpc -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:06:09.499 16:14:35 skip_rpc.skip_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:09.499 16:14:35 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # rpc_cmd spdk_get_version 00:06:09.499 16:14:35 skip_rpc.skip_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:09.499 16:14:35 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:09.499 16:14:35 skip_rpc.skip_rpc -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:06:09.499 16:14:35 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # es=1 00:06:09.499 16:14:35 skip_rpc.skip_rpc -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:06:09.499 16:14:35 skip_rpc.skip_rpc -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:06:09.499 16:14:35 skip_rpc.skip_rpc -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:06:09.499 
16:14:35 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:06:09.499 16:14:35 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 2881918 00:06:09.499 16:14:35 skip_rpc.skip_rpc -- common/autotest_common.sh@949 -- # '[' -z 2881918 ']' 00:06:09.499 16:14:35 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # kill -0 2881918 00:06:09.499 16:14:35 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # uname 00:06:09.499 16:14:35 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:06:09.499 16:14:35 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2881918 00:06:09.499 16:14:35 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:06:09.499 16:14:35 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:06:09.499 16:14:35 skip_rpc.skip_rpc -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2881918' 00:06:09.499 killing process with pid 2881918 00:06:09.499 16:14:35 skip_rpc.skip_rpc -- common/autotest_common.sh@968 -- # kill 2881918 00:06:09.499 16:14:35 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # wait 2881918 00:06:09.499 00:06:09.499 real 0m5.277s 00:06:09.499 user 0m5.093s 00:06:09.499 sys 0m0.218s 00:06:09.499 16:14:36 skip_rpc.skip_rpc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:09.499 16:14:36 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:09.499 ************************************ 00:06:09.499 END TEST skip_rpc 00:06:09.499 ************************************ 00:06:09.499 16:14:36 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:06:09.499 16:14:36 skip_rpc -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:06:09.499 16:14:36 skip_rpc -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:09.499 16:14:36 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:09.499 
************************************ 00:06:09.499 START TEST skip_rpc_with_json 00:06:09.499 ************************************ 00:06:09.499 16:14:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1124 -- # test_skip_rpc_with_json 00:06:09.499 16:14:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:06:09.499 16:14:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=2882958 00:06:09.499 16:14:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:09.499 16:14:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 2882958 00:06:09.499 16:14:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:09.499 16:14:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@830 -- # '[' -z 2882958 ']' 00:06:09.499 16:14:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:09.499 16:14:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # local max_retries=100 00:06:09.499 16:14:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:09.499 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:09.499 16:14:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # xtrace_disable 00:06:09.499 16:14:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:09.499 [2024-06-07 16:14:36.168795] Starting SPDK v24.09-pre git sha1 5a57befde / DPDK 24.03.0 initialization... 
00:06:09.499 [2024-06-07 16:14:36.168846] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2882958 ] 00:06:09.499 EAL: No free 2048 kB hugepages reported on node 1 00:06:09.500 [2024-06-07 16:14:36.227616] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:09.500 [2024-06-07 16:14:36.291667] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.445 16:14:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:06:10.445 16:14:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@863 -- # return 0 00:06:10.445 16:14:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:06:10.445 16:14:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:10.445 16:14:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:10.445 [2024-06-07 16:14:36.938054] nvmf_rpc.c:2560:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:06:10.445 request: 00:06:10.445 { 00:06:10.445 "trtype": "tcp", 00:06:10.445 "method": "nvmf_get_transports", 00:06:10.445 "req_id": 1 00:06:10.445 } 00:06:10.445 Got JSON-RPC error response 00:06:10.445 response: 00:06:10.445 { 00:06:10.445 "code": -19, 00:06:10.445 "message": "No such device" 00:06:10.445 } 00:06:10.445 16:14:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:06:10.445 16:14:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:06:10.445 16:14:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:10.445 16:14:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:10.445 [2024-06-07 16:14:36.950169] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP 
Transport Init *** 00:06:10.445 16:14:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:10.445 16:14:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:06:10.445 16:14:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:10.445 16:14:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:10.445 16:14:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:10.445 16:14:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:06:10.445 { 00:06:10.445 "subsystems": [ 00:06:10.445 { 00:06:10.445 "subsystem": "vfio_user_target", 00:06:10.445 "config": null 00:06:10.445 }, 00:06:10.445 { 00:06:10.445 "subsystem": "keyring", 00:06:10.445 "config": [] 00:06:10.445 }, 00:06:10.445 { 00:06:10.445 "subsystem": "iobuf", 00:06:10.445 "config": [ 00:06:10.445 { 00:06:10.445 "method": "iobuf_set_options", 00:06:10.445 "params": { 00:06:10.445 "small_pool_count": 8192, 00:06:10.445 "large_pool_count": 1024, 00:06:10.445 "small_bufsize": 8192, 00:06:10.445 "large_bufsize": 135168 00:06:10.445 } 00:06:10.445 } 00:06:10.445 ] 00:06:10.446 }, 00:06:10.446 { 00:06:10.446 "subsystem": "sock", 00:06:10.446 "config": [ 00:06:10.446 { 00:06:10.446 "method": "sock_set_default_impl", 00:06:10.446 "params": { 00:06:10.446 "impl_name": "posix" 00:06:10.446 } 00:06:10.446 }, 00:06:10.446 { 00:06:10.446 "method": "sock_impl_set_options", 00:06:10.446 "params": { 00:06:10.446 "impl_name": "ssl", 00:06:10.446 "recv_buf_size": 4096, 00:06:10.446 "send_buf_size": 4096, 00:06:10.446 "enable_recv_pipe": true, 00:06:10.446 "enable_quickack": false, 00:06:10.446 "enable_placement_id": 0, 00:06:10.446 "enable_zerocopy_send_server": true, 00:06:10.446 "enable_zerocopy_send_client": false, 00:06:10.446 "zerocopy_threshold": 0, 00:06:10.446 "tls_version": 0, 
00:06:10.446 "enable_ktls": false, 00:06:10.446 "enable_new_session_tickets": true 00:06:10.446 } 00:06:10.446 }, 00:06:10.446 { 00:06:10.446 "method": "sock_impl_set_options", 00:06:10.446 "params": { 00:06:10.446 "impl_name": "posix", 00:06:10.446 "recv_buf_size": 2097152, 00:06:10.446 "send_buf_size": 2097152, 00:06:10.446 "enable_recv_pipe": true, 00:06:10.446 "enable_quickack": false, 00:06:10.446 "enable_placement_id": 0, 00:06:10.446 "enable_zerocopy_send_server": true, 00:06:10.446 "enable_zerocopy_send_client": false, 00:06:10.446 "zerocopy_threshold": 0, 00:06:10.446 "tls_version": 0, 00:06:10.446 "enable_ktls": false, 00:06:10.446 "enable_new_session_tickets": false 00:06:10.446 } 00:06:10.446 } 00:06:10.446 ] 00:06:10.446 }, 00:06:10.446 { 00:06:10.446 "subsystem": "vmd", 00:06:10.446 "config": [] 00:06:10.446 }, 00:06:10.446 { 00:06:10.446 "subsystem": "accel", 00:06:10.446 "config": [ 00:06:10.446 { 00:06:10.446 "method": "accel_set_options", 00:06:10.446 "params": { 00:06:10.446 "small_cache_size": 128, 00:06:10.446 "large_cache_size": 16, 00:06:10.446 "task_count": 2048, 00:06:10.446 "sequence_count": 2048, 00:06:10.446 "buf_count": 2048 00:06:10.446 } 00:06:10.446 } 00:06:10.446 ] 00:06:10.446 }, 00:06:10.446 { 00:06:10.446 "subsystem": "bdev", 00:06:10.446 "config": [ 00:06:10.446 { 00:06:10.446 "method": "bdev_set_options", 00:06:10.446 "params": { 00:06:10.446 "bdev_io_pool_size": 65535, 00:06:10.446 "bdev_io_cache_size": 256, 00:06:10.446 "bdev_auto_examine": true, 00:06:10.446 "iobuf_small_cache_size": 128, 00:06:10.446 "iobuf_large_cache_size": 16 00:06:10.446 } 00:06:10.446 }, 00:06:10.446 { 00:06:10.446 "method": "bdev_raid_set_options", 00:06:10.446 "params": { 00:06:10.446 "process_window_size_kb": 1024 00:06:10.446 } 00:06:10.446 }, 00:06:10.446 { 00:06:10.446 "method": "bdev_iscsi_set_options", 00:06:10.446 "params": { 00:06:10.446 "timeout_sec": 30 00:06:10.446 } 00:06:10.446 }, 00:06:10.446 { 00:06:10.446 "method": 
"bdev_nvme_set_options", 00:06:10.446 "params": { 00:06:10.446 "action_on_timeout": "none", 00:06:10.446 "timeout_us": 0, 00:06:10.446 "timeout_admin_us": 0, 00:06:10.446 "keep_alive_timeout_ms": 10000, 00:06:10.446 "arbitration_burst": 0, 00:06:10.446 "low_priority_weight": 0, 00:06:10.446 "medium_priority_weight": 0, 00:06:10.446 "high_priority_weight": 0, 00:06:10.446 "nvme_adminq_poll_period_us": 10000, 00:06:10.446 "nvme_ioq_poll_period_us": 0, 00:06:10.446 "io_queue_requests": 0, 00:06:10.446 "delay_cmd_submit": true, 00:06:10.446 "transport_retry_count": 4, 00:06:10.446 "bdev_retry_count": 3, 00:06:10.446 "transport_ack_timeout": 0, 00:06:10.446 "ctrlr_loss_timeout_sec": 0, 00:06:10.446 "reconnect_delay_sec": 0, 00:06:10.446 "fast_io_fail_timeout_sec": 0, 00:06:10.446 "disable_auto_failback": false, 00:06:10.446 "generate_uuids": false, 00:06:10.446 "transport_tos": 0, 00:06:10.446 "nvme_error_stat": false, 00:06:10.446 "rdma_srq_size": 0, 00:06:10.446 "io_path_stat": false, 00:06:10.446 "allow_accel_sequence": false, 00:06:10.446 "rdma_max_cq_size": 0, 00:06:10.446 "rdma_cm_event_timeout_ms": 0, 00:06:10.446 "dhchap_digests": [ 00:06:10.446 "sha256", 00:06:10.446 "sha384", 00:06:10.446 "sha512" 00:06:10.446 ], 00:06:10.446 "dhchap_dhgroups": [ 00:06:10.446 "null", 00:06:10.446 "ffdhe2048", 00:06:10.446 "ffdhe3072", 00:06:10.446 "ffdhe4096", 00:06:10.446 "ffdhe6144", 00:06:10.446 "ffdhe8192" 00:06:10.446 ] 00:06:10.446 } 00:06:10.446 }, 00:06:10.446 { 00:06:10.446 "method": "bdev_nvme_set_hotplug", 00:06:10.446 "params": { 00:06:10.446 "period_us": 100000, 00:06:10.446 "enable": false 00:06:10.446 } 00:06:10.446 }, 00:06:10.446 { 00:06:10.446 "method": "bdev_wait_for_examine" 00:06:10.446 } 00:06:10.446 ] 00:06:10.446 }, 00:06:10.446 { 00:06:10.446 "subsystem": "scsi", 00:06:10.446 "config": null 00:06:10.446 }, 00:06:10.446 { 00:06:10.446 "subsystem": "scheduler", 00:06:10.446 "config": [ 00:06:10.446 { 00:06:10.446 "method": "framework_set_scheduler", 
00:06:10.446 "params": { 00:06:10.446 "name": "static" 00:06:10.446 } 00:06:10.446 } 00:06:10.446 ] 00:06:10.446 }, 00:06:10.446 { 00:06:10.446 "subsystem": "vhost_scsi", 00:06:10.446 "config": [] 00:06:10.446 }, 00:06:10.446 { 00:06:10.446 "subsystem": "vhost_blk", 00:06:10.446 "config": [] 00:06:10.446 }, 00:06:10.446 { 00:06:10.446 "subsystem": "ublk", 00:06:10.446 "config": [] 00:06:10.446 }, 00:06:10.446 { 00:06:10.446 "subsystem": "nbd", 00:06:10.446 "config": [] 00:06:10.446 }, 00:06:10.446 { 00:06:10.446 "subsystem": "nvmf", 00:06:10.446 "config": [ 00:06:10.446 { 00:06:10.446 "method": "nvmf_set_config", 00:06:10.446 "params": { 00:06:10.446 "discovery_filter": "match_any", 00:06:10.446 "admin_cmd_passthru": { 00:06:10.446 "identify_ctrlr": false 00:06:10.446 } 00:06:10.446 } 00:06:10.446 }, 00:06:10.446 { 00:06:10.446 "method": "nvmf_set_max_subsystems", 00:06:10.446 "params": { 00:06:10.446 "max_subsystems": 1024 00:06:10.446 } 00:06:10.446 }, 00:06:10.446 { 00:06:10.446 "method": "nvmf_set_crdt", 00:06:10.446 "params": { 00:06:10.446 "crdt1": 0, 00:06:10.446 "crdt2": 0, 00:06:10.446 "crdt3": 0 00:06:10.446 } 00:06:10.446 }, 00:06:10.446 { 00:06:10.446 "method": "nvmf_create_transport", 00:06:10.446 "params": { 00:06:10.446 "trtype": "TCP", 00:06:10.446 "max_queue_depth": 128, 00:06:10.446 "max_io_qpairs_per_ctrlr": 127, 00:06:10.446 "in_capsule_data_size": 4096, 00:06:10.446 "max_io_size": 131072, 00:06:10.446 "io_unit_size": 131072, 00:06:10.446 "max_aq_depth": 128, 00:06:10.446 "num_shared_buffers": 511, 00:06:10.446 "buf_cache_size": 4294967295, 00:06:10.446 "dif_insert_or_strip": false, 00:06:10.446 "zcopy": false, 00:06:10.446 "c2h_success": true, 00:06:10.446 "sock_priority": 0, 00:06:10.446 "abort_timeout_sec": 1, 00:06:10.446 "ack_timeout": 0, 00:06:10.446 "data_wr_pool_size": 0 00:06:10.446 } 00:06:10.446 } 00:06:10.446 ] 00:06:10.446 }, 00:06:10.446 { 00:06:10.446 "subsystem": "iscsi", 00:06:10.446 "config": [ 00:06:10.446 { 00:06:10.446 
"method": "iscsi_set_options", 00:06:10.446 "params": { 00:06:10.446 "node_base": "iqn.2016-06.io.spdk", 00:06:10.446 "max_sessions": 128, 00:06:10.446 "max_connections_per_session": 2, 00:06:10.446 "max_queue_depth": 64, 00:06:10.446 "default_time2wait": 2, 00:06:10.446 "default_time2retain": 20, 00:06:10.446 "first_burst_length": 8192, 00:06:10.446 "immediate_data": true, 00:06:10.446 "allow_duplicated_isid": false, 00:06:10.446 "error_recovery_level": 0, 00:06:10.446 "nop_timeout": 60, 00:06:10.446 "nop_in_interval": 30, 00:06:10.446 "disable_chap": false, 00:06:10.446 "require_chap": false, 00:06:10.446 "mutual_chap": false, 00:06:10.446 "chap_group": 0, 00:06:10.446 "max_large_datain_per_connection": 64, 00:06:10.446 "max_r2t_per_connection": 4, 00:06:10.446 "pdu_pool_size": 36864, 00:06:10.446 "immediate_data_pool_size": 16384, 00:06:10.446 "data_out_pool_size": 2048 00:06:10.446 } 00:06:10.446 } 00:06:10.446 ] 00:06:10.446 } 00:06:10.446 ] 00:06:10.446 } 00:06:10.446 16:14:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:06:10.446 16:14:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 2882958 00:06:10.446 16:14:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@949 -- # '[' -z 2882958 ']' 00:06:10.446 16:14:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # kill -0 2882958 00:06:10.446 16:14:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # uname 00:06:10.446 16:14:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:06:10.446 16:14:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2882958 00:06:10.446 16:14:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:06:10.446 16:14:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:06:10.446 16:14:37 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@967 -- # echo 'killing process with pid 2882958' 00:06:10.446 killing process with pid 2882958 00:06:10.446 16:14:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # kill 2882958 00:06:10.446 16:14:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # wait 2882958 00:06:10.708 16:14:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=2883292 00:06:10.708 16:14:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:06:10.708 16:14:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:06:15.999 16:14:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 2883292 00:06:15.999 16:14:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@949 -- # '[' -z 2883292 ']' 00:06:15.999 16:14:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # kill -0 2883292 00:06:15.999 16:14:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # uname 00:06:15.999 16:14:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:06:15.999 16:14:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2883292 00:06:15.999 16:14:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:06:15.999 16:14:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:06:15.999 16:14:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2883292' 00:06:15.999 killing process with pid 2883292 00:06:15.999 16:14:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # kill 2883292 00:06:15.999 16:14:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # wait 2883292 
00:06:15.999 16:14:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:06:15.999 16:14:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:06:15.999 00:06:15.999 real 0m6.552s 00:06:15.999 user 0m6.448s 00:06:15.999 sys 0m0.518s 00:06:15.999 16:14:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:15.999 16:14:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:15.999 ************************************ 00:06:15.999 END TEST skip_rpc_with_json 00:06:15.999 ************************************ 00:06:15.999 16:14:42 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:06:15.999 16:14:42 skip_rpc -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:06:15.999 16:14:42 skip_rpc -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:15.999 16:14:42 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:15.999 ************************************ 00:06:15.999 START TEST skip_rpc_with_delay 00:06:15.999 ************************************ 00:06:15.999 16:14:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1124 -- # test_skip_rpc_with_delay 00:06:15.999 16:14:42 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:15.999 16:14:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@649 -- # local es=0 00:06:15.999 16:14:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:15.999 16:14:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@637 -- # local 
arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:15.999 16:14:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:15.999 16:14:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@641 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:15.999 16:14:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:15.999 16:14:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@643 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:15.999 16:14:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:15.999 16:14:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@643 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:15.999 16:14:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@643 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:06:15.999 16:14:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:15.999 [2024-06-07 16:14:42.802422] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:06:15.999 [2024-06-07 16:14:42.802509] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:06:15.999 16:14:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # es=1 00:06:15.999 16:14:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:06:15.999 16:14:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:06:15.999 16:14:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:06:15.999 00:06:15.999 real 0m0.080s 00:06:15.999 user 0m0.052s 00:06:15.999 sys 0m0.028s 00:06:15.999 16:14:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:15.999 16:14:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:06:15.999 ************************************ 00:06:15.999 END TEST skip_rpc_with_delay 00:06:15.999 ************************************ 00:06:16.260 16:14:42 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:06:16.260 16:14:42 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:06:16.260 16:14:42 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:06:16.260 16:14:42 skip_rpc -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:06:16.260 16:14:42 skip_rpc -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:16.260 16:14:42 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:16.260 ************************************ 00:06:16.260 START TEST exit_on_failed_rpc_init 00:06:16.260 ************************************ 00:06:16.260 16:14:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1124 -- # test_exit_on_failed_rpc_init 00:06:16.260 16:14:42 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=2884386 00:06:16.260 16:14:42 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 2884386 00:06:16.260 16:14:42 
skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:16.260 16:14:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@830 -- # '[' -z 2884386 ']' 00:06:16.260 16:14:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:16.260 16:14:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # local max_retries=100 00:06:16.260 16:14:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:16.260 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:16.260 16:14:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # xtrace_disable 00:06:16.260 16:14:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:16.260 [2024-06-07 16:14:42.959254] Starting SPDK v24.09-pre git sha1 5a57befde / DPDK 24.03.0 initialization... 
00:06:16.260 [2024-06-07 16:14:42.959321] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2884386 ] 00:06:16.260 EAL: No free 2048 kB hugepages reported on node 1 00:06:16.260 [2024-06-07 16:14:43.024276] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:16.260 [2024-06-07 16:14:43.101091] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.206 16:14:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:06:17.206 16:14:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@863 -- # return 0 00:06:17.206 16:14:43 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:17.206 16:14:43 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:06:17.206 16:14:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@649 -- # local es=0 00:06:17.206 16:14:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:06:17.207 16:14:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@637 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:17.207 16:14:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:17.207 16:14:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@641 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:17.207 16:14:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:17.207 16:14:43 skip_rpc.exit_on_failed_rpc_init -- 
common/autotest_common.sh@643 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:17.207 16:14:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:17.207 16:14:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@643 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:17.207 16:14:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@643 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:06:17.207 16:14:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:06:17.207 [2024-06-07 16:14:43.810824] Starting SPDK v24.09-pre git sha1 5a57befde / DPDK 24.03.0 initialization... 00:06:17.207 [2024-06-07 16:14:43.810874] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2884688 ] 00:06:17.207 EAL: No free 2048 kB hugepages reported on node 1 00:06:17.207 [2024-06-07 16:14:43.887439] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:17.207 [2024-06-07 16:14:43.951395] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:06:17.207 [2024-06-07 16:14:43.951461] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:06:17.207 [2024-06-07 16:14:43.951471] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:06:17.207 [2024-06-07 16:14:43.951478] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:17.207 16:14:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # es=234 00:06:17.207 16:14:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:06:17.207 16:14:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # es=106 00:06:17.207 16:14:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # case "$es" in 00:06:17.207 16:14:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@669 -- # es=1 00:06:17.207 16:14:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:06:17.207 16:14:44 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:06:17.207 16:14:44 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 2884386 00:06:17.207 16:14:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@949 -- # '[' -z 2884386 ']' 00:06:17.207 16:14:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # kill -0 2884386 00:06:17.207 16:14:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # uname 00:06:17.207 16:14:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:06:17.207 16:14:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2884386 00:06:17.467 16:14:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:06:17.467 16:14:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:06:17.467 16:14:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2884386' 
00:06:17.467 killing process with pid 2884386 00:06:17.467 16:14:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@968 -- # kill 2884386 00:06:17.467 16:14:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # wait 2884386 00:06:17.467 00:06:17.467 real 0m1.373s 00:06:17.467 user 0m1.620s 00:06:17.467 sys 0m0.377s 00:06:17.467 16:14:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:17.467 16:14:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:17.467 ************************************ 00:06:17.467 END TEST exit_on_failed_rpc_init 00:06:17.467 ************************************ 00:06:17.467 16:14:44 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:06:17.467 00:06:17.467 real 0m13.691s 00:06:17.467 user 0m13.361s 00:06:17.467 sys 0m1.425s 00:06:17.467 16:14:44 skip_rpc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:17.467 16:14:44 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:17.467 ************************************ 00:06:17.467 END TEST skip_rpc 00:06:17.467 ************************************ 00:06:17.729 16:14:44 -- spdk/autotest.sh@171 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:06:17.729 16:14:44 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:06:17.729 16:14:44 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:17.729 16:14:44 -- common/autotest_common.sh@10 -- # set +x 00:06:17.729 ************************************ 00:06:17.729 START TEST rpc_client 00:06:17.729 ************************************ 00:06:17.729 16:14:44 rpc_client -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:06:17.729 * Looking for test storage... 
00:06:17.729 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:06:17.729 16:14:44 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:06:17.729 OK 00:06:17.729 16:14:44 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:06:17.729 00:06:17.729 real 0m0.129s 00:06:17.729 user 0m0.063s 00:06:17.729 sys 0m0.074s 00:06:17.729 16:14:44 rpc_client -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:17.729 16:14:44 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:06:17.729 ************************************ 00:06:17.729 END TEST rpc_client 00:06:17.729 ************************************ 00:06:17.729 16:14:44 -- spdk/autotest.sh@172 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:06:17.729 16:14:44 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:06:17.729 16:14:44 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:17.729 16:14:44 -- common/autotest_common.sh@10 -- # set +x 00:06:17.991 ************************************ 00:06:17.991 START TEST json_config 00:06:17.991 ************************************ 00:06:17.991 16:14:44 json_config -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:06:17.991 16:14:44 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:17.991 16:14:44 json_config -- nvmf/common.sh@7 -- # uname -s 00:06:17.991 16:14:44 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:17.991 16:14:44 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:17.991 16:14:44 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:17.991 16:14:44 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:17.991 16:14:44 json_config -- 
nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:17.991 16:14:44 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:17.991 16:14:44 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:17.991 16:14:44 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:17.991 16:14:44 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:17.991 16:14:44 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:17.991 16:14:44 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:06:17.991 16:14:44 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:06:17.991 16:14:44 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:17.991 16:14:44 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:17.991 16:14:44 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:17.991 16:14:44 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:17.991 16:14:44 json_config -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:17.991 16:14:44 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:17.991 16:14:44 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:17.991 16:14:44 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:17.991 16:14:44 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:06:17.991 16:14:44 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:17.991 16:14:44 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:17.991 16:14:44 json_config -- paths/export.sh@5 -- # export PATH 00:06:17.991 16:14:44 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:17.991 16:14:44 json_config -- nvmf/common.sh@47 -- # : 0 00:06:17.991 16:14:44 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:17.991 16:14:44 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:17.991 16:14:44 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:17.991 16:14:44 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:17.991 16:14:44 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:17.991 16:14:44 json_config -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:17.991 16:14:44 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:17.991 16:14:44 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:17.991 16:14:44 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:06:17.991 16:14:44 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:06:17.991 16:14:44 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:06:17.991 16:14:44 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:06:17.991 16:14:44 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:06:17.991 16:14:44 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:06:17.991 16:14:44 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:06:17.991 16:14:44 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:06:17.991 16:14:44 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:06:17.991 16:14:44 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:06:17.991 16:14:44 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:06:17.991 16:14:44 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:06:17.991 16:14:44 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:06:17.991 16:14:44 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:06:17.991 16:14:44 json_config -- 
json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:17.991 16:14:44 json_config -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:06:17.991 INFO: JSON configuration test init 00:06:17.991 16:14:44 json_config -- json_config/json_config.sh@357 -- # json_config_test_init 00:06:17.991 16:14:44 json_config -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:06:17.991 16:14:44 json_config -- common/autotest_common.sh@723 -- # xtrace_disable 00:06:17.991 16:14:44 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:17.991 16:14:44 json_config -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:06:17.991 16:14:44 json_config -- common/autotest_common.sh@723 -- # xtrace_disable 00:06:17.991 16:14:44 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:17.991 16:14:44 json_config -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:06:17.991 16:14:44 json_config -- json_config/common.sh@9 -- # local app=target 00:06:17.991 16:14:44 json_config -- json_config/common.sh@10 -- # shift 00:06:17.991 16:14:44 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:17.991 16:14:44 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:17.992 16:14:44 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:06:17.992 16:14:44 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:17.992 16:14:44 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:17.992 16:14:44 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=2884888 00:06:17.992 16:14:44 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:17.992 Waiting for target to run... 
00:06:17.992 16:14:44 json_config -- json_config/common.sh@25 -- # waitforlisten 2884888 /var/tmp/spdk_tgt.sock 00:06:17.992 16:14:44 json_config -- common/autotest_common.sh@830 -- # '[' -z 2884888 ']' 00:06:17.992 16:14:44 json_config -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:17.992 16:14:44 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:06:17.992 16:14:44 json_config -- common/autotest_common.sh@835 -- # local max_retries=100 00:06:17.992 16:14:44 json_config -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:17.992 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:17.992 16:14:44 json_config -- common/autotest_common.sh@839 -- # xtrace_disable 00:06:17.992 16:14:44 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:17.992 [2024-06-07 16:14:44.780830] Starting SPDK v24.09-pre git sha1 5a57befde / DPDK 24.03.0 initialization... 
00:06:17.992 [2024-06-07 16:14:44.780900] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2884888 ] 00:06:17.992 EAL: No free 2048 kB hugepages reported on node 1 00:06:18.252 [2024-06-07 16:14:45.047192] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:18.252 [2024-06-07 16:14:45.097274] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.823 16:14:45 json_config -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:06:18.823 16:14:45 json_config -- common/autotest_common.sh@863 -- # return 0 00:06:18.823 16:14:45 json_config -- json_config/common.sh@26 -- # echo '' 00:06:18.823 00:06:18.823 16:14:45 json_config -- json_config/json_config.sh@269 -- # create_accel_config 00:06:18.823 16:14:45 json_config -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:06:18.823 16:14:45 json_config -- common/autotest_common.sh@723 -- # xtrace_disable 00:06:18.823 16:14:45 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:18.823 16:14:45 json_config -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:06:18.823 16:14:45 json_config -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:06:18.823 16:14:45 json_config -- common/autotest_common.sh@729 -- # xtrace_disable 00:06:18.823 16:14:45 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:18.823 16:14:45 json_config -- json_config/json_config.sh@273 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:06:18.823 16:14:45 json_config -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:06:18.823 16:14:45 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:06:19.395 16:14:46 json_config -- 
json_config/json_config.sh@276 -- # tgt_check_notification_types 00:06:19.395 16:14:46 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:06:19.395 16:14:46 json_config -- common/autotest_common.sh@723 -- # xtrace_disable 00:06:19.395 16:14:46 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:19.395 16:14:46 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:06:19.395 16:14:46 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:06:19.395 16:14:46 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:06:19.395 16:14:46 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:06:19.395 16:14:46 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:06:19.395 16:14:46 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:06:19.658 16:14:46 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:06:19.658 16:14:46 json_config -- json_config/json_config.sh@48 -- # local get_types 00:06:19.658 16:14:46 json_config -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:06:19.658 16:14:46 json_config -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:06:19.658 16:14:46 json_config -- common/autotest_common.sh@729 -- # xtrace_disable 00:06:19.658 16:14:46 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:19.658 16:14:46 json_config -- json_config/json_config.sh@55 -- # return 0 00:06:19.658 16:14:46 json_config -- json_config/json_config.sh@278 -- # [[ 0 -eq 1 ]] 00:06:19.658 16:14:46 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:06:19.658 16:14:46 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 
00:06:19.658 16:14:46 json_config -- json_config/json_config.sh@290 -- # [[ 1 -eq 1 ]] 00:06:19.658 16:14:46 json_config -- json_config/json_config.sh@291 -- # create_nvmf_subsystem_config 00:06:19.658 16:14:46 json_config -- json_config/json_config.sh@230 -- # timing_enter create_nvmf_subsystem_config 00:06:19.658 16:14:46 json_config -- common/autotest_common.sh@723 -- # xtrace_disable 00:06:19.658 16:14:46 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:19.658 16:14:46 json_config -- json_config/json_config.sh@232 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:06:19.658 16:14:46 json_config -- json_config/json_config.sh@233 -- # [[ tcp == \r\d\m\a ]] 00:06:19.658 16:14:46 json_config -- json_config/json_config.sh@237 -- # [[ -z 127.0.0.1 ]] 00:06:19.658 16:14:46 json_config -- json_config/json_config.sh@242 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:06:19.658 16:14:46 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:06:19.658 MallocForNvmf0 00:06:19.658 16:14:46 json_config -- json_config/json_config.sh@243 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:06:19.658 16:14:46 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:06:19.918 MallocForNvmf1 00:06:19.918 16:14:46 json_config -- json_config/json_config.sh@245 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:06:19.918 16:14:46 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:06:20.179 [2024-06-07 16:14:46.799107] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:20.179 16:14:46 json_config -- json_config/json_config.sh@246 -- # tgt_rpc 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:20.179 16:14:46 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:20.179 16:14:46 json_config -- json_config/json_config.sh@247 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:06:20.179 16:14:46 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:06:20.439 16:14:47 json_config -- json_config/json_config.sh@248 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:06:20.439 16:14:47 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:06:20.699 16:14:47 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:06:20.699 16:14:47 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:06:20.699 [2024-06-07 16:14:47.437141] tcp.c: 982:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:06:20.699 16:14:47 json_config -- json_config/json_config.sh@251 -- # timing_exit create_nvmf_subsystem_config 00:06:20.699 16:14:47 json_config -- common/autotest_common.sh@729 -- # xtrace_disable 00:06:20.699 16:14:47 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:20.699 16:14:47 json_config -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target 00:06:20.699 16:14:47 
json_config -- common/autotest_common.sh@729 -- # xtrace_disable 00:06:20.699 16:14:47 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:20.699 16:14:47 json_config -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:06:20.699 16:14:47 json_config -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:20.699 16:14:47 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:20.972 MallocBdevForConfigChangeCheck 00:06:20.972 16:14:47 json_config -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:06:20.972 16:14:47 json_config -- common/autotest_common.sh@729 -- # xtrace_disable 00:06:20.972 16:14:47 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:20.972 16:14:47 json_config -- json_config/json_config.sh@359 -- # tgt_rpc save_config 00:06:20.972 16:14:47 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:21.262 16:14:48 json_config -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down applications...' 00:06:21.262 INFO: shutting down applications... 
00:06:21.262 16:14:48 json_config -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:06:21.262 16:14:48 json_config -- json_config/json_config.sh@368 -- # json_config_clear target 00:06:21.262 16:14:48 json_config -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:06:21.262 16:14:48 json_config -- json_config/json_config.sh@333 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:06:21.833 Calling clear_iscsi_subsystem 00:06:21.833 Calling clear_nvmf_subsystem 00:06:21.833 Calling clear_nbd_subsystem 00:06:21.833 Calling clear_ublk_subsystem 00:06:21.833 Calling clear_vhost_blk_subsystem 00:06:21.833 Calling clear_vhost_scsi_subsystem 00:06:21.833 Calling clear_bdev_subsystem 00:06:21.833 16:14:48 json_config -- json_config/json_config.sh@337 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:06:21.833 16:14:48 json_config -- json_config/json_config.sh@343 -- # count=100 00:06:21.833 16:14:48 json_config -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:06:21.833 16:14:48 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:21.833 16:14:48 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:06:21.833 16:14:48 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:06:22.093 16:14:48 json_config -- json_config/json_config.sh@345 -- # break 00:06:22.093 16:14:48 json_config -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:06:22.093 16:14:48 json_config -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:06:22.093 16:14:48 json_config -- 
json_config/common.sh@31 -- # local app=target 00:06:22.093 16:14:48 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:22.093 16:14:48 json_config -- json_config/common.sh@35 -- # [[ -n 2884888 ]] 00:06:22.093 16:14:48 json_config -- json_config/common.sh@38 -- # kill -SIGINT 2884888 00:06:22.093 16:14:48 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:22.093 16:14:48 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:22.093 16:14:48 json_config -- json_config/common.sh@41 -- # kill -0 2884888 00:06:22.093 16:14:48 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:06:22.666 16:14:49 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:06:22.666 16:14:49 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:22.666 16:14:49 json_config -- json_config/common.sh@41 -- # kill -0 2884888 00:06:22.666 16:14:49 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:22.666 16:14:49 json_config -- json_config/common.sh@43 -- # break 00:06:22.666 16:14:49 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:22.667 16:14:49 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:22.667 SPDK target shutdown done 00:06:22.667 16:14:49 json_config -- json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 00:06:22.667 INFO: relaunching applications... 
00:06:22.667 16:14:49 json_config -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:22.667 16:14:49 json_config -- json_config/common.sh@9 -- # local app=target 00:06:22.667 16:14:49 json_config -- json_config/common.sh@10 -- # shift 00:06:22.667 16:14:49 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:22.667 16:14:49 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:22.667 16:14:49 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:06:22.667 16:14:49 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:22.667 16:14:49 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:22.667 16:14:49 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=2885941 00:06:22.667 16:14:49 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:22.667 Waiting for target to run... 00:06:22.667 16:14:49 json_config -- json_config/common.sh@25 -- # waitforlisten 2885941 /var/tmp/spdk_tgt.sock 00:06:22.667 16:14:49 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:22.667 16:14:49 json_config -- common/autotest_common.sh@830 -- # '[' -z 2885941 ']' 00:06:22.667 16:14:49 json_config -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:22.667 16:14:49 json_config -- common/autotest_common.sh@835 -- # local max_retries=100 00:06:22.667 16:14:49 json_config -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:22.667 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
00:06:22.667 16:14:49 json_config -- common/autotest_common.sh@839 -- # xtrace_disable 00:06:22.667 16:14:49 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:22.667 [2024-06-07 16:14:49.324997] Starting SPDK v24.09-pre git sha1 5a57befde / DPDK 24.03.0 initialization... 00:06:22.667 [2024-06-07 16:14:49.325054] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2885941 ] 00:06:22.667 EAL: No free 2048 kB hugepages reported on node 1 00:06:22.927 [2024-06-07 16:14:49.607111] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:22.927 [2024-06-07 16:14:49.661116] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.499 [2024-06-07 16:14:50.153657] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:23.499 [2024-06-07 16:14:50.186023] tcp.c: 982:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:06:23.499 16:14:50 json_config -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:06:23.499 16:14:50 json_config -- common/autotest_common.sh@863 -- # return 0 00:06:23.499 16:14:50 json_config -- json_config/common.sh@26 -- # echo '' 00:06:23.499 00:06:23.499 16:14:50 json_config -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:06:23.499 16:14:50 json_config -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 00:06:23.499 INFO: Checking if target configuration is the same... 
00:06:23.499 16:14:50 json_config -- json_config/json_config.sh@378 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:23.499 16:14:50 json_config -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:06:23.499 16:14:50 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:23.499 + '[' 2 -ne 2 ']' 00:06:23.499 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:06:23.499 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:06:23.499 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:23.499 +++ basename /dev/fd/62 00:06:23.499 ++ mktemp /tmp/62.XXX 00:06:23.499 + tmp_file_1=/tmp/62.HaK 00:06:23.499 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:23.499 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:23.499 + tmp_file_2=/tmp/spdk_tgt_config.json.QAF 00:06:23.499 + ret=0 00:06:23.499 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:23.759 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:23.760 + diff -u /tmp/62.HaK /tmp/spdk_tgt_config.json.QAF 00:06:23.760 + echo 'INFO: JSON config files are the same' 00:06:23.760 INFO: JSON config files are the same 00:06:23.760 + rm /tmp/62.HaK /tmp/spdk_tgt_config.json.QAF 00:06:23.760 + exit 0 00:06:23.760 16:14:50 json_config -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:06:23.760 16:14:50 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:06:23.760 INFO: changing configuration and checking if this can be detected... 
00:06:23.760 16:14:50 json_config -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:23.760 16:14:50 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:24.020 16:14:50 json_config -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:06:24.020 16:14:50 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:24.020 16:14:50 json_config -- json_config/json_config.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:24.020 + '[' 2 -ne 2 ']' 00:06:24.020 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:06:24.020 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
00:06:24.020 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:24.020 +++ basename /dev/fd/62 00:06:24.020 ++ mktemp /tmp/62.XXX 00:06:24.020 + tmp_file_1=/tmp/62.FDf 00:06:24.020 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:24.020 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:24.020 + tmp_file_2=/tmp/spdk_tgt_config.json.nMC 00:06:24.020 + ret=0 00:06:24.020 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:24.280 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:24.280 + diff -u /tmp/62.FDf /tmp/spdk_tgt_config.json.nMC 00:06:24.280 + ret=1 00:06:24.280 + echo '=== Start of file: /tmp/62.FDf ===' 00:06:24.280 + cat /tmp/62.FDf 00:06:24.280 + echo '=== End of file: /tmp/62.FDf ===' 00:06:24.280 + echo '' 00:06:24.280 + echo '=== Start of file: /tmp/spdk_tgt_config.json.nMC ===' 00:06:24.280 + cat /tmp/spdk_tgt_config.json.nMC 00:06:24.280 + echo '=== End of file: /tmp/spdk_tgt_config.json.nMC ===' 00:06:24.280 + echo '' 00:06:24.280 + rm /tmp/62.FDf /tmp/spdk_tgt_config.json.nMC 00:06:24.280 + exit 1 00:06:24.280 16:14:51 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 00:06:24.280 INFO: configuration change detected. 
00:06:24.280 16:14:51 json_config -- json_config/json_config.sh@394 -- # json_config_test_fini 00:06:24.280 16:14:51 json_config -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:06:24.280 16:14:51 json_config -- common/autotest_common.sh@723 -- # xtrace_disable 00:06:24.280 16:14:51 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:24.280 16:14:51 json_config -- json_config/json_config.sh@307 -- # local ret=0 00:06:24.280 16:14:51 json_config -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:06:24.280 16:14:51 json_config -- json_config/json_config.sh@317 -- # [[ -n 2885941 ]] 00:06:24.280 16:14:51 json_config -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:06:24.280 16:14:51 json_config -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:06:24.280 16:14:51 json_config -- common/autotest_common.sh@723 -- # xtrace_disable 00:06:24.280 16:14:51 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:24.280 16:14:51 json_config -- json_config/json_config.sh@186 -- # [[ 0 -eq 1 ]] 00:06:24.280 16:14:51 json_config -- json_config/json_config.sh@193 -- # uname -s 00:06:24.280 16:14:51 json_config -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]] 00:06:24.280 16:14:51 json_config -- json_config/json_config.sh@194 -- # rm -f /sample_aio 00:06:24.280 16:14:51 json_config -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:06:24.280 16:14:51 json_config -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:06:24.280 16:14:51 json_config -- common/autotest_common.sh@729 -- # xtrace_disable 00:06:24.280 16:14:51 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:24.540 16:14:51 json_config -- json_config/json_config.sh@323 -- # killprocess 2885941 00:06:24.540 16:14:51 json_config -- common/autotest_common.sh@949 -- # '[' -z 2885941 ']' 00:06:24.540 16:14:51 json_config -- common/autotest_common.sh@953 -- # kill -0 
2885941 00:06:24.540 16:14:51 json_config -- common/autotest_common.sh@954 -- # uname 00:06:24.540 16:14:51 json_config -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:06:24.540 16:14:51 json_config -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2885941 00:06:24.540 16:14:51 json_config -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:06:24.540 16:14:51 json_config -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:06:24.540 16:14:51 json_config -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2885941' 00:06:24.540 killing process with pid 2885941 00:06:24.540 16:14:51 json_config -- common/autotest_common.sh@968 -- # kill 2885941 00:06:24.540 16:14:51 json_config -- common/autotest_common.sh@973 -- # wait 2885941 00:06:24.801 16:14:51 json_config -- json_config/json_config.sh@326 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:24.801 16:14:51 json_config -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:06:24.801 16:14:51 json_config -- common/autotest_common.sh@729 -- # xtrace_disable 00:06:24.801 16:14:51 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:24.801 16:14:51 json_config -- json_config/json_config.sh@328 -- # return 0 00:06:24.801 16:14:51 json_config -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:06:24.801 INFO: Success 00:06:24.801 00:06:24.801 real 0m6.941s 00:06:24.801 user 0m8.440s 00:06:24.801 sys 0m1.672s 00:06:24.801 16:14:51 json_config -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:24.801 16:14:51 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:24.801 ************************************ 00:06:24.801 END TEST json_config 00:06:24.801 ************************************ 00:06:24.801 16:14:51 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:06:24.801 16:14:51 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:06:24.801 16:14:51 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:24.801 16:14:51 -- common/autotest_common.sh@10 -- # set +x 00:06:24.801 ************************************ 00:06:24.801 START TEST json_config_extra_key 00:06:24.801 ************************************ 00:06:24.801 16:14:51 json_config_extra_key -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:06:25.062 16:14:51 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:25.062 16:14:51 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:06:25.062 16:14:51 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:25.062 16:14:51 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:25.062 16:14:51 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:25.062 16:14:51 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:25.062 16:14:51 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:25.062 16:14:51 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:25.062 16:14:51 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:25.062 16:14:51 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:25.062 16:14:51 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:25.062 16:14:51 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:25.062 16:14:51 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:06:25.062 16:14:51 
json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:06:25.062 16:14:51 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:25.062 16:14:51 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:25.062 16:14:51 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:25.062 16:14:51 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:25.062 16:14:51 json_config_extra_key -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:25.062 16:14:51 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:25.062 16:14:51 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:25.062 16:14:51 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:25.062 16:14:51 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:25.062 16:14:51 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:25.062 16:14:51 json_config_extra_key -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:25.062 16:14:51 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:06:25.062 16:14:51 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:25.062 16:14:51 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:06:25.062 16:14:51 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:25.062 16:14:51 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:25.062 16:14:51 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:25.062 16:14:51 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:25.062 16:14:51 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:25.062 16:14:51 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:25.062 16:14:51 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:25.062 16:14:51 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:25.062 16:14:51 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:06:25.062 16:14:51 
json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:06:25.062 16:14:51 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:06:25.062 16:14:51 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:06:25.062 16:14:51 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:06:25.062 16:14:51 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:06:25.062 16:14:51 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:06:25.062 16:14:51 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:06:25.062 16:14:51 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:06:25.062 16:14:51 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:25.062 16:14:51 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:06:25.062 INFO: launching applications... 
00:06:25.062 16:14:51 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:06:25.062 16:14:51 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:06:25.062 16:14:51 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:06:25.062 16:14:51 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:25.062 16:14:51 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:25.062 16:14:51 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:06:25.062 16:14:51 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:25.062 16:14:51 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:25.062 16:14:51 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=2886614 00:06:25.062 16:14:51 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:25.062 Waiting for target to run... 
00:06:25.062 16:14:51 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 2886614 /var/tmp/spdk_tgt.sock 00:06:25.062 16:14:51 json_config_extra_key -- common/autotest_common.sh@830 -- # '[' -z 2886614 ']' 00:06:25.062 16:14:51 json_config_extra_key -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:25.062 16:14:51 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:06:25.062 16:14:51 json_config_extra_key -- common/autotest_common.sh@835 -- # local max_retries=100 00:06:25.062 16:14:51 json_config_extra_key -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:25.062 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:25.062 16:14:51 json_config_extra_key -- common/autotest_common.sh@839 -- # xtrace_disable 00:06:25.062 16:14:51 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:25.062 [2024-06-07 16:14:51.773634] Starting SPDK v24.09-pre git sha1 5a57befde / DPDK 24.03.0 initialization... 
00:06:25.062 [2024-06-07 16:14:51.773706] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2886614 ] 00:06:25.063 EAL: No free 2048 kB hugepages reported on node 1 00:06:25.322 [2024-06-07 16:14:52.043181] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:25.322 [2024-06-07 16:14:52.095824] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.892 16:14:52 json_config_extra_key -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:06:25.892 16:14:52 json_config_extra_key -- common/autotest_common.sh@863 -- # return 0 00:06:25.892 16:14:52 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:06:25.892 00:06:25.893 16:14:52 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:06:25.893 INFO: shutting down applications... 
00:06:25.893 16:14:52 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:06:25.893 16:14:52 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:06:25.893 16:14:52 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:25.893 16:14:52 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 2886614 ]] 00:06:25.893 16:14:52 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 2886614 00:06:25.893 16:14:52 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:25.893 16:14:52 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:25.893 16:14:52 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2886614 00:06:25.893 16:14:52 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:26.465 16:14:53 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:26.465 16:14:53 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:26.465 16:14:53 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2886614 00:06:26.465 16:14:53 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:26.465 16:14:53 json_config_extra_key -- json_config/common.sh@43 -- # break 00:06:26.465 16:14:53 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:26.465 16:14:53 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:26.465 SPDK target shutdown done 00:06:26.465 16:14:53 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:06:26.465 Success 00:06:26.465 00:06:26.465 real 0m1.433s 00:06:26.465 user 0m1.074s 00:06:26.465 sys 0m0.372s 00:06:26.465 16:14:53 json_config_extra_key -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:26.465 16:14:53 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:26.465 
************************************ 00:06:26.465 END TEST json_config_extra_key 00:06:26.465 ************************************ 00:06:26.465 16:14:53 -- spdk/autotest.sh@174 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:26.465 16:14:53 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:06:26.465 16:14:53 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:26.465 16:14:53 -- common/autotest_common.sh@10 -- # set +x 00:06:26.465 ************************************ 00:06:26.465 START TEST alias_rpc 00:06:26.465 ************************************ 00:06:26.465 16:14:53 alias_rpc -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:26.465 * Looking for test storage... 00:06:26.465 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:06:26.465 16:14:53 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:26.465 16:14:53 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=2886907 00:06:26.465 16:14:53 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 2886907 00:06:26.465 16:14:53 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:26.465 16:14:53 alias_rpc -- common/autotest_common.sh@830 -- # '[' -z 2886907 ']' 00:06:26.465 16:14:53 alias_rpc -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:26.465 16:14:53 alias_rpc -- common/autotest_common.sh@835 -- # local max_retries=100 00:06:26.465 16:14:53 alias_rpc -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:26.465 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:26.465 16:14:53 alias_rpc -- common/autotest_common.sh@839 -- # xtrace_disable 00:06:26.465 16:14:53 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:26.465 [2024-06-07 16:14:53.274782] Starting SPDK v24.09-pre git sha1 5a57befde / DPDK 24.03.0 initialization... 00:06:26.465 [2024-06-07 16:14:53.274842] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2886907 ] 00:06:26.465 EAL: No free 2048 kB hugepages reported on node 1 00:06:26.725 [2024-06-07 16:14:53.335289] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:26.725 [2024-06-07 16:14:53.400074] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.294 16:14:54 alias_rpc -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:06:27.294 16:14:54 alias_rpc -- common/autotest_common.sh@863 -- # return 0 00:06:27.294 16:14:54 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:06:27.553 16:14:54 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 2886907 00:06:27.553 16:14:54 alias_rpc -- common/autotest_common.sh@949 -- # '[' -z 2886907 ']' 00:06:27.553 16:14:54 alias_rpc -- common/autotest_common.sh@953 -- # kill -0 2886907 00:06:27.553 16:14:54 alias_rpc -- common/autotest_common.sh@954 -- # uname 00:06:27.553 16:14:54 alias_rpc -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:06:27.553 16:14:54 alias_rpc -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2886907 00:06:27.553 16:14:54 alias_rpc -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:06:27.553 16:14:54 alias_rpc -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:06:27.553 16:14:54 alias_rpc -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2886907' 00:06:27.553 killing process with 
pid 2886907 00:06:27.553 16:14:54 alias_rpc -- common/autotest_common.sh@968 -- # kill 2886907 00:06:27.553 16:14:54 alias_rpc -- common/autotest_common.sh@973 -- # wait 2886907 00:06:27.814 00:06:27.814 real 0m1.346s 00:06:27.814 user 0m1.447s 00:06:27.814 sys 0m0.377s 00:06:27.814 16:14:54 alias_rpc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:27.814 16:14:54 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:27.814 ************************************ 00:06:27.814 END TEST alias_rpc 00:06:27.814 ************************************ 00:06:27.814 16:14:54 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:06:27.814 16:14:54 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:06:27.814 16:14:54 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:06:27.814 16:14:54 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:27.814 16:14:54 -- common/autotest_common.sh@10 -- # set +x 00:06:27.814 ************************************ 00:06:27.814 START TEST spdkcli_tcp 00:06:27.814 ************************************ 00:06:27.814 16:14:54 spdkcli_tcp -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:06:27.814 * Looking for test storage... 
00:06:27.814 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:06:27.814 16:14:54 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:06:27.814 16:14:54 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:06:27.814 16:14:54 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:06:27.814 16:14:54 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:06:27.814 16:14:54 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:06:27.814 16:14:54 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:06:27.814 16:14:54 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:06:27.814 16:14:54 spdkcli_tcp -- common/autotest_common.sh@723 -- # xtrace_disable 00:06:27.814 16:14:54 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:27.814 16:14:54 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=2887181 00:06:27.814 16:14:54 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 2887181 00:06:27.814 16:14:54 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:06:27.814 16:14:54 spdkcli_tcp -- common/autotest_common.sh@830 -- # '[' -z 2887181 ']' 00:06:27.814 16:14:54 spdkcli_tcp -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:27.814 16:14:54 spdkcli_tcp -- common/autotest_common.sh@835 -- # local max_retries=100 00:06:27.814 16:14:54 spdkcli_tcp -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:27.814 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:27.814 16:14:54 spdkcli_tcp -- common/autotest_common.sh@839 -- # xtrace_disable 00:06:27.814 16:14:54 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:28.074 [2024-06-07 16:14:54.712035] Starting SPDK v24.09-pre git sha1 5a57befde / DPDK 24.03.0 initialization... 00:06:28.074 [2024-06-07 16:14:54.712086] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2887181 ] 00:06:28.074 EAL: No free 2048 kB hugepages reported on node 1 00:06:28.074 [2024-06-07 16:14:54.772957] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:28.074 [2024-06-07 16:14:54.839180] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:06:28.074 [2024-06-07 16:14:54.839181] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.644 16:14:55 spdkcli_tcp -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:06:28.644 16:14:55 spdkcli_tcp -- common/autotest_common.sh@863 -- # return 0 00:06:28.644 16:14:55 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=2887504 00:06:28.644 16:14:55 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:06:28.644 16:14:55 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:06:28.905 [ 00:06:28.905 "bdev_malloc_delete", 00:06:28.905 "bdev_malloc_create", 00:06:28.905 "bdev_null_resize", 00:06:28.905 "bdev_null_delete", 00:06:28.905 "bdev_null_create", 00:06:28.905 "bdev_nvme_cuse_unregister", 00:06:28.905 "bdev_nvme_cuse_register", 00:06:28.905 "bdev_opal_new_user", 00:06:28.905 "bdev_opal_set_lock_state", 00:06:28.905 "bdev_opal_delete", 00:06:28.905 "bdev_opal_get_info", 00:06:28.905 "bdev_opal_create", 00:06:28.905 "bdev_nvme_opal_revert", 00:06:28.905 "bdev_nvme_opal_init", 00:06:28.905 
"bdev_nvme_send_cmd", 00:06:28.905 "bdev_nvme_get_path_iostat", 00:06:28.905 "bdev_nvme_get_mdns_discovery_info", 00:06:28.905 "bdev_nvme_stop_mdns_discovery", 00:06:28.905 "bdev_nvme_start_mdns_discovery", 00:06:28.905 "bdev_nvme_set_multipath_policy", 00:06:28.905 "bdev_nvme_set_preferred_path", 00:06:28.905 "bdev_nvme_get_io_paths", 00:06:28.905 "bdev_nvme_remove_error_injection", 00:06:28.905 "bdev_nvme_add_error_injection", 00:06:28.905 "bdev_nvme_get_discovery_info", 00:06:28.905 "bdev_nvme_stop_discovery", 00:06:28.905 "bdev_nvme_start_discovery", 00:06:28.905 "bdev_nvme_get_controller_health_info", 00:06:28.905 "bdev_nvme_disable_controller", 00:06:28.905 "bdev_nvme_enable_controller", 00:06:28.905 "bdev_nvme_reset_controller", 00:06:28.905 "bdev_nvme_get_transport_statistics", 00:06:28.905 "bdev_nvme_apply_firmware", 00:06:28.905 "bdev_nvme_detach_controller", 00:06:28.905 "bdev_nvme_get_controllers", 00:06:28.905 "bdev_nvme_attach_controller", 00:06:28.905 "bdev_nvme_set_hotplug", 00:06:28.905 "bdev_nvme_set_options", 00:06:28.905 "bdev_passthru_delete", 00:06:28.905 "bdev_passthru_create", 00:06:28.905 "bdev_lvol_set_parent_bdev", 00:06:28.905 "bdev_lvol_set_parent", 00:06:28.905 "bdev_lvol_check_shallow_copy", 00:06:28.905 "bdev_lvol_start_shallow_copy", 00:06:28.905 "bdev_lvol_grow_lvstore", 00:06:28.905 "bdev_lvol_get_lvols", 00:06:28.905 "bdev_lvol_get_lvstores", 00:06:28.905 "bdev_lvol_delete", 00:06:28.905 "bdev_lvol_set_read_only", 00:06:28.905 "bdev_lvol_resize", 00:06:28.905 "bdev_lvol_decouple_parent", 00:06:28.905 "bdev_lvol_inflate", 00:06:28.905 "bdev_lvol_rename", 00:06:28.905 "bdev_lvol_clone_bdev", 00:06:28.905 "bdev_lvol_clone", 00:06:28.905 "bdev_lvol_snapshot", 00:06:28.905 "bdev_lvol_create", 00:06:28.905 "bdev_lvol_delete_lvstore", 00:06:28.905 "bdev_lvol_rename_lvstore", 00:06:28.905 "bdev_lvol_create_lvstore", 00:06:28.905 "bdev_raid_set_options", 00:06:28.905 "bdev_raid_remove_base_bdev", 00:06:28.905 "bdev_raid_add_base_bdev", 
00:06:28.905 "bdev_raid_delete", 00:06:28.905 "bdev_raid_create", 00:06:28.905 "bdev_raid_get_bdevs", 00:06:28.905 "bdev_error_inject_error", 00:06:28.905 "bdev_error_delete", 00:06:28.905 "bdev_error_create", 00:06:28.905 "bdev_split_delete", 00:06:28.905 "bdev_split_create", 00:06:28.905 "bdev_delay_delete", 00:06:28.905 "bdev_delay_create", 00:06:28.905 "bdev_delay_update_latency", 00:06:28.905 "bdev_zone_block_delete", 00:06:28.905 "bdev_zone_block_create", 00:06:28.905 "blobfs_create", 00:06:28.905 "blobfs_detect", 00:06:28.905 "blobfs_set_cache_size", 00:06:28.905 "bdev_aio_delete", 00:06:28.905 "bdev_aio_rescan", 00:06:28.905 "bdev_aio_create", 00:06:28.905 "bdev_ftl_set_property", 00:06:28.905 "bdev_ftl_get_properties", 00:06:28.905 "bdev_ftl_get_stats", 00:06:28.905 "bdev_ftl_unmap", 00:06:28.905 "bdev_ftl_unload", 00:06:28.905 "bdev_ftl_delete", 00:06:28.905 "bdev_ftl_load", 00:06:28.905 "bdev_ftl_create", 00:06:28.905 "bdev_virtio_attach_controller", 00:06:28.905 "bdev_virtio_scsi_get_devices", 00:06:28.905 "bdev_virtio_detach_controller", 00:06:28.905 "bdev_virtio_blk_set_hotplug", 00:06:28.905 "bdev_iscsi_delete", 00:06:28.905 "bdev_iscsi_create", 00:06:28.905 "bdev_iscsi_set_options", 00:06:28.905 "accel_error_inject_error", 00:06:28.905 "ioat_scan_accel_module", 00:06:28.905 "dsa_scan_accel_module", 00:06:28.905 "iaa_scan_accel_module", 00:06:28.905 "vfu_virtio_create_scsi_endpoint", 00:06:28.905 "vfu_virtio_scsi_remove_target", 00:06:28.905 "vfu_virtio_scsi_add_target", 00:06:28.905 "vfu_virtio_create_blk_endpoint", 00:06:28.905 "vfu_virtio_delete_endpoint", 00:06:28.905 "keyring_file_remove_key", 00:06:28.905 "keyring_file_add_key", 00:06:28.905 "keyring_linux_set_options", 00:06:28.905 "iscsi_get_histogram", 00:06:28.905 "iscsi_enable_histogram", 00:06:28.905 "iscsi_set_options", 00:06:28.905 "iscsi_get_auth_groups", 00:06:28.905 "iscsi_auth_group_remove_secret", 00:06:28.905 "iscsi_auth_group_add_secret", 00:06:28.905 "iscsi_delete_auth_group", 
00:06:28.905 "iscsi_create_auth_group", 00:06:28.905 "iscsi_set_discovery_auth", 00:06:28.905 "iscsi_get_options", 00:06:28.905 "iscsi_target_node_request_logout", 00:06:28.905 "iscsi_target_node_set_redirect", 00:06:28.905 "iscsi_target_node_set_auth", 00:06:28.905 "iscsi_target_node_add_lun", 00:06:28.905 "iscsi_get_stats", 00:06:28.905 "iscsi_get_connections", 00:06:28.905 "iscsi_portal_group_set_auth", 00:06:28.905 "iscsi_start_portal_group", 00:06:28.905 "iscsi_delete_portal_group", 00:06:28.905 "iscsi_create_portal_group", 00:06:28.905 "iscsi_get_portal_groups", 00:06:28.905 "iscsi_delete_target_node", 00:06:28.905 "iscsi_target_node_remove_pg_ig_maps", 00:06:28.905 "iscsi_target_node_add_pg_ig_maps", 00:06:28.905 "iscsi_create_target_node", 00:06:28.905 "iscsi_get_target_nodes", 00:06:28.905 "iscsi_delete_initiator_group", 00:06:28.905 "iscsi_initiator_group_remove_initiators", 00:06:28.905 "iscsi_initiator_group_add_initiators", 00:06:28.905 "iscsi_create_initiator_group", 00:06:28.905 "iscsi_get_initiator_groups", 00:06:28.905 "nvmf_set_crdt", 00:06:28.905 "nvmf_set_config", 00:06:28.905 "nvmf_set_max_subsystems", 00:06:28.905 "nvmf_stop_mdns_prr", 00:06:28.905 "nvmf_publish_mdns_prr", 00:06:28.905 "nvmf_subsystem_get_listeners", 00:06:28.905 "nvmf_subsystem_get_qpairs", 00:06:28.905 "nvmf_subsystem_get_controllers", 00:06:28.905 "nvmf_get_stats", 00:06:28.905 "nvmf_get_transports", 00:06:28.905 "nvmf_create_transport", 00:06:28.905 "nvmf_get_targets", 00:06:28.905 "nvmf_delete_target", 00:06:28.905 "nvmf_create_target", 00:06:28.905 "nvmf_subsystem_allow_any_host", 00:06:28.905 "nvmf_subsystem_remove_host", 00:06:28.905 "nvmf_subsystem_add_host", 00:06:28.905 "nvmf_ns_remove_host", 00:06:28.905 "nvmf_ns_add_host", 00:06:28.905 "nvmf_subsystem_remove_ns", 00:06:28.905 "nvmf_subsystem_add_ns", 00:06:28.905 "nvmf_subsystem_listener_set_ana_state", 00:06:28.905 "nvmf_discovery_get_referrals", 00:06:28.905 "nvmf_discovery_remove_referral", 00:06:28.905 
"nvmf_discovery_add_referral", 00:06:28.905 "nvmf_subsystem_remove_listener", 00:06:28.905 "nvmf_subsystem_add_listener", 00:06:28.905 "nvmf_delete_subsystem", 00:06:28.905 "nvmf_create_subsystem", 00:06:28.905 "nvmf_get_subsystems", 00:06:28.905 "env_dpdk_get_mem_stats", 00:06:28.905 "nbd_get_disks", 00:06:28.905 "nbd_stop_disk", 00:06:28.905 "nbd_start_disk", 00:06:28.905 "ublk_recover_disk", 00:06:28.905 "ublk_get_disks", 00:06:28.905 "ublk_stop_disk", 00:06:28.905 "ublk_start_disk", 00:06:28.905 "ublk_destroy_target", 00:06:28.905 "ublk_create_target", 00:06:28.905 "virtio_blk_create_transport", 00:06:28.905 "virtio_blk_get_transports", 00:06:28.905 "vhost_controller_set_coalescing", 00:06:28.905 "vhost_get_controllers", 00:06:28.905 "vhost_delete_controller", 00:06:28.905 "vhost_create_blk_controller", 00:06:28.905 "vhost_scsi_controller_remove_target", 00:06:28.905 "vhost_scsi_controller_add_target", 00:06:28.905 "vhost_start_scsi_controller", 00:06:28.905 "vhost_create_scsi_controller", 00:06:28.905 "thread_set_cpumask", 00:06:28.905 "framework_get_scheduler", 00:06:28.905 "framework_set_scheduler", 00:06:28.905 "framework_get_reactors", 00:06:28.905 "thread_get_io_channels", 00:06:28.905 "thread_get_pollers", 00:06:28.905 "thread_get_stats", 00:06:28.905 "framework_monitor_context_switch", 00:06:28.905 "spdk_kill_instance", 00:06:28.905 "log_enable_timestamps", 00:06:28.905 "log_get_flags", 00:06:28.905 "log_clear_flag", 00:06:28.905 "log_set_flag", 00:06:28.905 "log_get_level", 00:06:28.905 "log_set_level", 00:06:28.905 "log_get_print_level", 00:06:28.906 "log_set_print_level", 00:06:28.906 "framework_enable_cpumask_locks", 00:06:28.906 "framework_disable_cpumask_locks", 00:06:28.906 "framework_wait_init", 00:06:28.906 "framework_start_init", 00:06:28.906 "scsi_get_devices", 00:06:28.906 "bdev_get_histogram", 00:06:28.906 "bdev_enable_histogram", 00:06:28.906 "bdev_set_qos_limit", 00:06:28.906 "bdev_set_qd_sampling_period", 00:06:28.906 "bdev_get_bdevs", 
00:06:28.906 "bdev_reset_iostat", 00:06:28.906 "bdev_get_iostat", 00:06:28.906 "bdev_examine", 00:06:28.906 "bdev_wait_for_examine", 00:06:28.906 "bdev_set_options", 00:06:28.906 "notify_get_notifications", 00:06:28.906 "notify_get_types", 00:06:28.906 "accel_get_stats", 00:06:28.906 "accel_set_options", 00:06:28.906 "accel_set_driver", 00:06:28.906 "accel_crypto_key_destroy", 00:06:28.906 "accel_crypto_keys_get", 00:06:28.906 "accel_crypto_key_create", 00:06:28.906 "accel_assign_opc", 00:06:28.906 "accel_get_module_info", 00:06:28.906 "accel_get_opc_assignments", 00:06:28.906 "vmd_rescan", 00:06:28.906 "vmd_remove_device", 00:06:28.906 "vmd_enable", 00:06:28.906 "sock_get_default_impl", 00:06:28.906 "sock_set_default_impl", 00:06:28.906 "sock_impl_set_options", 00:06:28.906 "sock_impl_get_options", 00:06:28.906 "iobuf_get_stats", 00:06:28.906 "iobuf_set_options", 00:06:28.906 "keyring_get_keys", 00:06:28.906 "framework_get_pci_devices", 00:06:28.906 "framework_get_config", 00:06:28.906 "framework_get_subsystems", 00:06:28.906 "vfu_tgt_set_base_path", 00:06:28.906 "trace_get_info", 00:06:28.906 "trace_get_tpoint_group_mask", 00:06:28.906 "trace_disable_tpoint_group", 00:06:28.906 "trace_enable_tpoint_group", 00:06:28.906 "trace_clear_tpoint_mask", 00:06:28.906 "trace_set_tpoint_mask", 00:06:28.906 "spdk_get_version", 00:06:28.906 "rpc_get_methods" 00:06:28.906 ] 00:06:28.906 16:14:55 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:06:28.906 16:14:55 spdkcli_tcp -- common/autotest_common.sh@729 -- # xtrace_disable 00:06:28.906 16:14:55 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:28.906 16:14:55 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:06:28.906 16:14:55 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 2887181 00:06:28.906 16:14:55 spdkcli_tcp -- common/autotest_common.sh@949 -- # '[' -z 2887181 ']' 00:06:28.906 16:14:55 spdkcli_tcp -- common/autotest_common.sh@953 -- # kill -0 2887181 00:06:28.906 
16:14:55 spdkcli_tcp -- common/autotest_common.sh@954 -- # uname 00:06:28.906 16:14:55 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:06:28.906 16:14:55 spdkcli_tcp -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2887181 00:06:28.906 16:14:55 spdkcli_tcp -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:06:28.906 16:14:55 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:06:28.906 16:14:55 spdkcli_tcp -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2887181' 00:06:28.906 killing process with pid 2887181 00:06:28.906 16:14:55 spdkcli_tcp -- common/autotest_common.sh@968 -- # kill 2887181 00:06:28.906 16:14:55 spdkcli_tcp -- common/autotest_common.sh@973 -- # wait 2887181 00:06:29.166 00:06:29.166 real 0m1.391s 00:06:29.166 user 0m2.566s 00:06:29.166 sys 0m0.392s 00:06:29.166 16:14:55 spdkcli_tcp -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:29.166 16:14:55 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:29.166 ************************************ 00:06:29.166 END TEST spdkcli_tcp 00:06:29.166 ************************************ 00:06:29.166 16:14:55 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:29.166 16:14:55 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:06:29.166 16:14:55 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:29.166 16:14:55 -- common/autotest_common.sh@10 -- # set +x 00:06:29.166 ************************************ 00:06:29.166 START TEST dpdk_mem_utility 00:06:29.166 ************************************ 00:06:29.166 16:14:56 dpdk_mem_utility -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:29.426 * Looking for test storage... 
00:06:29.426 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:06:29.426 16:14:56 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:29.426 16:14:56 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=2887581 00:06:29.426 16:14:56 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 2887581 00:06:29.426 16:14:56 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:29.426 16:14:56 dpdk_mem_utility -- common/autotest_common.sh@830 -- # '[' -z 2887581 ']' 00:06:29.426 16:14:56 dpdk_mem_utility -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:29.426 16:14:56 dpdk_mem_utility -- common/autotest_common.sh@835 -- # local max_retries=100 00:06:29.426 16:14:56 dpdk_mem_utility -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:29.426 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:29.426 16:14:56 dpdk_mem_utility -- common/autotest_common.sh@839 -- # xtrace_disable 00:06:29.426 16:14:56 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:29.426 [2024-06-07 16:14:56.156669] Starting SPDK v24.09-pre git sha1 5a57befde / DPDK 24.03.0 initialization... 
00:06:29.426 [2024-06-07 16:14:56.156739] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2887581 ] 00:06:29.426 EAL: No free 2048 kB hugepages reported on node 1 00:06:29.426 [2024-06-07 16:14:56.220458] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:29.713 [2024-06-07 16:14:56.295378] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.283 16:14:56 dpdk_mem_utility -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:06:30.283 16:14:56 dpdk_mem_utility -- common/autotest_common.sh@863 -- # return 0 00:06:30.283 16:14:56 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:06:30.283 16:14:56 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:06:30.283 16:14:56 dpdk_mem_utility -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:30.283 16:14:56 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:30.283 { 00:06:30.283 "filename": "/tmp/spdk_mem_dump.txt" 00:06:30.283 } 00:06:30.283 16:14:56 dpdk_mem_utility -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:30.283 16:14:56 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:30.283 DPDK memory size 814.000000 MiB in 1 heap(s) 00:06:30.283 1 heaps totaling size 814.000000 MiB 00:06:30.283 size: 814.000000 MiB heap id: 0 00:06:30.283 end heaps---------- 00:06:30.283 8 mempools totaling size 598.116089 MiB 00:06:30.283 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:06:30.283 size: 158.602051 MiB name: PDU_data_out_Pool 00:06:30.283 size: 84.521057 MiB name: bdev_io_2887581 00:06:30.283 size: 51.011292 MiB name: evtpool_2887581 00:06:30.283 size: 50.003479 
MiB name: msgpool_2887581 00:06:30.283 size: 21.763794 MiB name: PDU_Pool 00:06:30.283 size: 19.513306 MiB name: SCSI_TASK_Pool 00:06:30.283 size: 0.026123 MiB name: Session_Pool 00:06:30.283 end mempools------- 00:06:30.283 6 memzones totaling size 4.142822 MiB 00:06:30.283 size: 1.000366 MiB name: RG_ring_0_2887581 00:06:30.283 size: 1.000366 MiB name: RG_ring_1_2887581 00:06:30.283 size: 1.000366 MiB name: RG_ring_4_2887581 00:06:30.283 size: 1.000366 MiB name: RG_ring_5_2887581 00:06:30.283 size: 0.125366 MiB name: RG_ring_2_2887581 00:06:30.283 size: 0.015991 MiB name: RG_ring_3_2887581 00:06:30.283 end memzones------- 00:06:30.283 16:14:56 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:06:30.283 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:06:30.283 list of free elements. size: 12.519348 MiB 00:06:30.283 element at address: 0x200000400000 with size: 1.999512 MiB 00:06:30.283 element at address: 0x200018e00000 with size: 0.999878 MiB 00:06:30.283 element at address: 0x200019000000 with size: 0.999878 MiB 00:06:30.283 element at address: 0x200003e00000 with size: 0.996277 MiB 00:06:30.283 element at address: 0x200031c00000 with size: 0.994446 MiB 00:06:30.283 element at address: 0x200013800000 with size: 0.978699 MiB 00:06:30.283 element at address: 0x200007000000 with size: 0.959839 MiB 00:06:30.283 element at address: 0x200019200000 with size: 0.936584 MiB 00:06:30.283 element at address: 0x200000200000 with size: 0.841614 MiB 00:06:30.283 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:06:30.283 element at address: 0x20000b200000 with size: 0.490723 MiB 00:06:30.283 element at address: 0x200000800000 with size: 0.487793 MiB 00:06:30.283 element at address: 0x200019400000 with size: 0.485657 MiB 00:06:30.283 element at address: 0x200027e00000 with size: 0.410034 MiB 00:06:30.283 element at 
address: 0x200003a00000 with size: 0.355530 MiB 00:06:30.283 list of standard malloc elements. size: 199.218079 MiB 00:06:30.283 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:06:30.283 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:06:30.283 element at address: 0x200018efff80 with size: 1.000122 MiB 00:06:30.283 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:06:30.283 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:06:30.283 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:06:30.284 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:06:30.284 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:06:30.284 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:06:30.284 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:06:30.284 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:06:30.284 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:06:30.284 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:06:30.284 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:06:30.284 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:06:30.284 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:06:30.284 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:06:30.284 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:06:30.284 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:06:30.284 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:06:30.284 element at address: 0x200003adb300 with size: 0.000183 MiB 00:06:30.284 element at address: 0x200003adb500 with size: 0.000183 MiB 00:06:30.284 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:06:30.284 element at address: 0x200003affa80 with size: 0.000183 MiB 00:06:30.284 element at address: 0x200003affb40 with size: 0.000183 MiB 00:06:30.284 element at address: 0x200003eff0c0 with size: 0.000183 MiB 
00:06:30.284 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:06:30.284 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:06:30.284 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:06:30.284 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:06:30.284 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:06:30.284 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:06:30.284 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:06:30.284 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:06:30.284 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:06:30.284 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:06:30.284 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:06:30.284 element at address: 0x200027e69040 with size: 0.000183 MiB 00:06:30.284 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:06:30.284 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:06:30.284 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:06:30.284 list of memzone associated elements. 
size: 602.262573 MiB
00:06:30.284 element at address: 0x20001aa95500 with size: 211.416748 MiB
00:06:30.284 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0
00:06:30.284 element at address: 0x200027e6ffc0 with size: 157.562561 MiB
00:06:30.284 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0
00:06:30.284 element at address: 0x2000139fab80 with size: 84.020630 MiB
00:06:30.284 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_2887581_0
00:06:30.284 element at address: 0x2000009ff380 with size: 48.003052 MiB
00:06:30.284 associated memzone info: size: 48.002930 MiB name: MP_evtpool_2887581_0
00:06:30.284 element at address: 0x200003fff380 with size: 48.003052 MiB
00:06:30.284 associated memzone info: size: 48.002930 MiB name: MP_msgpool_2887581_0
00:06:30.284 element at address: 0x2000195be940 with size: 20.255554 MiB
00:06:30.284 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0
00:06:30.284 element at address: 0x200031dfeb40 with size: 18.005066 MiB
00:06:30.284 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0
00:06:30.284 element at address: 0x2000005ffe00 with size: 2.000488 MiB
00:06:30.284 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_2887581
00:06:30.284 element at address: 0x200003bffe00 with size: 2.000488 MiB
00:06:30.284 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_2887581
00:06:30.284 element at address: 0x2000002d7d00 with size: 1.008118 MiB
00:06:30.284 associated memzone info: size: 1.007996 MiB name: MP_evtpool_2887581
00:06:30.284 element at address: 0x20000b2fde40 with size: 1.008118 MiB
00:06:30.284 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool
00:06:30.284 element at address: 0x2000194bc800 with size: 1.008118 MiB
00:06:30.284 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool
00:06:30.284 element at address: 0x2000070fde40 with size: 1.008118 MiB
00:06:30.284 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool
00:06:30.284 element at address: 0x2000008fd240 with size: 1.008118 MiB
00:06:30.284 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool
00:06:30.284 element at address: 0x200003eff180 with size: 1.000488 MiB
00:06:30.284 associated memzone info: size: 1.000366 MiB name: RG_ring_0_2887581
00:06:30.284 element at address: 0x200003affc00 with size: 1.000488 MiB
00:06:30.284 associated memzone info: size: 1.000366 MiB name: RG_ring_1_2887581
00:06:30.284 element at address: 0x2000138fa980 with size: 1.000488 MiB
00:06:30.284 associated memzone info: size: 1.000366 MiB name: RG_ring_4_2887581
00:06:30.284 element at address: 0x200031cfe940 with size: 1.000488 MiB
00:06:30.284 associated memzone info: size: 1.000366 MiB name: RG_ring_5_2887581
00:06:30.284 element at address: 0x200003a5b100 with size: 0.500488 MiB
00:06:30.284 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_2887581
00:06:30.284 element at address: 0x20000b27db80 with size: 0.500488 MiB
00:06:30.284 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool
00:06:30.284 element at address: 0x20000087cf80 with size: 0.500488 MiB
00:06:30.284 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool
00:06:30.284 element at address: 0x20001947c540 with size: 0.250488 MiB
00:06:30.284 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool
00:06:30.284 element at address: 0x200003adf880 with size: 0.125488 MiB
00:06:30.284 associated memzone info: size: 0.125366 MiB name: RG_ring_2_2887581
00:06:30.284 element at address: 0x2000070f5b80 with size: 0.031738 MiB
00:06:30.284 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool
00:06:30.284 element at address: 0x200027e69100 with size: 0.023743 MiB
00:06:30.284 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0
00:06:30.284 element at address: 0x200003adb5c0 with size: 0.016113 MiB
00:06:30.284 associated memzone info: size: 0.015991 MiB name: RG_ring_3_2887581
00:06:30.284 element at address: 0x200027e6f240 with size: 0.002441 MiB
00:06:30.284 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool
00:06:30.284 element at address: 0x2000002d7980 with size: 0.000305 MiB
00:06:30.284 associated memzone info: size: 0.000183 MiB name: MP_msgpool_2887581
00:06:30.284 element at address: 0x200003adb3c0 with size: 0.000305 MiB
00:06:30.284 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_2887581
00:06:30.284 element at address: 0x200027e6fd00 with size: 0.000305 MiB
00:06:30.284 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool
00:06:30.284 16:14:57 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT
00:06:30.284 16:14:57 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 2887581
00:06:30.284 16:14:57 dpdk_mem_utility -- common/autotest_common.sh@949 -- # '[' -z 2887581 ']'
00:06:30.284 16:14:57 dpdk_mem_utility -- common/autotest_common.sh@953 -- # kill -0 2887581
00:06:30.284 16:14:57 dpdk_mem_utility -- common/autotest_common.sh@954 -- # uname
00:06:30.284 16:14:57 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']'
00:06:30.284 16:14:57 dpdk_mem_utility -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2887581
00:06:30.284 16:14:57 dpdk_mem_utility -- common/autotest_common.sh@955 -- # process_name=reactor_0
00:06:30.284 16:14:57 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']'
00:06:30.284 16:14:57 dpdk_mem_utility -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2887581' killing process with pid 2887581 16:14:57 dpdk_mem_utility -- common/autotest_common.sh@968 -- # kill 2887581 16:14:57 dpdk_mem_utility -- common/autotest_common.sh@973 -- # wait 2887581
00:06:30.545
00:06:30.545 real 0m1.289s
00:06:30.545 user 0m1.364s
00:06:30.545 sys 0m0.378s
00:06:30.545 16:14:57 dpdk_mem_utility -- common/autotest_common.sh@1125 -- # xtrace_disable
00:06:30.545 16:14:57 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x
00:06:30.545 ************************************
00:06:30.545 END TEST dpdk_mem_utility
00:06:30.545 ************************************
00:06:30.545 16:14:57 -- spdk/autotest.sh@181 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh
00:06:30.545 16:14:57 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']'
00:06:30.545 16:14:57 -- common/autotest_common.sh@1106 -- # xtrace_disable
00:06:30.545 16:14:57 -- common/autotest_common.sh@10 -- # set +x
00:06:30.545 ************************************
00:06:30.545 START TEST event
00:06:30.545 ************************************
00:06:30.545 16:14:57 event -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh
00:06:30.805 * Looking for test storage...
00:06:30.805 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event
00:06:30.805 16:14:57 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh
00:06:30.805 16:14:57 event -- bdev/nbd_common.sh@6 -- # set -e
00:06:30.805 16:14:57 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1
00:06:30.805 16:14:57 event -- common/autotest_common.sh@1100 -- # '[' 6 -le 1 ']'
00:06:30.805 16:14:57 event -- common/autotest_common.sh@1106 -- # xtrace_disable
00:06:30.805 16:14:57 event -- common/autotest_common.sh@10 -- # set +x
00:06:30.805 ************************************
00:06:30.805 START TEST event_perf
00:06:30.805 ************************************
00:06:30.805 16:14:57 event.event_perf -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1
00:06:30.805 Running I/O for 1 seconds...[2024-06-07 16:14:57.520546] Starting SPDK v24.09-pre git sha1 5a57befde / DPDK 24.03.0 initialization...
00:06:30.805 [2024-06-07 16:14:57.520614] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2887970 ]
00:06:30.805 EAL: No free 2048 kB hugepages reported on node 1
00:06:30.805 [2024-06-07 16:14:57.586314] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4
00:06:31.065 [2024-06-07 16:14:57.662718] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1
00:06:31.065 [2024-06-07 16:14:57.662835] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 2
00:06:31.065 [2024-06-07 16:14:57.662998] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0
00:06:31.065 Running I/O for 1 seconds...[2024-06-07 16:14:57.662998] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 3
00:06:32.007
00:06:32.007 lcore 0: 168101
00:06:32.007 lcore 1: 168102
00:06:32.007 lcore 2: 168102
00:06:32.007 lcore 3: 168105
00:06:32.007 done.
00:06:32.007
00:06:32.007 real 0m1.216s
00:06:32.007 user 0m4.137s
00:06:32.007 sys 0m0.075s
00:06:32.007 16:14:58 event.event_perf -- common/autotest_common.sh@1125 -- # xtrace_disable
00:06:32.007 16:14:58 event.event_perf -- common/autotest_common.sh@10 -- # set +x
00:06:32.007 ************************************
00:06:32.007 END TEST event_perf
00:06:32.007 ************************************
00:06:32.007 16:14:58 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1
00:06:32.007 16:14:58 event -- common/autotest_common.sh@1100 -- # '[' 4 -le 1 ']'
00:06:32.007 16:14:58 event -- common/autotest_common.sh@1106 -- # xtrace_disable
00:06:32.007 16:14:58 event -- common/autotest_common.sh@10 -- # set +x
00:06:32.007 ************************************
00:06:32.007 START TEST event_reactor
00:06:32.007 ************************************
00:06:32.007 16:14:58 event.event_reactor -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1
00:06:32.007 [2024-06-07 16:14:58.812796] Starting SPDK v24.09-pre git sha1 5a57befde / DPDK 24.03.0 initialization...
00:06:32.007 [2024-06-07 16:14:58.812898] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2888322 ]
00:06:32.007 EAL: No free 2048 kB hugepages reported on node 1
00:06:32.267 [2024-06-07 16:14:58.877233] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:32.267 [2024-06-07 16:14:58.946939] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0
00:06:33.209 test_start
00:06:33.209 oneshot
00:06:33.209 tick 100
00:06:33.209 tick 100
00:06:33.209 tick 250
00:06:33.209 tick 100
00:06:33.209 tick 100
00:06:33.209 tick 250
00:06:33.209 tick 100
00:06:33.209 tick 500
00:06:33.209 tick 100
00:06:33.209 tick 100
00:06:33.209 tick 250
00:06:33.209 tick 100
00:06:33.209 tick 100
00:06:33.209 test_end
00:06:33.209
00:06:33.209 real 0m1.207s
00:06:33.209 user 0m1.130s
00:06:33.209 sys 0m0.073s
00:06:33.209 16:14:59 event.event_reactor -- common/autotest_common.sh@1125 -- # xtrace_disable
00:06:33.209 16:14:59 event.event_reactor -- common/autotest_common.sh@10 -- # set +x
00:06:33.209 ************************************
00:06:33.209 END TEST event_reactor
00:06:33.209 ************************************
00:06:33.209 16:15:00 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1
00:06:33.209 16:15:00 event -- common/autotest_common.sh@1100 -- # '[' 4 -le 1 ']'
00:06:33.209 16:15:00 event -- common/autotest_common.sh@1106 -- # xtrace_disable
00:06:33.209 16:15:00 event -- common/autotest_common.sh@10 -- # set +x
00:06:33.469 ************************************
00:06:33.469 START TEST event_reactor_perf
00:06:33.469 ************************************
00:06:33.469 16:15:00 event.event_reactor_perf -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1
00:06:33.469 [2024-06-07 16:15:00.094468] Starting SPDK v24.09-pre git sha1 5a57befde / DPDK 24.03.0 initialization...
00:06:33.469 [2024-06-07 16:15:00.094568] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2888551 ]
00:06:33.469 EAL: No free 2048 kB hugepages reported on node 1
00:06:33.469 [2024-06-07 16:15:00.158132] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:33.469 [2024-06-07 16:15:00.224969] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0
00:06:34.853 test_start
00:06:34.853 test_end
00:06:34.853 Performance: 364843 events per second
00:06:34.853
00:06:34.853 real 0m1.207s
00:06:34.853 user 0m1.137s
00:06:34.853 sys 0m0.065s
00:06:34.853 16:15:01 event.event_reactor_perf -- common/autotest_common.sh@1125 -- # xtrace_disable
00:06:34.853 16:15:01 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x
00:06:34.853 ************************************
00:06:34.853 END TEST event_reactor_perf
00:06:34.853 ************************************
00:06:34.853 16:15:01 event -- event/event.sh@49 -- # uname -s
00:06:34.853 16:15:01 event -- event/event.sh@49 -- # '[' Linux = Linux ']'
00:06:34.853 16:15:01 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh
00:06:34.853 16:15:01 event -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']'
00:06:34.853 16:15:01 event -- common/autotest_common.sh@1106 -- # xtrace_disable
00:06:34.853 16:15:01 event -- common/autotest_common.sh@10 -- # set +x
00:06:34.853 ************************************
00:06:34.853 START TEST event_scheduler
00:06:34.853 ************************************
00:06:34.853 16:15:01 event.event_scheduler -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh
00:06:34.853 * Looking for test storage...
00:06:34.853 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler
00:06:34.853 16:15:01 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd
00:06:34.853 16:15:01 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=2888843
00:06:34.853 16:15:01 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT
00:06:34.853 16:15:01 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f
00:06:34.853 16:15:01 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 2888843
00:06:34.853 16:15:01 event.event_scheduler -- common/autotest_common.sh@830 -- # '[' -z 2888843 ']'
00:06:34.853 16:15:01 event.event_scheduler -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:34.853 16:15:01 event.event_scheduler -- common/autotest_common.sh@835 -- # local max_retries=100
00:06:34.853 16:15:01 event.event_scheduler -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 16:15:01 event.event_scheduler -- common/autotest_common.sh@839 -- # xtrace_disable 16:15:01 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:06:34.853 [2024-06-07 16:15:01.490008] Starting SPDK v24.09-pre git sha1 5a57befde / DPDK 24.03.0 initialization...
00:06:34.853 [2024-06-07 16:15:01.490077] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2888843 ]
00:06:34.853 EAL: No free 2048 kB hugepages reported on node 1
00:06:34.853 [2024-06-07 16:15:01.550008] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4
00:06:34.853 [2024-06-07 16:15:01.617068] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0
00:06:34.853 [2024-06-07 16:15:01.617192] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1
00:06:34.853 [2024-06-07 16:15:01.617312] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 3
00:06:34.853 [2024-06-07 16:15:01.617314] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 2
00:06:35.425 16:15:02 event.event_scheduler -- common/autotest_common.sh@859 -- # (( i == 0 ))
00:06:35.425 16:15:02 event.event_scheduler -- common/autotest_common.sh@863 -- # return 0
00:06:35.425 16:15:02 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic
00:06:35.425 16:15:02 event.event_scheduler -- common/autotest_common.sh@560 -- # xtrace_disable
00:06:35.425 16:15:02 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:06:35.686 POWER: Env isn't set yet!
00:06:35.686 POWER: Attempting to initialise ACPI cpufreq power management...
00:06:35.686 POWER: Failed to write /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor
00:06:35.686 POWER: Cannot set governor of lcore 0 to userspace
00:06:35.686 POWER: Attempting to initialise PSTAT power management...
00:06:35.686 POWER: Power management governor of lcore 0 has been set to 'performance' successfully
00:06:35.686 POWER: Initialized successfully for lcore 0 power management
00:06:35.686 POWER: Power management governor of lcore 1 has been set to 'performance' successfully
00:06:35.686 POWER: Initialized successfully for lcore 1 power management
00:06:35.686 POWER: Power management governor of lcore 2 has been set to 'performance' successfully
00:06:35.686 POWER: Initialized successfully for lcore 2 power management
00:06:35.686 POWER: Power management governor of lcore 3 has been set to 'performance' successfully
00:06:35.686 POWER: Initialized successfully for lcore 3 power management
00:06:35.686 [2024-06-07 16:15:02.332782] scheduler_dynamic.c: 382:set_opts: *NOTICE*: Setting scheduler load limit to 20
00:06:35.686 [2024-06-07 16:15:02.332794] scheduler_dynamic.c: 384:set_opts: *NOTICE*: Setting scheduler core limit to 80
00:06:35.686 [2024-06-07 16:15:02.332800] scheduler_dynamic.c: 386:set_opts: *NOTICE*: Setting scheduler core busy to 95
00:06:35.686 16:15:02 event.event_scheduler -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:06:35.686 16:15:02 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init
00:06:35.686 16:15:02 event.event_scheduler -- common/autotest_common.sh@560 -- # xtrace_disable
00:06:35.686 16:15:02 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:06:35.686 [2024-06-07 16:15:02.389695] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started.
00:06:35.686 16:15:02 event.event_scheduler -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:06:35.686 16:15:02 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread
00:06:35.686 16:15:02 event.event_scheduler -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']'
00:06:35.686 16:15:02 event.event_scheduler -- common/autotest_common.sh@1106 -- # xtrace_disable
00:06:35.686 16:15:02 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:06:35.686 ************************************
00:06:35.686 START TEST scheduler_create_thread
00:06:35.686 ************************************
00:06:35.686 16:15:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1124 -- # scheduler_create_thread
00:06:35.686 16:15:02 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
00:06:35.686 16:15:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable
00:06:35.686 16:15:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:06:35.686 2
00:06:35.686 16:15:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:06:35.686 16:15:02 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100
00:06:35.686 16:15:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable
00:06:35.686 16:15:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:06:35.686 3
00:06:35.686 16:15:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:06:35.686 16:15:02 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100
00:06:35.686 16:15:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable
00:06:35.686 16:15:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:06:35.686 4
00:06:35.686 16:15:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:06:35.686 16:15:02 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100
00:06:35.686 16:15:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable
00:06:35.686 16:15:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:06:35.686 5
00:06:35.686 16:15:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:06:35.686 16:15:02 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0
00:06:35.686 16:15:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable
00:06:35.686 16:15:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:06:35.686 6
00:06:35.686 16:15:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:06:35.687 16:15:02 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0
00:06:35.687 16:15:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable
00:06:35.687 16:15:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:06:35.687 7
00:06:35.687 16:15:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:06:35.687 16:15:02 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0
00:06:35.687 16:15:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable
00:06:35.687 16:15:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:06:35.687 8
00:06:35.687 16:15:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:06:35.687 16:15:02 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0
00:06:35.687 16:15:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable
00:06:35.687 16:15:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:06:37.115 9
00:06:37.115 16:15:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:06:37.115 16:15:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30
00:06:37.115 16:15:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable
00:06:37.115 16:15:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:06:38.057 10
00:06:38.057 16:15:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:06:38.057 16:15:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0
00:06:38.057 16:15:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable
00:06:38.057 16:15:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:06:38.999 16:15:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:06:38.999 16:15:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11
00:06:38.999 16:15:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50
00:06:38.999 16:15:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable
00:06:38.999 16:15:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:06:39.569 16:15:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:06:39.569 16:15:06 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100
00:06:39.569 16:15:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable
00:06:39.569 16:15:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:06:39.830 16:15:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:06:39.830 16:15:06 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12
00:06:39.830 16:15:06 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12
00:06:39.830 16:15:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable
00:06:39.830 16:15:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:06:40.402 16:15:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:06:40.402
00:06:40.402 real 0m4.727s
00:06:40.402 user 0m0.024s
00:06:40.402 sys 0m0.007s
00:06:40.402 16:15:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1125 -- # xtrace_disable
00:06:40.402 16:15:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:06:40.402 ************************************
00:06:40.402 END TEST scheduler_create_thread
00:06:40.402 ************************************
00:06:40.402 16:15:07 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT
00:06:40.402 16:15:07 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 2888843
00:06:40.402 16:15:07 event.event_scheduler -- common/autotest_common.sh@949 -- # '[' -z 2888843 ']'
00:06:40.402 16:15:07 event.event_scheduler -- common/autotest_common.sh@953 -- # kill -0 2888843
00:06:40.402 16:15:07 event.event_scheduler -- common/autotest_common.sh@954 -- # uname
00:06:40.402 16:15:07 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']'
00:06:40.402 16:15:07 event.event_scheduler -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2888843
00:06:40.402 16:15:07 event.event_scheduler -- common/autotest_common.sh@955 -- # process_name=reactor_2
00:06:40.402 16:15:07 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' reactor_2 = sudo ']'
00:06:40.402 16:15:07 event.event_scheduler -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2888843' killing process with pid 2888843 16:15:07 event.event_scheduler -- common/autotest_common.sh@968 -- # kill 2888843 16:15:07 event.event_scheduler -- common/autotest_common.sh@973 -- # wait 2888843
00:06:40.663 [2024-06-07 16:15:07.303271] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped.
00:06:40.663 POWER: Power management governor of lcore 0 has been set to 'powersave' successfully
00:06:40.663 POWER: Power management of lcore 0 has exited from 'performance' mode and been set back to the original
00:06:40.663 POWER: Power management governor of lcore 1 has been set to 'powersave' successfully
00:06:40.663 POWER: Power management of lcore 1 has exited from 'performance' mode and been set back to the original
00:06:40.663 POWER: Power management governor of lcore 2 has been set to 'powersave' successfully
00:06:40.663 POWER: Power management of lcore 2 has exited from 'performance' mode and been set back to the original
00:06:40.663 POWER: Power management governor of lcore 3 has been set to 'powersave' successfully
00:06:40.663 POWER: Power management of lcore 3 has exited from 'performance' mode and been set back to the original
00:06:40.663
00:06:40.663 real 0m6.162s
00:06:40.663 user 0m15.337s
00:06:40.663 sys 0m0.358s
00:06:40.663 16:15:07 event.event_scheduler -- common/autotest_common.sh@1125 -- # xtrace_disable
00:06:40.663 16:15:07 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:06:40.663 ************************************
00:06:40.663 END TEST event_scheduler
00:06:40.663 ************************************
00:06:40.924 16:15:07 event -- event/event.sh@51 -- # modprobe -n nbd
00:06:40.924 16:15:07 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test
00:06:40.924 16:15:07 event -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']'
00:06:40.924 16:15:07 event -- common/autotest_common.sh@1106 -- # xtrace_disable
00:06:40.924 16:15:07 event -- common/autotest_common.sh@10 -- # set +x
00:06:40.924 ************************************
00:06:40.924 START TEST app_repeat
00:06:40.924 ************************************
00:06:40.924 16:15:07 event.app_repeat -- common/autotest_common.sh@1124 -- # app_repeat_test
00:06:40.924 16:15:07 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:40.924 16:15:07 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:40.924 16:15:07 event.app_repeat -- event/event.sh@13 -- # local nbd_list
00:06:40.924 16:15:07 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1')
00:06:40.924 16:15:07 event.app_repeat -- event/event.sh@14 -- # local bdev_list
00:06:40.924 16:15:07 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4
00:06:40.924 16:15:07 event.app_repeat -- event/event.sh@17 -- # modprobe nbd
00:06:40.924 16:15:07 event.app_repeat -- event/event.sh@19 -- # repeat_pid=2890233
00:06:40.924 16:15:07 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT
00:06:40.924 16:15:07 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4
00:06:40.924 16:15:07 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 2890233'
00:06:40.924 Process app_repeat pid: 2890233
00:06:40.924 16:15:07 event.app_repeat -- event/event.sh@23 -- # for i in {0..2}
00:06:40.924 16:15:07 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0'
00:06:40.924 spdk_app_start Round 0
00:06:40.924 16:15:07 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2890233 /var/tmp/spdk-nbd.sock
00:06:40.924 16:15:07 event.app_repeat -- common/autotest_common.sh@830 -- # '[' -z 2890233 ']'
00:06:40.924 16:15:07 event.app_repeat -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:06:40.924 16:15:07 event.app_repeat -- common/autotest_common.sh@835 -- # local max_retries=100
00:06:40.924 16:15:07 event.app_repeat -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:06:40.924 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:06:40.924 16:15:07 event.app_repeat -- common/autotest_common.sh@839 -- # xtrace_disable
00:06:40.924 16:15:07 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:06:40.924 [2024-06-07 16:15:07.624924] Starting SPDK v24.09-pre git sha1 5a57befde / DPDK 24.03.0 initialization...
00:06:40.924 [2024-06-07 16:15:07.624993] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2890233 ]
00:06:40.924 EAL: No free 2048 kB hugepages reported on node 1
00:06:40.924 [2024-06-07 16:15:07.690800] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2
00:06:40.924 [2024-06-07 16:15:07.765197] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1
00:06:40.924 [2024-06-07 16:15:07.765199] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0
00:06:41.866 16:15:08 event.app_repeat -- common/autotest_common.sh@859 -- # (( i == 0 ))
00:06:41.866 16:15:08 event.app_repeat -- common/autotest_common.sh@863 -- # return 0
00:06:41.866 16:15:08 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:06:41.866 Malloc0
00:06:41.866 16:15:08 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:06:42.127 Malloc1
00:06:42.127 16:15:08 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:06:42.127 16:15:08 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:42.127 16:15:08 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1')
00:06:42.127 16:15:08 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list
00:06:42.127 16:15:08 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:42.127 16:15:08 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list
00:06:42.127 16:15:08 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:06:42.127 16:15:08 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:42.127 16:15:08 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1')
00:06:42.127 16:15:08 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list
00:06:42.127 16:15:08 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:42.127 16:15:08 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list
00:06:42.127 16:15:08 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i
00:06:42.127 16:15:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:06:42.127 16:15:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:06:42.127 16:15:08 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
00:06:42.127 /dev/nbd0
00:06:42.127 16:15:08 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:06:42.127 16:15:08 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:06:42.127 16:15:08 event.app_repeat -- common/autotest_common.sh@867 -- # local nbd_name=nbd0
00:06:42.127 16:15:08 event.app_repeat -- common/autotest_common.sh@868 -- # local i
00:06:42.127 16:15:08 event.app_repeat -- common/autotest_common.sh@870 -- # (( i = 1 ))
00:06:42.127 16:15:08 event.app_repeat -- common/autotest_common.sh@870 -- # (( i <= 20 ))
00:06:42.127 16:15:08 event.app_repeat -- common/autotest_common.sh@871 -- # grep -q -w nbd0 /proc/partitions
00:06:42.127 16:15:08 event.app_repeat -- common/autotest_common.sh@872 -- # break
00:06:42.127 16:15:08 event.app_repeat -- common/autotest_common.sh@883 -- # (( i = 1 ))
00:06:42.127 16:15:08 event.app_repeat -- common/autotest_common.sh@883 -- # (( i <= 20 ))
00:06:42.127 16:15:08 event.app_repeat -- common/autotest_common.sh@884 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:06:42.127 1+0 records in
00:06:42.127 1+0 records out
00:06:42.127 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000370958 s, 11.0 MB/s
00:06:42.127 16:15:08 event.app_repeat -- common/autotest_common.sh@885 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:06:42.127 16:15:08 event.app_repeat -- common/autotest_common.sh@885 -- # size=4096
00:06:42.127 16:15:08 event.app_repeat -- common/autotest_common.sh@886 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:06:42.127 16:15:08 event.app_repeat -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']'
00:06:42.127 16:15:08 event.app_repeat -- common/autotest_common.sh@888 -- # return 0
00:06:42.127 16:15:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:06:42.127 16:15:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:06:42.127 16:15:08 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1
00:06:42.387 /dev/nbd1
00:06:42.387 16:15:09 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:06:42.387 16:15:09 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:06:42.387 16:15:09 event.app_repeat -- common/autotest_common.sh@867 -- # local nbd_name=nbd1
00:06:42.387 16:15:09 event.app_repeat -- common/autotest_common.sh@868 -- # local i
00:06:42.387 16:15:09 event.app_repeat -- common/autotest_common.sh@870 -- # (( i = 
1 )) 00:06:42.387 16:15:09 event.app_repeat -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:06:42.387 16:15:09 event.app_repeat -- common/autotest_common.sh@871 -- # grep -q -w nbd1 /proc/partitions 00:06:42.387 16:15:09 event.app_repeat -- common/autotest_common.sh@872 -- # break 00:06:42.387 16:15:09 event.app_repeat -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:06:42.387 16:15:09 event.app_repeat -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:06:42.387 16:15:09 event.app_repeat -- common/autotest_common.sh@884 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:42.387 1+0 records in 00:06:42.387 1+0 records out 00:06:42.387 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000271777 s, 15.1 MB/s 00:06:42.388 16:15:09 event.app_repeat -- common/autotest_common.sh@885 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:42.388 16:15:09 event.app_repeat -- common/autotest_common.sh@885 -- # size=4096 00:06:42.388 16:15:09 event.app_repeat -- common/autotest_common.sh@886 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:42.388 16:15:09 event.app_repeat -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:06:42.388 16:15:09 event.app_repeat -- common/autotest_common.sh@888 -- # return 0 00:06:42.388 16:15:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:42.388 16:15:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:42.388 16:15:09 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:42.388 16:15:09 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:42.388 16:15:09 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:42.649 16:15:09 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:06:42.649 { 00:06:42.649 "nbd_device": "/dev/nbd0", 00:06:42.649 "bdev_name": "Malloc0" 00:06:42.649 }, 00:06:42.649 { 00:06:42.649 "nbd_device": "/dev/nbd1", 00:06:42.649 "bdev_name": "Malloc1" 00:06:42.649 } 00:06:42.649 ]' 00:06:42.649 16:15:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:42.649 { 00:06:42.649 "nbd_device": "/dev/nbd0", 00:06:42.649 "bdev_name": "Malloc0" 00:06:42.649 }, 00:06:42.649 { 00:06:42.649 "nbd_device": "/dev/nbd1", 00:06:42.649 "bdev_name": "Malloc1" 00:06:42.649 } 00:06:42.649 ]' 00:06:42.649 16:15:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:42.649 16:15:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:42.649 /dev/nbd1' 00:06:42.649 16:15:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:42.649 /dev/nbd1' 00:06:42.649 16:15:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:42.649 16:15:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:42.649 16:15:09 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:42.649 16:15:09 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:42.649 16:15:09 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:42.649 16:15:09 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:42.649 16:15:09 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:42.649 16:15:09 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:42.649 16:15:09 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:42.649 16:15:09 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:42.649 16:15:09 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:42.649 16:15:09 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd 
if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:42.649 256+0 records in 00:06:42.649 256+0 records out 00:06:42.649 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0115557 s, 90.7 MB/s 00:06:42.649 16:15:09 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:42.649 16:15:09 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:42.649 256+0 records in 00:06:42.649 256+0 records out 00:06:42.649 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0178713 s, 58.7 MB/s 00:06:42.649 16:15:09 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:42.649 16:15:09 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:42.649 256+0 records in 00:06:42.649 256+0 records out 00:06:42.649 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0208718 s, 50.2 MB/s 00:06:42.649 16:15:09 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:42.649 16:15:09 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:42.649 16:15:09 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:42.649 16:15:09 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:42.649 16:15:09 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:42.649 16:15:09 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:42.649 16:15:09 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:42.649 16:15:09 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:42.649 16:15:09 event.app_repeat -- bdev/nbd_common.sh@83 -- # 
cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:42.649 16:15:09 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:42.649 16:15:09 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:42.649 16:15:09 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:42.649 16:15:09 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:42.649 16:15:09 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:42.649 16:15:09 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:42.649 16:15:09 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:42.649 16:15:09 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:42.649 16:15:09 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:42.649 16:15:09 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:42.910 16:15:09 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:42.910 16:15:09 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:42.910 16:15:09 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:42.910 16:15:09 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:42.910 16:15:09 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:42.910 16:15:09 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:42.910 16:15:09 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:42.910 16:15:09 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:42.910 16:15:09 
event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:42.910 16:15:09 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:42.910 16:15:09 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:42.910 16:15:09 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:42.910 16:15:09 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:42.910 16:15:09 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:42.910 16:15:09 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:42.910 16:15:09 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:42.910 16:15:09 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:42.910 16:15:09 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:42.910 16:15:09 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:42.910 16:15:09 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:43.171 16:15:09 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:43.171 16:15:09 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:43.171 16:15:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:43.171 16:15:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:43.171 16:15:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:43.171 16:15:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:43.171 16:15:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:43.171 16:15:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:43.171 16:15:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 
00:06:43.171 16:15:09 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:43.171 16:15:09 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:43.171 16:15:09 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:43.171 16:15:09 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:43.171 16:15:09 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:43.433 16:15:10 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:43.694 [2024-06-07 16:15:10.325796] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:43.694 [2024-06-07 16:15:10.391872] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:06:43.694 [2024-06-07 16:15:10.391873] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.694 [2024-06-07 16:15:10.423520] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:43.694 [2024-06-07 16:15:10.423557] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:46.996 16:15:13 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:46.996 16:15:13 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:46.996 spdk_app_start Round 1 00:06:46.996 16:15:13 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2890233 /var/tmp/spdk-nbd.sock 00:06:46.996 16:15:13 event.app_repeat -- common/autotest_common.sh@830 -- # '[' -z 2890233 ']' 00:06:46.996 16:15:13 event.app_repeat -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:46.996 16:15:13 event.app_repeat -- common/autotest_common.sh@835 -- # local max_retries=100 00:06:46.996 16:15:13 event.app_repeat -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:06:46.996 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:46.996 16:15:13 event.app_repeat -- common/autotest_common.sh@839 -- # xtrace_disable 00:06:46.996 16:15:13 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:46.996 16:15:13 event.app_repeat -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:06:46.996 16:15:13 event.app_repeat -- common/autotest_common.sh@863 -- # return 0 00:06:46.996 16:15:13 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:46.996 Malloc0 00:06:46.996 16:15:13 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:46.996 Malloc1 00:06:46.996 16:15:13 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:46.996 16:15:13 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:46.996 16:15:13 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:46.996 16:15:13 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:46.996 16:15:13 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:46.996 16:15:13 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:46.996 16:15:13 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:46.996 16:15:13 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:46.996 16:15:13 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:46.996 16:15:13 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:46.996 16:15:13 event.app_repeat -- bdev/nbd_common.sh@11 
-- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:46.996 16:15:13 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:46.996 16:15:13 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:46.996 16:15:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:46.996 16:15:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:46.996 16:15:13 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:46.996 /dev/nbd0 00:06:46.996 16:15:13 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:46.996 16:15:13 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:46.996 16:15:13 event.app_repeat -- common/autotest_common.sh@867 -- # local nbd_name=nbd0 00:06:46.996 16:15:13 event.app_repeat -- common/autotest_common.sh@868 -- # local i 00:06:46.996 16:15:13 event.app_repeat -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:06:46.996 16:15:13 event.app_repeat -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:06:46.996 16:15:13 event.app_repeat -- common/autotest_common.sh@871 -- # grep -q -w nbd0 /proc/partitions 00:06:47.257 16:15:13 event.app_repeat -- common/autotest_common.sh@872 -- # break 00:06:47.257 16:15:13 event.app_repeat -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:06:47.257 16:15:13 event.app_repeat -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:06:47.257 16:15:13 event.app_repeat -- common/autotest_common.sh@884 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:47.257 1+0 records in 00:06:47.257 1+0 records out 00:06:47.257 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000344207 s, 11.9 MB/s 00:06:47.257 16:15:13 event.app_repeat -- common/autotest_common.sh@885 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:47.257 16:15:13 
event.app_repeat -- common/autotest_common.sh@885 -- # size=4096 00:06:47.257 16:15:13 event.app_repeat -- common/autotest_common.sh@886 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:47.257 16:15:13 event.app_repeat -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:06:47.257 16:15:13 event.app_repeat -- common/autotest_common.sh@888 -- # return 0 00:06:47.257 16:15:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:47.257 16:15:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:47.257 16:15:13 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:47.257 /dev/nbd1 00:06:47.257 16:15:14 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:47.257 16:15:14 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:47.257 16:15:14 event.app_repeat -- common/autotest_common.sh@867 -- # local nbd_name=nbd1 00:06:47.257 16:15:14 event.app_repeat -- common/autotest_common.sh@868 -- # local i 00:06:47.257 16:15:14 event.app_repeat -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:06:47.257 16:15:14 event.app_repeat -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:06:47.257 16:15:14 event.app_repeat -- common/autotest_common.sh@871 -- # grep -q -w nbd1 /proc/partitions 00:06:47.257 16:15:14 event.app_repeat -- common/autotest_common.sh@872 -- # break 00:06:47.257 16:15:14 event.app_repeat -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:06:47.257 16:15:14 event.app_repeat -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:06:47.257 16:15:14 event.app_repeat -- common/autotest_common.sh@884 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:47.257 1+0 records in 00:06:47.257 1+0 records out 00:06:47.258 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000280855 s, 
14.6 MB/s 00:06:47.258 16:15:14 event.app_repeat -- common/autotest_common.sh@885 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:47.258 16:15:14 event.app_repeat -- common/autotest_common.sh@885 -- # size=4096 00:06:47.258 16:15:14 event.app_repeat -- common/autotest_common.sh@886 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:47.258 16:15:14 event.app_repeat -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:06:47.258 16:15:14 event.app_repeat -- common/autotest_common.sh@888 -- # return 0 00:06:47.258 16:15:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:47.258 16:15:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:47.258 16:15:14 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:47.258 16:15:14 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:47.258 16:15:14 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:47.519 16:15:14 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:47.519 { 00:06:47.519 "nbd_device": "/dev/nbd0", 00:06:47.519 "bdev_name": "Malloc0" 00:06:47.519 }, 00:06:47.519 { 00:06:47.519 "nbd_device": "/dev/nbd1", 00:06:47.519 "bdev_name": "Malloc1" 00:06:47.519 } 00:06:47.519 ]' 00:06:47.519 16:15:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:47.519 16:15:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:47.519 { 00:06:47.519 "nbd_device": "/dev/nbd0", 00:06:47.519 "bdev_name": "Malloc0" 00:06:47.519 }, 00:06:47.519 { 00:06:47.519 "nbd_device": "/dev/nbd1", 00:06:47.519 "bdev_name": "Malloc1" 00:06:47.519 } 00:06:47.519 ]' 00:06:47.519 16:15:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:47.519 /dev/nbd1' 00:06:47.519 16:15:14 
event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:47.519 /dev/nbd1' 00:06:47.519 16:15:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:47.519 16:15:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:47.519 16:15:14 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:47.519 16:15:14 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:47.519 16:15:14 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:47.519 16:15:14 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:47.519 16:15:14 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:47.519 16:15:14 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:47.519 16:15:14 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:47.519 16:15:14 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:47.519 16:15:14 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:47.519 16:15:14 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:47.519 256+0 records in 00:06:47.519 256+0 records out 00:06:47.519 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0117484 s, 89.3 MB/s 00:06:47.519 16:15:14 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:47.519 16:15:14 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:47.519 256+0 records in 00:06:47.519 256+0 records out 00:06:47.519 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.01608 s, 65.2 MB/s 00:06:47.519 16:15:14 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:47.519 16:15:14 
event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:47.519 256+0 records in 00:06:47.519 256+0 records out 00:06:47.519 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0170674 s, 61.4 MB/s 00:06:47.519 16:15:14 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:47.519 16:15:14 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:47.520 16:15:14 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:47.520 16:15:14 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:47.520 16:15:14 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:47.520 16:15:14 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:47.520 16:15:14 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:47.520 16:15:14 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:47.520 16:15:14 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:47.520 16:15:14 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:47.520 16:15:14 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:47.520 16:15:14 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:47.520 16:15:14 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:47.520 16:15:14 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:47.520 16:15:14 event.app_repeat -- 
bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:47.520 16:15:14 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:47.520 16:15:14 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:47.520 16:15:14 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:47.520 16:15:14 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:47.781 16:15:14 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:47.781 16:15:14 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:47.781 16:15:14 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:47.781 16:15:14 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:47.781 16:15:14 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:47.781 16:15:14 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:47.781 16:15:14 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:47.781 16:15:14 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:47.781 16:15:14 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:47.781 16:15:14 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:48.042 16:15:14 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:48.042 16:15:14 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:48.042 16:15:14 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:48.042 16:15:14 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:48.042 16:15:14 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:48.042 16:15:14 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 
00:06:48.042 16:15:14 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:48.042 16:15:14 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:48.042 16:15:14 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:48.042 16:15:14 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:48.042 16:15:14 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:48.042 16:15:14 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:48.042 16:15:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:48.042 16:15:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:48.042 16:15:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:48.042 16:15:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:48.042 16:15:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:48.042 16:15:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:48.042 16:15:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:48.042 16:15:14 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:48.042 16:15:14 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:48.042 16:15:14 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:48.042 16:15:14 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:48.042 16:15:14 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:48.303 16:15:15 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:48.563 [2024-06-07 16:15:15.181760] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:48.563 [2024-06-07 16:15:15.245086] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 
00:06:48.563 [2024-06-07 16:15:15.245088] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.563 [2024-06-07 16:15:15.277732] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:48.563 [2024-06-07 16:15:15.277767] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:51.864 16:15:18 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:51.864 16:15:18 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:51.864 spdk_app_start Round 2 00:06:51.864 16:15:18 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2890233 /var/tmp/spdk-nbd.sock 00:06:51.864 16:15:18 event.app_repeat -- common/autotest_common.sh@830 -- # '[' -z 2890233 ']' 00:06:51.864 16:15:18 event.app_repeat -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:51.864 16:15:18 event.app_repeat -- common/autotest_common.sh@835 -- # local max_retries=100 00:06:51.864 16:15:18 event.app_repeat -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:51.864 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:06:51.864 16:15:18 event.app_repeat -- common/autotest_common.sh@839 -- # xtrace_disable
00:06:51.864 16:15:18 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:06:51.864 16:15:18 event.app_repeat -- common/autotest_common.sh@859 -- # (( i == 0 ))
00:06:51.864 16:15:18 event.app_repeat -- common/autotest_common.sh@863 -- # return 0
00:06:51.864 16:15:18 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:06:51.864 Malloc0
00:06:51.864 16:15:18 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:06:51.864 Malloc1
00:06:51.864 16:15:18 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:06:51.864 16:15:18 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:51.864 16:15:18 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1')
00:06:51.864 16:15:18 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list
00:06:51.864 16:15:18 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:51.864 16:15:18 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list
00:06:51.864 16:15:18 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:06:51.864 16:15:18 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:51.864 16:15:18 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1')
00:06:51.864 16:15:18 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list
00:06:51.864 16:15:18 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:51.864 16:15:18 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list
00:06:51.864 16:15:18 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i
00:06:51.864 16:15:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:06:51.864 16:15:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:06:51.864 16:15:18 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
00:06:51.864 /dev/nbd0
00:06:51.864 16:15:18 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:06:51.864 16:15:18 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:06:51.864 16:15:18 event.app_repeat -- common/autotest_common.sh@867 -- # local nbd_name=nbd0
00:06:51.864 16:15:18 event.app_repeat -- common/autotest_common.sh@868 -- # local i
00:06:51.864 16:15:18 event.app_repeat -- common/autotest_common.sh@870 -- # (( i = 1 ))
00:06:51.864 16:15:18 event.app_repeat -- common/autotest_common.sh@870 -- # (( i <= 20 ))
00:06:51.864 16:15:18 event.app_repeat -- common/autotest_common.sh@871 -- # grep -q -w nbd0 /proc/partitions
00:06:51.864 16:15:18 event.app_repeat -- common/autotest_common.sh@872 -- # break
00:06:51.864 16:15:18 event.app_repeat -- common/autotest_common.sh@883 -- # (( i = 1 ))
00:06:51.864 16:15:18 event.app_repeat -- common/autotest_common.sh@883 -- # (( i <= 20 ))
00:06:51.864 16:15:18 event.app_repeat -- common/autotest_common.sh@884 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:06:51.864 1+0 records in
00:06:51.864 1+0 records out
00:06:51.864 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000236949 s, 17.3 MB/s
00:06:51.864 16:15:18 event.app_repeat -- common/autotest_common.sh@885 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:06:52.127 16:15:18 event.app_repeat -- common/autotest_common.sh@885 -- # size=4096
00:06:52.127 16:15:18 event.app_repeat -- common/autotest_common.sh@886 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:06:52.127 16:15:18 event.app_repeat -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']'
00:06:52.127 16:15:18 event.app_repeat -- common/autotest_common.sh@888 -- # return 0
00:06:52.127 16:15:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:06:52.127 16:15:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:06:52.127 16:15:18 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1
00:06:52.127 /dev/nbd1
00:06:52.127 16:15:18 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:06:52.127 16:15:18 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:06:52.127 16:15:18 event.app_repeat -- common/autotest_common.sh@867 -- # local nbd_name=nbd1
00:06:52.127 16:15:18 event.app_repeat -- common/autotest_common.sh@868 -- # local i
00:06:52.127 16:15:18 event.app_repeat -- common/autotest_common.sh@870 -- # (( i = 1 ))
00:06:52.127 16:15:18 event.app_repeat -- common/autotest_common.sh@870 -- # (( i <= 20 ))
00:06:52.127 16:15:18 event.app_repeat -- common/autotest_common.sh@871 -- # grep -q -w nbd1 /proc/partitions
00:06:52.127 16:15:18 event.app_repeat -- common/autotest_common.sh@872 -- # break
00:06:52.127 16:15:18 event.app_repeat -- common/autotest_common.sh@883 -- # (( i = 1 ))
00:06:52.127 16:15:18 event.app_repeat -- common/autotest_common.sh@883 -- # (( i <= 20 ))
00:06:52.127 16:15:18 event.app_repeat -- common/autotest_common.sh@884 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:06:52.127 1+0 records in
00:06:52.127 1+0 records out
00:06:52.127 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000279861 s, 14.6 MB/s
00:06:52.127 16:15:18 event.app_repeat -- common/autotest_common.sh@885 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:06:52.127 16:15:18 event.app_repeat -- common/autotest_common.sh@885 -- # size=4096
00:06:52.127 16:15:18 event.app_repeat -- common/autotest_common.sh@886 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:06:52.127 16:15:18 event.app_repeat -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']'
00:06:52.127 16:15:18 event.app_repeat -- common/autotest_common.sh@888 -- # return 0
00:06:52.127 16:15:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:06:52.127 16:15:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:06:52.127 16:15:18 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:06:52.127 16:15:18 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:52.127 16:15:18 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:06:52.389 16:15:19 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:06:52.389 {
00:06:52.389 "nbd_device": "/dev/nbd0",
00:06:52.389 "bdev_name": "Malloc0"
00:06:52.389 },
00:06:52.389 {
00:06:52.389 "nbd_device": "/dev/nbd1",
00:06:52.389 "bdev_name": "Malloc1"
00:06:52.389 }
00:06:52.389 ]'
00:06:52.389 16:15:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[
00:06:52.389 {
00:06:52.389 "nbd_device": "/dev/nbd0",
00:06:52.389 "bdev_name": "Malloc0"
00:06:52.389 },
00:06:52.389 {
00:06:52.389 "nbd_device": "/dev/nbd1",
00:06:52.389 "bdev_name": "Malloc1"
00:06:52.389 }
00:06:52.389 ]'
00:06:52.389 16:15:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:06:52.389 16:15:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0
00:06:52.389 /dev/nbd1'
00:06:52.389 16:15:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0
00:06:52.389 /dev/nbd1'
00:06:52.389 16:15:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:06:52.389 16:15:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2
00:06:52.389 16:15:19 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2
00:06:52.389 16:15:19 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2
00:06:52.390 16:15:19 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']'
00:06:52.390 16:15:19 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write
00:06:52.390 16:15:19 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:52.390 16:15:19 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:06:52.390 16:15:19 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write
00:06:52.390 16:15:19 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:06:52.390 16:15:19 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:06:52.390 16:15:19 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256
00:06:52.390 256+0 records in
00:06:52.390 256+0 records out
00:06:52.390 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0116847 s, 89.7 MB/s
00:06:52.390 16:15:19 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:06:52.390 16:15:19 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:06:52.390 256+0 records in
00:06:52.390 256+0 records out
00:06:52.390 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0158037 s, 66.3 MB/s
00:06:52.390 16:15:19 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:06:52.390 16:15:19 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
00:06:52.390 256+0 records in
00:06:52.390 256+0 records out
00:06:52.390 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0169534 s, 61.9 MB/s
00:06:52.390 16:15:19 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify
00:06:52.390 16:15:19 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:52.390 16:15:19 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:06:52.390 16:15:19 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify
00:06:52.390 16:15:19 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:06:52.390 16:15:19 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:06:52.390 16:15:19 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:06:52.390 16:15:19 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:06:52.390 16:15:19 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0
00:06:52.390 16:15:19 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:06:52.390 16:15:19 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1
00:06:52.390 16:15:19 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:06:52.390 16:15:19 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1'
00:06:52.390 16:15:19 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:52.390 16:15:19 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:52.390 16:15:19 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list
00:06:52.390 16:15:19 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i
00:06:52.390 16:15:19 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:06:52.390 16:15:19 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:06:52.650 16:15:19 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:06:52.650 16:15:19 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:06:52.650 16:15:19 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:06:52.650 16:15:19 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:06:52.650 16:15:19 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:06:52.650 16:15:19 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:06:52.650 16:15:19 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:06:52.650 16:15:19 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:06:52.650 16:15:19 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:06:52.650 16:15:19 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:06:52.910 16:15:19 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:06:52.910 16:15:19 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:06:52.910 16:15:19 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:06:52.910 16:15:19 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:06:52.910 16:15:19 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:06:52.910 16:15:19 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:06:52.910 16:15:19 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:06:52.910 16:15:19 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:06:52.910 16:15:19 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:06:52.910 16:15:19 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:52.910 16:15:19 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:06:52.910 16:15:19 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:06:52.910 16:15:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]'
00:06:52.910 16:15:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:06:52.910 16:15:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:06:52.910 16:15:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo ''
00:06:52.910 16:15:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:06:52.910 16:15:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # true
00:06:53.171 16:15:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0
00:06:53.171 16:15:19 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0
00:06:53.171 16:15:19 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0
00:06:53.171 16:15:19 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:06:53.171 16:15:19 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0
00:06:53.171 16:15:19 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
00:06:53.171 16:15:19 event.app_repeat -- event/event.sh@35 -- # sleep 3
00:06:53.432 [2024-06-07 16:15:20.059056] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2
00:06:53.432 [2024-06-07 16:15:20.124232] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1
00:06:53.432 [2024-06-07 16:15:20.124233] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0
00:06:53.432 [2024-06-07 16:15:20.155989] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered.
00:06:53.432 [2024-06-07 16:15:20.156023] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered.
00:06:56.793 16:15:22 event.app_repeat -- event/event.sh@38 -- # waitforlisten 2890233 /var/tmp/spdk-nbd.sock
00:06:56.793 16:15:22 event.app_repeat -- common/autotest_common.sh@830 -- # '[' -z 2890233 ']'
00:06:56.793 16:15:22 event.app_repeat -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:06:56.793 16:15:22 event.app_repeat -- common/autotest_common.sh@835 -- # local max_retries=100
00:06:56.793 16:15:22 event.app_repeat -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:06:56.793 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:06:56.793 16:15:22 event.app_repeat -- common/autotest_common.sh@839 -- # xtrace_disable
00:06:56.793 16:15:22 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:06:56.793 16:15:23 event.app_repeat -- common/autotest_common.sh@859 -- # (( i == 0 ))
00:06:56.793 16:15:23 event.app_repeat -- common/autotest_common.sh@863 -- # return 0
00:06:56.793 16:15:23 event.app_repeat -- event/event.sh@39 -- # killprocess 2890233
00:06:56.793 16:15:23 event.app_repeat -- common/autotest_common.sh@949 -- # '[' -z 2890233 ']'
00:06:56.793 16:15:23 event.app_repeat -- common/autotest_common.sh@953 -- # kill -0 2890233
00:06:56.793 16:15:23 event.app_repeat -- common/autotest_common.sh@954 -- # uname
00:06:56.793 16:15:23 event.app_repeat -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']'
00:06:56.793 16:15:23 event.app_repeat -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2890233
00:06:56.793 16:15:23 event.app_repeat -- common/autotest_common.sh@955 -- # process_name=reactor_0
00:06:56.793 16:15:23 event.app_repeat -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']'
00:06:56.793 16:15:23 event.app_repeat -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2890233'
00:06:56.793 killing process with pid 2890233
00:06:56.793 16:15:23 event.app_repeat -- common/autotest_common.sh@968 -- # kill 2890233
00:06:56.793 16:15:23 event.app_repeat -- common/autotest_common.sh@973 -- # wait 2890233
00:06:56.793 spdk_app_start is called in Round 0.
00:06:56.793 Shutdown signal received, stop current app iteration
00:06:56.793 Starting SPDK v24.09-pre git sha1 5a57befde / DPDK 24.03.0 reinitialization...
00:06:56.793 spdk_app_start is called in Round 1.
00:06:56.793 Shutdown signal received, stop current app iteration
00:06:56.793 Starting SPDK v24.09-pre git sha1 5a57befde / DPDK 24.03.0 reinitialization...
00:06:56.793 spdk_app_start is called in Round 2.
00:06:56.793 Shutdown signal received, stop current app iteration
00:06:56.793 Starting SPDK v24.09-pre git sha1 5a57befde / DPDK 24.03.0 reinitialization...
00:06:56.793 spdk_app_start is called in Round 3.
00:06:56.793 Shutdown signal received, stop current app iteration
00:06:56.793 16:15:23 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT
00:06:56.793 16:15:23 event.app_repeat -- event/event.sh@42 -- # return 0
00:06:56.793
00:06:56.793 real 0m15.661s
00:06:56.793 user 0m33.751s
00:06:56.793 sys 0m2.085s
00:06:56.793 16:15:23 event.app_repeat -- common/autotest_common.sh@1125 -- # xtrace_disable
00:06:56.793 16:15:23 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:06:56.793 ************************************
00:06:56.793 END TEST app_repeat
00:06:56.793 ************************************
00:06:56.793 16:15:23 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 ))
00:06:56.793 16:15:23 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh
00:06:56.793 16:15:23 event -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']'
00:06:56.793 16:15:23 event -- common/autotest_common.sh@1106 -- # xtrace_disable
00:06:56.793 16:15:23 event -- common/autotest_common.sh@10 -- # set +x
00:06:56.793 ************************************
00:06:56.793 START TEST cpu_locks
00:06:56.793 ************************************
00:06:56.794 16:15:23 event.cpu_locks -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh
00:06:56.794 * Looking for test storage...
00:06:56.794 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event
00:06:56.794 16:15:23 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock
00:06:56.794 16:15:23 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock
00:06:56.794 16:15:23 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT
00:06:56.794 16:15:23 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks
00:06:56.794 16:15:23 event.cpu_locks -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']'
00:06:56.794 16:15:23 event.cpu_locks -- common/autotest_common.sh@1106 -- # xtrace_disable
00:06:56.794 16:15:23 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:06:56.794 ************************************
00:06:56.794 START TEST default_locks
00:06:56.794 ************************************
00:06:56.794 16:15:23 event.cpu_locks.default_locks -- common/autotest_common.sh@1124 -- # default_locks
00:06:56.794 16:15:23 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=2893968
00:06:56.794 16:15:23 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 2893968
00:06:56.794 16:15:23 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1
00:06:56.794 16:15:23 event.cpu_locks.default_locks -- common/autotest_common.sh@830 -- # '[' -z 2893968 ']'
00:06:56.794 16:15:23 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:56.794 16:15:23 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local max_retries=100
00:06:56.794 16:15:23 event.cpu_locks.default_locks -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:56.794 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:56.794 16:15:23 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # xtrace_disable
00:06:56.794 16:15:23 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x
00:06:56.794 [2024-06-07 16:15:23.514884] Starting SPDK v24.09-pre git sha1 5a57befde / DPDK 24.03.0 initialization...
00:06:56.794 [2024-06-07 16:15:23.514942] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2893968 ]
00:06:56.794 EAL: No free 2048 kB hugepages reported on node 1
00:06:56.794 [2024-06-07 16:15:23.578359] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:57.054 [2024-06-07 16:15:23.650747] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0
00:06:57.625 16:15:24 event.cpu_locks.default_locks -- common/autotest_common.sh@859 -- # (( i == 0 ))
00:06:57.625 16:15:24 event.cpu_locks.default_locks -- common/autotest_common.sh@863 -- # return 0
00:06:57.625 16:15:24 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 2893968
00:06:57.625 16:15:24 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 2893968
00:06:57.625 16:15:24 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:06:57.885 lslocks: write error
00:06:57.885 16:15:24 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 2893968
00:06:57.885 16:15:24 event.cpu_locks.default_locks -- common/autotest_common.sh@949 -- # '[' -z 2893968 ']'
00:06:57.885 16:15:24 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # kill -0 2893968
00:06:57.885 16:15:24 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # uname
00:06:57.885 16:15:24 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']'
00:06:57.885 16:15:24 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2893968
00:06:57.885 16:15:24 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # process_name=reactor_0
00:06:57.885 16:15:24 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']'
00:06:57.885 16:15:24 event.cpu_locks.default_locks -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2893968'
00:06:57.885 killing process with pid 2893968
00:06:57.885 16:15:24 event.cpu_locks.default_locks -- common/autotest_common.sh@968 -- # kill 2893968
00:06:57.885 16:15:24 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # wait 2893968
00:06:58.146 16:15:24 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 2893968
00:06:58.146 16:15:24 event.cpu_locks.default_locks -- common/autotest_common.sh@649 -- # local es=0
00:06:58.146 16:15:24 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # valid_exec_arg waitforlisten 2893968
00:06:58.146 16:15:24 event.cpu_locks.default_locks -- common/autotest_common.sh@637 -- # local arg=waitforlisten
00:06:58.146 16:15:24 event.cpu_locks.default_locks -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in
00:06:58.146 16:15:24 event.cpu_locks.default_locks -- common/autotest_common.sh@641 -- # type -t waitforlisten
00:06:58.146 16:15:24 event.cpu_locks.default_locks -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in
00:06:58.146 16:15:24 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # waitforlisten 2893968
00:06:58.146 16:15:24 event.cpu_locks.default_locks -- common/autotest_common.sh@830 -- # '[' -z 2893968 ']'
00:06:58.146 16:15:24 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:58.146 16:15:24 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local max_retries=100
00:06:58.146 16:15:24 event.cpu_locks.default_locks -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:58.146 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:58.146 16:15:24 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # xtrace_disable
00:06:58.146 16:15:24 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x
00:06:58.146 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 845: kill: (2893968) - No such process
00:06:58.146 ERROR: process (pid: 2893968) is no longer running
00:06:58.147 16:15:24 event.cpu_locks.default_locks -- common/autotest_common.sh@859 -- # (( i == 0 ))
00:06:58.147 16:15:24 event.cpu_locks.default_locks -- common/autotest_common.sh@863 -- # return 1
00:06:58.147 16:15:24 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # es=1
00:06:58.147 16:15:24 event.cpu_locks.default_locks -- common/autotest_common.sh@660 -- # (( es > 128 ))
00:06:58.147 16:15:24 event.cpu_locks.default_locks -- common/autotest_common.sh@671 -- # [[ -n '' ]]
00:06:58.147 16:15:24 event.cpu_locks.default_locks -- common/autotest_common.sh@676 -- # (( !es == 0 ))
00:06:58.147 16:15:24 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks
00:06:58.147 16:15:24 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=()
00:06:58.147 16:15:24 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files
00:06:58.147 16:15:24 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 ))
00:06:58.147
00:06:58.147 real 0m1.474s
00:06:58.147 user 0m1.577s
00:06:58.147 sys 0m0.486s
00:06:58.147 16:15:24 event.cpu_locks.default_locks -- common/autotest_common.sh@1125 -- # xtrace_disable
00:06:58.147 16:15:24 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x
00:06:58.147 ************************************
00:06:58.147 END TEST default_locks
00:06:58.147 ************************************
00:06:58.147 16:15:24 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc
00:06:58.147 16:15:24 event.cpu_locks -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']'
00:06:58.147 16:15:24 event.cpu_locks -- common/autotest_common.sh@1106 -- # xtrace_disable
00:06:58.147 16:15:24 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:06:58.407 ************************************
00:06:58.407 START TEST default_locks_via_rpc
00:06:58.407 ************************************
00:06:58.407 16:15:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1124 -- # default_locks_via_rpc
00:06:58.407 16:15:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1
00:06:58.407 16:15:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=2894319
00:06:58.407 16:15:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 2894319
00:06:58.407 16:15:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@830 -- # '[' -z 2894319 ']'
00:06:58.407 16:15:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:58.407 16:15:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # local max_retries=100
00:06:58.407 16:15:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:58.407 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:58.407 16:15:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # xtrace_disable
00:06:58.407 16:15:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:06:58.407 [2024-06-07 16:15:25.040880] Starting SPDK v24.09-pre git sha1 5a57befde / DPDK 24.03.0 initialization...
00:06:58.407 [2024-06-07 16:15:25.040925] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2894319 ]
00:06:58.407 EAL: No free 2048 kB hugepages reported on node 1
00:06:58.407 [2024-06-07 16:15:25.098117] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:58.407 [2024-06-07 16:15:25.163334] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0
00:06:58.980 16:15:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@859 -- # (( i == 0 ))
00:06:58.980 16:15:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@863 -- # return 0
00:06:58.980 16:15:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks
00:06:58.980 16:15:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@560 -- # xtrace_disable
00:06:58.980 16:15:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:06:58.980 16:15:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:06:58.980 16:15:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks
00:06:58.981 16:15:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=()
00:06:58.981 16:15:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files
00:06:58.981 16:15:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 ))
00:06:58.981 16:15:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks
00:06:58.981 16:15:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@560 -- # xtrace_disable
00:06:58.981 16:15:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:06:58.981 16:15:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:06:58.981 16:15:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 2894319
00:06:58.981 16:15:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 2894319
00:06:58.981 16:15:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:06:59.551 16:15:26 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 2894319
00:06:59.551 16:15:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@949 -- # '[' -z 2894319 ']'
00:06:59.551 16:15:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # kill -0 2894319
00:06:59.551 16:15:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # uname
00:06:59.551 16:15:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']'
00:06:59.551 16:15:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2894319
00:06:59.551 16:15:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # process_name=reactor_0
00:06:59.551 16:15:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']'
00:06:59.551 16:15:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2894319'
00:06:59.551 killing process with pid 2894319
00:06:59.551 16:15:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@968 -- # kill 2894319
00:06:59.551 16:15:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # wait 2894319
00:06:59.911
00:06:59.911 real 0m1.448s
00:06:59.911 user 0m1.564s
00:06:59.911 sys 0m0.460s
00:06:59.911 16:15:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1125 -- # xtrace_disable
00:06:59.911 16:15:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:06:59.911 ************************************
00:06:59.911 END TEST default_locks_via_rpc
00:06:59.911 ************************************
00:06:59.911 16:15:26 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask
00:06:59.911 16:15:26 event.cpu_locks -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']'
00:06:59.911 16:15:26 event.cpu_locks -- common/autotest_common.sh@1106 -- # xtrace_disable
00:06:59.911 16:15:26 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:06:59.911 ************************************
00:06:59.911 START TEST non_locking_app_on_locked_coremask
00:06:59.911 ************************************
00:06:59.911 16:15:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # non_locking_app_on_locked_coremask
00:06:59.911 16:15:26 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=2894681
00:06:59.911 16:15:26 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 2894681 /var/tmp/spdk.sock
00:06:59.911 16:15:26 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1
00:06:59.911 16:15:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@830 -- # '[' -z 2894681 ']'
00:06:59.911 16:15:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:59.911 16:15:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local max_retries=100
00:06:59.911 16:15:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:59.911 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:59.911 16:15:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # xtrace_disable
00:06:59.911 16:15:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:59.911 [2024-06-07 16:15:26.563863] Starting SPDK v24.09-pre git sha1 5a57befde / DPDK 24.03.0 initialization...
00:06:59.911 [2024-06-07 16:15:26.563909] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2894681 ]
00:06:59.911 EAL: No free 2048 kB hugepages reported on node 1
00:06:59.911 [2024-06-07 16:15:26.620798] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:59.911 [2024-06-07 16:15:26.684812] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0
00:07:00.482 16:15:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@859 -- # (( i == 0 ))
00:07:00.482 16:15:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@863 -- # return 0
00:07:00.482 16:15:27 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock
00:07:00.482 16:15:27 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=2894957
00:07:00.482 16:15:27
event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 2894957 /var/tmp/spdk2.sock 00:07:00.482 16:15:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@830 -- # '[' -z 2894957 ']' 00:07:00.482 16:15:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:00.482 16:15:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local max_retries=100 00:07:00.482 16:15:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:00.482 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:00.482 16:15:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # xtrace_disable 00:07:00.482 16:15:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:00.742 [2024-06-07 16:15:27.353464] Starting SPDK v24.09-pre git sha1 5a57befde / DPDK 24.03.0 initialization... 00:07:00.742 [2024-06-07 16:15:27.353516] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2894957 ] 00:07:00.742 EAL: No free 2048 kB hugepages reported on node 1 00:07:00.742 [2024-06-07 16:15:27.444733] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:00.742 [2024-06-07 16:15:27.444762] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:00.742 [2024-06-07 16:15:27.574043] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.313 16:15:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:07:01.313 16:15:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@863 -- # return 0 00:07:01.313 16:15:28 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 2894681 00:07:01.313 16:15:28 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2894681 00:07:01.313 16:15:28 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:01.884 lslocks: write error 00:07:01.884 16:15:28 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 2894681 00:07:01.884 16:15:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@949 -- # '[' -z 2894681 ']' 00:07:01.884 16:15:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # kill -0 2894681 00:07:01.884 16:15:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # uname 00:07:01.884 16:15:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:07:01.884 16:15:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2894681 00:07:01.884 16:15:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:07:01.884 16:15:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:07:01.884 16:15:28 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@967 -- # echo 'killing process with pid 2894681' 00:07:01.884 killing process with pid 2894681 00:07:01.884 16:15:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # kill 2894681 00:07:01.884 16:15:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # wait 2894681 00:07:02.455 16:15:29 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 2894957 00:07:02.455 16:15:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@949 -- # '[' -z 2894957 ']' 00:07:02.455 16:15:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # kill -0 2894957 00:07:02.455 16:15:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # uname 00:07:02.455 16:15:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:07:02.455 16:15:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2894957 00:07:02.455 16:15:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:07:02.455 16:15:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:07:02.455 16:15:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2894957' 00:07:02.455 killing process with pid 2894957 00:07:02.455 16:15:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # kill 2894957 00:07:02.455 16:15:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # wait 2894957 00:07:02.716 00:07:02.716 real 0m2.815s 00:07:02.716 user 0m3.078s 00:07:02.716 sys 0m0.815s 00:07:02.716 16:15:29 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:02.716 16:15:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:02.716 ************************************ 00:07:02.716 END TEST non_locking_app_on_locked_coremask 00:07:02.716 ************************************ 00:07:02.716 16:15:29 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:07:02.716 16:15:29 event.cpu_locks -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:07:02.716 16:15:29 event.cpu_locks -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:02.716 16:15:29 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:02.716 ************************************ 00:07:02.716 START TEST locking_app_on_unlocked_coremask 00:07:02.716 ************************************ 00:07:02.716 16:15:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1124 -- # locking_app_on_unlocked_coremask 00:07:02.716 16:15:29 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=2895389 00:07:02.716 16:15:29 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 2895389 /var/tmp/spdk.sock 00:07:02.716 16:15:29 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:07:02.716 16:15:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@830 -- # '[' -z 2895389 ']' 00:07:02.716 16:15:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:02.716 16:15:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local max_retries=100 00:07:02.716 16:15:29 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:02.716 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:02.716 16:15:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # xtrace_disable 00:07:02.716 16:15:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:02.716 [2024-06-07 16:15:29.438431] Starting SPDK v24.09-pre git sha1 5a57befde / DPDK 24.03.0 initialization... 00:07:02.716 [2024-06-07 16:15:29.438481] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2895389 ] 00:07:02.716 EAL: No free 2048 kB hugepages reported on node 1 00:07:02.716 [2024-06-07 16:15:29.498035] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:02.716 [2024-06-07 16:15:29.498065] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:02.716 [2024-06-07 16:15:29.567210] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.657 16:15:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:07:03.657 16:15:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@863 -- # return 0 00:07:03.657 16:15:30 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=2895420 00:07:03.657 16:15:30 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 2895420 /var/tmp/spdk2.sock 00:07:03.657 16:15:30 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:03.657 16:15:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@830 -- # '[' -z 2895420 ']' 00:07:03.657 16:15:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:03.657 16:15:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local max_retries=100 00:07:03.657 16:15:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:03.657 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:03.657 16:15:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # xtrace_disable 00:07:03.657 16:15:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:03.657 [2024-06-07 16:15:30.254512] Starting SPDK v24.09-pre git sha1 5a57befde / DPDK 24.03.0 initialization... 
00:07:03.657 [2024-06-07 16:15:30.254565] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2895420 ] 00:07:03.657 EAL: No free 2048 kB hugepages reported on node 1 00:07:03.657 [2024-06-07 16:15:30.342869] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:03.657 [2024-06-07 16:15:30.476362] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.226 16:15:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:07:04.226 16:15:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@863 -- # return 0 00:07:04.226 16:15:30 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 2895420 00:07:04.226 16:15:30 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:04.226 16:15:30 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2895420 00:07:04.797 lslocks: write error 00:07:04.797 16:15:31 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 2895389 00:07:04.797 16:15:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@949 -- # '[' -z 2895389 ']' 00:07:04.797 16:15:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # kill -0 2895389 00:07:04.797 16:15:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # uname 00:07:04.797 16:15:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:07:04.797 16:15:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2895389 00:07:04.797 16:15:31 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:07:04.797 16:15:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:07:04.797 16:15:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2895389' 00:07:04.797 killing process with pid 2895389 00:07:04.797 16:15:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # kill 2895389 00:07:04.797 16:15:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # wait 2895389 00:07:05.369 16:15:32 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 2895420 00:07:05.369 16:15:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@949 -- # '[' -z 2895420 ']' 00:07:05.369 16:15:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # kill -0 2895420 00:07:05.369 16:15:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # uname 00:07:05.369 16:15:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:07:05.369 16:15:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2895420 00:07:05.369 16:15:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:07:05.369 16:15:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:07:05.369 16:15:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2895420' 00:07:05.369 killing process with pid 2895420 00:07:05.369 16:15:32 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@968 -- # kill 2895420 00:07:05.369 16:15:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # wait 2895420 00:07:05.630 00:07:05.630 real 0m2.933s 00:07:05.630 user 0m3.173s 00:07:05.630 sys 0m0.904s 00:07:05.630 16:15:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:05.630 16:15:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:05.630 ************************************ 00:07:05.630 END TEST locking_app_on_unlocked_coremask 00:07:05.630 ************************************ 00:07:05.630 16:15:32 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:07:05.630 16:15:32 event.cpu_locks -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:07:05.630 16:15:32 event.cpu_locks -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:05.630 16:15:32 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:05.630 ************************************ 00:07:05.630 START TEST locking_app_on_locked_coremask 00:07:05.630 ************************************ 00:07:05.631 16:15:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # locking_app_on_locked_coremask 00:07:05.631 16:15:32 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=2896046 00:07:05.631 16:15:32 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 2896046 /var/tmp/spdk.sock 00:07:05.631 16:15:32 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:05.631 16:15:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@830 -- # '[' -z 2896046 ']' 00:07:05.631 16:15:32 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:05.631 16:15:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local max_retries=100 00:07:05.631 16:15:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:05.631 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:05.631 16:15:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # xtrace_disable 00:07:05.631 16:15:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:05.631 [2024-06-07 16:15:32.443776] Starting SPDK v24.09-pre git sha1 5a57befde / DPDK 24.03.0 initialization... 00:07:05.631 [2024-06-07 16:15:32.443829] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2896046 ] 00:07:05.631 EAL: No free 2048 kB hugepages reported on node 1 00:07:05.891 [2024-06-07 16:15:32.504220] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:05.891 [2024-06-07 16:15:32.574610] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.463 16:15:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:07:06.463 16:15:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@863 -- # return 0 00:07:06.463 16:15:33 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:06.463 16:15:33 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=2896107 00:07:06.463 16:15:33 
event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 2896107 /var/tmp/spdk2.sock 00:07:06.463 16:15:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@649 -- # local es=0 00:07:06.463 16:15:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # valid_exec_arg waitforlisten 2896107 /var/tmp/spdk2.sock 00:07:06.463 16:15:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@637 -- # local arg=waitforlisten 00:07:06.463 16:15:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:07:06.463 16:15:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@641 -- # type -t waitforlisten 00:07:06.463 16:15:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:07:06.463 16:15:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # waitforlisten 2896107 /var/tmp/spdk2.sock 00:07:06.463 16:15:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@830 -- # '[' -z 2896107 ']' 00:07:06.463 16:15:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:06.463 16:15:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local max_retries=100 00:07:06.464 16:15:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:06.464 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:07:06.464 16:15:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # xtrace_disable 00:07:06.464 16:15:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:06.464 [2024-06-07 16:15:33.261833] Starting SPDK v24.09-pre git sha1 5a57befde / DPDK 24.03.0 initialization... 00:07:06.464 [2024-06-07 16:15:33.261887] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2896107 ] 00:07:06.464 EAL: No free 2048 kB hugepages reported on node 1 00:07:06.724 [2024-06-07 16:15:33.352041] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 2896046 has claimed it. 00:07:06.724 [2024-06-07 16:15:33.352082] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:07.296 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 845: kill: (2896107) - No such process 00:07:07.296 ERROR: process (pid: 2896107) is no longer running 00:07:07.296 16:15:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:07:07.296 16:15:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@863 -- # return 1 00:07:07.296 16:15:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # es=1 00:07:07.296 16:15:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:07:07.296 16:15:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:07:07.296 16:15:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:07:07.296 16:15:33 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # 
locks_exist 2896046 00:07:07.296 16:15:33 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2896046 00:07:07.296 16:15:33 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:07.556 lslocks: write error 00:07:07.556 16:15:34 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 2896046 00:07:07.556 16:15:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@949 -- # '[' -z 2896046 ']' 00:07:07.556 16:15:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # kill -0 2896046 00:07:07.556 16:15:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # uname 00:07:07.556 16:15:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:07:07.556 16:15:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2896046 00:07:07.556 16:15:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:07:07.556 16:15:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:07:07.556 16:15:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2896046' 00:07:07.556 killing process with pid 2896046 00:07:07.556 16:15:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # kill 2896046 00:07:07.556 16:15:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # wait 2896046 00:07:07.816 00:07:07.816 real 0m2.205s 00:07:07.816 user 0m2.461s 00:07:07.816 sys 0m0.589s 00:07:07.816 16:15:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:07.816 16:15:34 
event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:07.816 ************************************ 00:07:07.816 END TEST locking_app_on_locked_coremask 00:07:07.816 ************************************ 00:07:07.816 16:15:34 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:07:07.816 16:15:34 event.cpu_locks -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:07:07.816 16:15:34 event.cpu_locks -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:07.816 16:15:34 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:07.816 ************************************ 00:07:07.816 START TEST locking_overlapped_coremask 00:07:07.816 ************************************ 00:07:07.816 16:15:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1124 -- # locking_overlapped_coremask 00:07:07.816 16:15:34 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=2896469 00:07:08.077 16:15:34 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 2896469 /var/tmp/spdk.sock 00:07:08.077 16:15:34 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:07:08.077 16:15:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@830 -- # '[' -z 2896469 ']' 00:07:08.077 16:15:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:08.077 16:15:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local max_retries=100 00:07:08.077 16:15:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:07:08.077 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:08.077 16:15:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # xtrace_disable 00:07:08.077 16:15:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:08.077 [2024-06-07 16:15:34.721652] Starting SPDK v24.09-pre git sha1 5a57befde / DPDK 24.03.0 initialization... 00:07:08.077 [2024-06-07 16:15:34.721698] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2896469 ] 00:07:08.077 EAL: No free 2048 kB hugepages reported on node 1 00:07:08.077 [2024-06-07 16:15:34.782782] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:08.077 [2024-06-07 16:15:34.849068] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:07:08.077 [2024-06-07 16:15:34.849182] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 2 00:07:08.077 [2024-06-07 16:15:34.849185] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.338 16:15:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:07:08.338 16:15:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@863 -- # return 0 00:07:08.338 16:15:35 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=2896486 00:07:08.338 16:15:35 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 2896486 /var/tmp/spdk2.sock 00:07:08.338 16:15:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@649 -- # local es=0 00:07:08.338 16:15:35 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r 
/var/tmp/spdk2.sock 00:07:08.338 16:15:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # valid_exec_arg waitforlisten 2896486 /var/tmp/spdk2.sock 00:07:08.338 16:15:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@637 -- # local arg=waitforlisten 00:07:08.338 16:15:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:07:08.338 16:15:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@641 -- # type -t waitforlisten 00:07:08.339 16:15:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:07:08.339 16:15:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # waitforlisten 2896486 /var/tmp/spdk2.sock 00:07:08.339 16:15:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@830 -- # '[' -z 2896486 ']' 00:07:08.339 16:15:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:08.339 16:15:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local max_retries=100 00:07:08.339 16:15:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:08.339 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:08.339 16:15:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # xtrace_disable 00:07:08.339 16:15:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:08.339 [2024-06-07 16:15:35.072519] Starting SPDK v24.09-pre git sha1 5a57befde / DPDK 24.03.0 initialization... 
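The first spdk_tgt above was launched with `-m 0x7` and the second with `-m 0x1c`; the masks overlap on core 2, which is why the second instance cannot claim its cores. A minimal sketch of the overlap check (the variable names here are illustrative, not from cpu_locks.sh):

```shell
mask_a=0x7    # cores 0,1,2 (first spdk_tgt instance)
mask_b=0x1c   # cores 2,3,4 (second spdk_tgt instance)
overlap=$(( mask_a & mask_b ))

# Report every core index selected by both masks.
for (( i = 0; i < 8; i++ )); do
    (( overlap >> i & 1 )) && echo "core $i is claimed by both masks"
done
printf 'overlap mask: 0x%x\n' "$overlap"   # prints: overlap mask: 0x4
```

With these two masks the loop reports only core 2, matching the "Cannot create lock on core 2" error the test expects next.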
00:07:08.339 [2024-06-07 16:15:35.072573] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2896486 ] 00:07:08.339 EAL: No free 2048 kB hugepages reported on node 1 00:07:08.339 [2024-06-07 16:15:35.144813] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2896469 has claimed it. 00:07:08.339 [2024-06-07 16:15:35.144843] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:08.911 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 845: kill: (2896486) - No such process 00:07:08.912 ERROR: process (pid: 2896486) is no longer running 00:07:08.912 16:15:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:07:08.912 16:15:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@863 -- # return 1 00:07:08.912 16:15:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # es=1 00:07:08.912 16:15:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:07:08.912 16:15:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:07:08.912 16:15:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:07:08.912 16:15:35 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:07:08.912 16:15:35 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:08.912 16:15:35 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:08.912 16:15:35 event.cpu_locks.locking_overlapped_coremask -- 
event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:08.912 16:15:35 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 2896469 00:07:08.912 16:15:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@949 -- # '[' -z 2896469 ']' 00:07:08.912 16:15:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # kill -0 2896469 00:07:08.912 16:15:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # uname 00:07:08.912 16:15:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:07:08.912 16:15:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2896469 00:07:08.912 16:15:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:07:08.912 16:15:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:07:08.912 16:15:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2896469' 00:07:08.912 killing process with pid 2896469 00:07:08.912 16:15:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@968 -- # kill 2896469 00:07:08.912 16:15:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # wait 2896469 00:07:09.174 00:07:09.174 real 0m1.272s 00:07:09.174 user 0m3.487s 00:07:09.174 sys 0m0.316s 00:07:09.174 16:15:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:09.174 16:15:35 event.cpu_locks.locking_overlapped_coremask -- 
common/autotest_common.sh@10 -- # set +x 00:07:09.174 ************************************ 00:07:09.174 END TEST locking_overlapped_coremask 00:07:09.174 ************************************ 00:07:09.174 16:15:35 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:07:09.174 16:15:35 event.cpu_locks -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:07:09.174 16:15:35 event.cpu_locks -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:09.174 16:15:35 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:09.174 ************************************ 00:07:09.174 START TEST locking_overlapped_coremask_via_rpc 00:07:09.174 ************************************ 00:07:09.174 16:15:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1124 -- # locking_overlapped_coremask_via_rpc 00:07:09.174 16:15:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=2896844 00:07:09.174 16:15:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 2896844 /var/tmp/spdk.sock 00:07:09.174 16:15:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:07:09.174 16:15:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@830 -- # '[' -z 2896844 ']' 00:07:09.174 16:15:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:09.174 16:15:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local max_retries=100 00:07:09.174 16:15:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:07:09.174 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:09.174 16:15:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # xtrace_disable 00:07:09.174 16:15:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:09.435 [2024-06-07 16:15:36.072464] Starting SPDK v24.09-pre git sha1 5a57befde / DPDK 24.03.0 initialization... 00:07:09.435 [2024-06-07 16:15:36.072519] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2896844 ] 00:07:09.435 EAL: No free 2048 kB hugepages reported on node 1 00:07:09.435 [2024-06-07 16:15:36.138527] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:07:09.435 [2024-06-07 16:15:36.138562] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:09.435 [2024-06-07 16:15:36.211087] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:07:09.435 [2024-06-07 16:15:36.211172] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 2 00:07:09.435 [2024-06-07 16:15:36.211176] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.007 16:15:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:07:10.007 16:15:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@863 -- # return 0 00:07:10.007 16:15:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=2896852 00:07:10.007 16:15:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 2896852 /var/tmp/spdk2.sock 00:07:10.007 16:15:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:07:10.007 16:15:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@830 -- # '[' -z 2896852 ']' 00:07:10.007 16:15:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:10.007 16:15:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local max_retries=100 00:07:10.007 16:15:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:10.007 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:10.007 16:15:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # xtrace_disable 00:07:10.007 16:15:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:10.269 [2024-06-07 16:15:36.899385] Starting SPDK v24.09-pre git sha1 5a57befde / DPDK 24.03.0 initialization... 00:07:10.269 [2024-06-07 16:15:36.899445] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2896852 ] 00:07:10.269 EAL: No free 2048 kB hugepages reported on node 1 00:07:10.269 [2024-06-07 16:15:36.972150] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:10.269 [2024-06-07 16:15:36.972173] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:10.269 [2024-06-07 16:15:37.081716] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 3 00:07:10.269 [2024-06-07 16:15:37.081834] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 2 00:07:10.269 [2024-06-07 16:15:37.081836] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 4 00:07:10.841 16:15:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:07:10.841 16:15:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@863 -- # return 0 00:07:10.841 16:15:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:07:10.841 16:15:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:10.841 16:15:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:10.841 16:15:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:10.841 16:15:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:10.841 16:15:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@649 -- # local es=0 00:07:10.841 16:15:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:10.841 16:15:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:07:10.841 16:15:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:07:10.841 16:15:37 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:07:10.841 16:15:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:07:10.841 16:15:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:10.841 16:15:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:10.841 16:15:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:10.841 [2024-06-07 16:15:37.672466] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2896844 has claimed it. 00:07:10.841 request: 00:07:10.841 { 00:07:10.841 "method": "framework_enable_cpumask_locks", 00:07:10.841 "req_id": 1 00:07:10.841 } 00:07:10.841 Got JSON-RPC error response 00:07:10.841 response: 00:07:10.841 { 00:07:10.841 "code": -32603, 00:07:10.841 "message": "Failed to claim CPU core: 2" 00:07:10.841 } 00:07:10.841 16:15:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:07:10.841 16:15:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # es=1 00:07:10.841 16:15:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:07:10.841 16:15:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:07:10.841 16:15:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:07:10.841 16:15:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 2896844 /var/tmp/spdk.sock 00:07:10.841 16:15:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@830 
-- # '[' -z 2896844 ']' 00:07:10.841 16:15:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:10.841 16:15:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local max_retries=100 00:07:10.841 16:15:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:10.841 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:10.841 16:15:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # xtrace_disable 00:07:10.841 16:15:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:11.102 16:15:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:07:11.102 16:15:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@863 -- # return 0 00:07:11.102 16:15:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 2896852 /var/tmp/spdk2.sock 00:07:11.102 16:15:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@830 -- # '[' -z 2896852 ']' 00:07:11.102 16:15:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:11.102 16:15:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local max_retries=100 00:07:11.102 16:15:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:11.102 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:07:11.102 16:15:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # xtrace_disable 00:07:11.102 16:15:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:11.363 16:15:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:07:11.363 16:15:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@863 -- # return 0 00:07:11.363 16:15:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:07:11.363 16:15:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:11.363 16:15:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:11.363 16:15:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:11.363 00:07:11.363 real 0m1.998s 00:07:11.363 user 0m0.765s 00:07:11.363 sys 0m0.158s 00:07:11.363 16:15:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:11.363 16:15:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:11.363 ************************************ 00:07:11.363 END TEST locking_overlapped_coremask_via_rpc 00:07:11.363 ************************************ 00:07:11.363 16:15:38 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:07:11.363 16:15:38 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 2896844 ]] 00:07:11.363 16:15:38 event.cpu_locks -- event/cpu_locks.sh@15 -- # 
killprocess 2896844 00:07:11.363 16:15:38 event.cpu_locks -- common/autotest_common.sh@949 -- # '[' -z 2896844 ']' 00:07:11.363 16:15:38 event.cpu_locks -- common/autotest_common.sh@953 -- # kill -0 2896844 00:07:11.363 16:15:38 event.cpu_locks -- common/autotest_common.sh@954 -- # uname 00:07:11.363 16:15:38 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:07:11.363 16:15:38 event.cpu_locks -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2896844 00:07:11.363 16:15:38 event.cpu_locks -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:07:11.363 16:15:38 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:07:11.363 16:15:38 event.cpu_locks -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2896844' 00:07:11.363 killing process with pid 2896844 00:07:11.363 16:15:38 event.cpu_locks -- common/autotest_common.sh@968 -- # kill 2896844 00:07:11.363 16:15:38 event.cpu_locks -- common/autotest_common.sh@973 -- # wait 2896844 00:07:11.624 16:15:38 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 2896852 ]] 00:07:11.624 16:15:38 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 2896852 00:07:11.624 16:15:38 event.cpu_locks -- common/autotest_common.sh@949 -- # '[' -z 2896852 ']' 00:07:11.624 16:15:38 event.cpu_locks -- common/autotest_common.sh@953 -- # kill -0 2896852 00:07:11.624 16:15:38 event.cpu_locks -- common/autotest_common.sh@954 -- # uname 00:07:11.624 16:15:38 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:07:11.624 16:15:38 event.cpu_locks -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2896852 00:07:11.624 16:15:38 event.cpu_locks -- common/autotest_common.sh@955 -- # process_name=reactor_2 00:07:11.624 16:15:38 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' reactor_2 = sudo ']' 00:07:11.624 16:15:38 event.cpu_locks -- common/autotest_common.sh@967 -- # echo 'killing process with pid 
2896852' 00:07:11.624 killing process with pid 2896852 00:07:11.624 16:15:38 event.cpu_locks -- common/autotest_common.sh@968 -- # kill 2896852 00:07:11.624 16:15:38 event.cpu_locks -- common/autotest_common.sh@973 -- # wait 2896852 00:07:11.886 16:15:38 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:11.886 16:15:38 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:07:11.886 16:15:38 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 2896844 ]] 00:07:11.886 16:15:38 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 2896844 00:07:11.886 16:15:38 event.cpu_locks -- common/autotest_common.sh@949 -- # '[' -z 2896844 ']' 00:07:11.886 16:15:38 event.cpu_locks -- common/autotest_common.sh@953 -- # kill -0 2896844 00:07:11.886 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 953: kill: (2896844) - No such process 00:07:11.886 16:15:38 event.cpu_locks -- common/autotest_common.sh@976 -- # echo 'Process with pid 2896844 is not found' 00:07:11.886 Process with pid 2896844 is not found 00:07:11.886 16:15:38 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 2896852 ]] 00:07:11.886 16:15:38 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 2896852 00:07:11.886 16:15:38 event.cpu_locks -- common/autotest_common.sh@949 -- # '[' -z 2896852 ']' 00:07:11.886 16:15:38 event.cpu_locks -- common/autotest_common.sh@953 -- # kill -0 2896852 00:07:11.886 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 953: kill: (2896852) - No such process 00:07:11.886 16:15:38 event.cpu_locks -- common/autotest_common.sh@976 -- # echo 'Process with pid 2896852 is not found' 00:07:11.886 Process with pid 2896852 is not found 00:07:11.886 16:15:38 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:11.886 00:07:11.886 real 0m15.251s 00:07:11.886 user 0m25.598s 00:07:11.886 sys 0m4.575s 00:07:11.886 16:15:38 event.cpu_locks -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:11.886 
16:15:38 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:11.886 ************************************ 00:07:11.886 END TEST cpu_locks 00:07:11.886 ************************************ 00:07:11.886 00:07:11.886 real 0m41.250s 00:07:11.886 user 1m21.291s 00:07:11.886 sys 0m7.606s 00:07:11.886 16:15:38 event -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:11.886 16:15:38 event -- common/autotest_common.sh@10 -- # set +x 00:07:11.886 ************************************ 00:07:11.886 END TEST event 00:07:11.886 ************************************ 00:07:11.886 16:15:38 -- spdk/autotest.sh@182 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:07:11.886 16:15:38 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:07:11.886 16:15:38 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:11.886 16:15:38 -- common/autotest_common.sh@10 -- # set +x 00:07:11.886 ************************************ 00:07:11.886 START TEST thread 00:07:11.886 ************************************ 00:07:11.886 16:15:38 thread -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:07:12.148 * Looking for test storage... 
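The cpu_locks suite that just finished repeatedly calls `check_remaining_locks`, which globs the per-core lock files and compares them against a brace-expanded expected list. A standalone sketch of that pattern, using a scratch directory in place of /var/tmp so it is safe to run anywhere:

```shell
# Emulate check_remaining_locks for a 3-core (-m 0x7) application.
tmp=$(mktemp -d)
touch "$tmp"/spdk_cpu_lock_{000..002}             # files spdk_tgt would create

locks=("$tmp"/spdk_cpu_lock_*)                    # lock files that exist
locks_expected=("$tmp"/spdk_cpu_lock_{000..002})  # lock files expected

[[ ${locks[*]} == "${locks_expected[*]}" ]] && echo "locks match"
rm -rf "$tmp"
```

The comparison works because both the glob and the brace expansion produce the same sorted zero-padded names when exactly cores 0-2 hold locks; any stale or missing lock file makes the two lists differ.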
00:07:12.148 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:07:12.148 16:15:38 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:12.148 16:15:38 thread -- common/autotest_common.sh@1100 -- # '[' 8 -le 1 ']' 00:07:12.148 16:15:38 thread -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:12.148 16:15:38 thread -- common/autotest_common.sh@10 -- # set +x 00:07:12.148 ************************************ 00:07:12.148 START TEST thread_poller_perf 00:07:12.148 ************************************ 00:07:12.148 16:15:38 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:12.148 [2024-06-07 16:15:38.847216] Starting SPDK v24.09-pre git sha1 5a57befde / DPDK 24.03.0 initialization... 00:07:12.148 [2024-06-07 16:15:38.847327] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2897388 ] 00:07:12.148 EAL: No free 2048 kB hugepages reported on node 1 00:07:12.148 [2024-06-07 16:15:38.913752] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:12.148 [2024-06-07 16:15:38.987754] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.148 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:07:13.535 ====================================== 00:07:13.535 busy:2412509078 (cyc) 00:07:13.535 total_run_count: 288000 00:07:13.535 tsc_hz: 2400000000 (cyc) 00:07:13.535 ====================================== 00:07:13.535 poller_cost: 8376 (cyc), 3490 (nsec) 00:07:13.535 00:07:13.535 real 0m1.226s 00:07:13.535 user 0m1.149s 00:07:13.535 sys 0m0.072s 00:07:13.535 16:15:40 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:13.535 16:15:40 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:13.535 ************************************ 00:07:13.535 END TEST thread_poller_perf 00:07:13.535 ************************************ 00:07:13.535 16:15:40 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:13.535 16:15:40 thread -- common/autotest_common.sh@1100 -- # '[' 8 -le 1 ']' 00:07:13.535 16:15:40 thread -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:13.535 16:15:40 thread -- common/autotest_common.sh@10 -- # set +x 00:07:13.535 ************************************ 00:07:13.535 START TEST thread_poller_perf 00:07:13.535 ************************************ 00:07:13.535 16:15:40 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:13.535 [2024-06-07 16:15:40.133043] Starting SPDK v24.09-pre git sha1 5a57befde / DPDK 24.03.0 initialization... 
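The poller_cost figures in the summary above can be reproduced from the other reported numbers. This assumes (inferred from the printed values, not from the poller_perf source) that poller_cost is busy cycles divided by total_run_count, with the nsec figure converted via tsc_hz:

```shell
busy=2412509078       # busy: ... (cyc) from the first run above
runs=288000           # total_run_count
tsc_hz=2400000000     # tsc_hz: 2400000000 (cyc)

cyc=$(( busy / runs ))
nsec=$(( cyc * 1000000000 / tsc_hz ))
echo "poller_cost: ${cyc} (cyc), ${nsec} (nsec)"
# prints: poller_cost: 8376 (cyc), 3490 (nsec)
```

The same arithmetic reproduces the second run's figures (2402077150 cycles over 3812000 polls gives 630 cyc, 262 nsec), which supports the inferred formula.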
00:07:13.535 [2024-06-07 16:15:40.133141] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2897644 ] 00:07:13.535 EAL: No free 2048 kB hugepages reported on node 1 00:07:13.535 [2024-06-07 16:15:40.196686] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:13.535 [2024-06-07 16:15:40.261168] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.535 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:07:14.476 ====================================== 00:07:14.476 busy:2402077150 (cyc) 00:07:14.476 total_run_count: 3812000 00:07:14.476 tsc_hz: 2400000000 (cyc) 00:07:14.476 ====================================== 00:07:14.476 poller_cost: 630 (cyc), 262 (nsec) 00:07:14.476 00:07:14.476 real 0m1.205s 00:07:14.476 user 0m1.131s 00:07:14.476 sys 0m0.070s 00:07:14.476 16:15:41 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:14.476 16:15:41 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:14.476 ************************************ 00:07:14.476 END TEST thread_poller_perf 00:07:14.476 ************************************ 00:07:14.737 16:15:41 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:07:14.737 00:07:14.737 real 0m2.665s 00:07:14.737 user 0m2.377s 00:07:14.737 sys 0m0.296s 00:07:14.737 16:15:41 thread -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:14.737 16:15:41 thread -- common/autotest_common.sh@10 -- # set +x 00:07:14.737 ************************************ 00:07:14.737 END TEST thread 00:07:14.737 ************************************ 00:07:14.737 16:15:41 -- spdk/autotest.sh@183 -- # run_test accel /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:07:14.737 16:15:41 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:07:14.737 
16:15:41 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:14.737 16:15:41 -- common/autotest_common.sh@10 -- # set +x 00:07:14.737 ************************************ 00:07:14.737 START TEST accel 00:07:14.737 ************************************ 00:07:14.737 16:15:41 accel -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:07:14.737 * Looking for test storage... 00:07:14.737 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:07:14.737 16:15:41 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:07:14.737 16:15:41 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:07:14.737 16:15:41 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:14.737 16:15:41 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=2898039 00:07:14.737 16:15:41 accel -- accel/accel.sh@63 -- # waitforlisten 2898039 00:07:14.737 16:15:41 accel -- common/autotest_common.sh@830 -- # '[' -z 2898039 ']' 00:07:14.737 16:15:41 accel -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:14.737 16:15:41 accel -- common/autotest_common.sh@835 -- # local max_retries=100 00:07:14.737 16:15:41 accel -- accel/accel.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:07:14.737 16:15:41 accel -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:14.737 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:14.737 16:15:41 accel -- accel/accel.sh@61 -- # build_accel_config 00:07:14.737 16:15:41 accel -- common/autotest_common.sh@839 -- # xtrace_disable 00:07:14.737 16:15:41 accel -- common/autotest_common.sh@10 -- # set +x 00:07:14.737 16:15:41 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:14.737 16:15:41 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:14.737 16:15:41 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:14.737 16:15:41 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:14.737 16:15:41 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:14.737 16:15:41 accel -- accel/accel.sh@40 -- # local IFS=, 00:07:14.737 16:15:41 accel -- accel/accel.sh@41 -- # jq -r . 00:07:14.737 [2024-06-07 16:15:41.585497] Starting SPDK v24.09-pre git sha1 5a57befde / DPDK 24.03.0 initialization... 00:07:14.737 [2024-06-07 16:15:41.585549] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2898039 ] 00:07:14.998 EAL: No free 2048 kB hugepages reported on node 1 00:07:14.998 [2024-06-07 16:15:41.646607] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:14.998 [2024-06-07 16:15:41.710968] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:07:15.570 16:15:42 accel -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:07:15.570 16:15:42 accel -- common/autotest_common.sh@863 -- # return 0 00:07:15.570 16:15:42 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:07:15.570 16:15:42 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:07:15.570 16:15:42 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:07:15.570 16:15:42 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:07:15.570 16:15:42 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". 
| to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:07:15.570 16:15:42 accel -- accel/accel.sh@70 -- # jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]' 00:07:15.570 16:15:42 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:07:15.570 16:15:42 accel -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:15.570 16:15:42 accel -- common/autotest_common.sh@10 -- # set +x 00:07:15.570 16:15:42 accel -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:15.570 16:15:42 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:15.570 16:15:42 accel -- accel/accel.sh@72 -- # IFS== 00:07:15.570 16:15:42 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:15.570 16:15:42 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:15.570 16:15:42 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:15.570 16:15:42 accel -- accel/accel.sh@72 -- # IFS== 00:07:15.570 16:15:42 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:15.570 16:15:42 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:15.570 16:15:42 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:15.570 16:15:42 accel -- accel/accel.sh@72 -- # IFS== 00:07:15.570 16:15:42 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:15.570 16:15:42 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:15.570 16:15:42 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:15.570 16:15:42 accel -- accel/accel.sh@72 -- # IFS== 00:07:15.570 16:15:42 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:15.570 16:15:42 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:15.570 16:15:42 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:15.570 16:15:42 accel -- accel/accel.sh@72 -- # IFS== 00:07:15.570 16:15:42 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:15.570 16:15:42 accel -- accel/accel.sh@73 -- # 
expected_opcs["$opc"]=software 00:07:15.570 16:15:42 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:15.570 16:15:42 accel -- accel/accel.sh@72 -- # IFS== 00:07:15.570 16:15:42 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:15.570 16:15:42 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:15.570 16:15:42 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:15.570 16:15:42 accel -- accel/accel.sh@72 -- # IFS== 00:07:15.570 16:15:42 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:15.570 16:15:42 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:15.570 16:15:42 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:15.570 16:15:42 accel -- accel/accel.sh@72 -- # IFS== 00:07:15.570 16:15:42 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:15.570 16:15:42 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:15.570 16:15:42 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:15.570 16:15:42 accel -- accel/accel.sh@72 -- # IFS== 00:07:15.570 16:15:42 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:15.570 16:15:42 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:15.570 16:15:42 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:15.570 16:15:42 accel -- accel/accel.sh@72 -- # IFS== 00:07:15.570 16:15:42 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:15.570 16:15:42 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:15.570 16:15:42 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:15.570 16:15:42 accel -- accel/accel.sh@72 -- # IFS== 00:07:15.570 16:15:42 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:15.831 16:15:42 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:15.831 16:15:42 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:15.831 16:15:42 accel -- accel/accel.sh@72 -- # 
IFS== 00:07:15.831 16:15:42 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:15.831 16:15:42 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:15.831 16:15:42 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:15.831 16:15:42 accel -- accel/accel.sh@72 -- # IFS== 00:07:15.831 16:15:42 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:15.831 16:15:42 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:15.831 16:15:42 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:15.831 16:15:42 accel -- accel/accel.sh@72 -- # IFS== 00:07:15.831 16:15:42 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:15.831 16:15:42 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:15.831 16:15:42 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:15.831 16:15:42 accel -- accel/accel.sh@72 -- # IFS== 00:07:15.831 16:15:42 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:15.831 16:15:42 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:15.831 16:15:42 accel -- accel/accel.sh@75 -- # killprocess 2898039 00:07:15.831 16:15:42 accel -- common/autotest_common.sh@949 -- # '[' -z 2898039 ']' 00:07:15.831 16:15:42 accel -- common/autotest_common.sh@953 -- # kill -0 2898039 00:07:15.831 16:15:42 accel -- common/autotest_common.sh@954 -- # uname 00:07:15.831 16:15:42 accel -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:07:15.831 16:15:42 accel -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2898039 00:07:15.831 16:15:42 accel -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:07:15.831 16:15:42 accel -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:07:15.831 16:15:42 accel -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2898039' 00:07:15.831 killing process with pid 2898039 00:07:15.831 16:15:42 accel -- common/autotest_common.sh@968 -- # kill 2898039 00:07:15.831 
16:15:42 accel -- common/autotest_common.sh@973 -- # wait 2898039 00:07:16.092 16:15:42 accel -- accel/accel.sh@76 -- # trap - ERR 00:07:16.092 16:15:42 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:07:16.092 16:15:42 accel -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:07:16.092 16:15:42 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:16.092 16:15:42 accel -- common/autotest_common.sh@10 -- # set +x 00:07:16.092 16:15:42 accel.accel_help -- common/autotest_common.sh@1124 -- # accel_perf -h 00:07:16.092 16:15:42 accel.accel_help -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:07:16.092 16:15:42 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:07:16.092 16:15:42 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:16.092 16:15:42 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:16.092 16:15:42 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:16.092 16:15:42 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:16.092 16:15:42 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:16.092 16:15:42 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:07:16.092 16:15:42 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 
00:07:16.092 16:15:42 accel.accel_help -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:16.092 16:15:42 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:07:16.092 16:15:42 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:07:16.092 16:15:42 accel -- common/autotest_common.sh@1100 -- # '[' 7 -le 1 ']' 00:07:16.092 16:15:42 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:16.092 16:15:42 accel -- common/autotest_common.sh@10 -- # set +x 00:07:16.092 ************************************ 00:07:16.092 START TEST accel_missing_filename 00:07:16.092 ************************************ 00:07:16.092 16:15:42 accel.accel_missing_filename -- common/autotest_common.sh@1124 -- # NOT accel_perf -t 1 -w compress 00:07:16.092 16:15:42 accel.accel_missing_filename -- common/autotest_common.sh@649 -- # local es=0 00:07:16.092 16:15:42 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # valid_exec_arg accel_perf -t 1 -w compress 00:07:16.092 16:15:42 accel.accel_missing_filename -- common/autotest_common.sh@637 -- # local arg=accel_perf 00:07:16.092 16:15:42 accel.accel_missing_filename -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:07:16.092 16:15:42 accel.accel_missing_filename -- common/autotest_common.sh@641 -- # type -t accel_perf 00:07:16.092 16:15:42 accel.accel_missing_filename -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:07:16.092 16:15:42 accel.accel_missing_filename -- common/autotest_common.sh@652 -- # accel_perf -t 1 -w compress 00:07:16.092 16:15:42 accel.accel_missing_filename -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:07:16.092 16:15:42 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:07:16.092 16:15:42 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:16.092 16:15:42 
accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:16.092 16:15:42 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:16.092 16:15:42 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:16.092 16:15:42 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:16.092 16:15:42 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:07:16.092 16:15:42 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:07:16.092 [2024-06-07 16:15:42.855988] Starting SPDK v24.09-pre git sha1 5a57befde / DPDK 24.03.0 initialization... 00:07:16.092 [2024-06-07 16:15:42.856060] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2898401 ] 00:07:16.092 EAL: No free 2048 kB hugepages reported on node 1 00:07:16.092 [2024-06-07 16:15:42.917171] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:16.354 [2024-06-07 16:15:42.983774] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:07:16.354 [2024-06-07 16:15:43.015585] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:16.354 [2024-06-07 16:15:43.052362] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:07:16.354 A filename is required. 
00:07:16.354 16:15:43 accel.accel_missing_filename -- common/autotest_common.sh@652 -- # es=234 00:07:16.354 16:15:43 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:07:16.354 16:15:43 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # es=106 00:07:16.354 16:15:43 accel.accel_missing_filename -- common/autotest_common.sh@662 -- # case "$es" in 00:07:16.354 16:15:43 accel.accel_missing_filename -- common/autotest_common.sh@669 -- # es=1 00:07:16.354 16:15:43 accel.accel_missing_filename -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:07:16.354 00:07:16.354 real 0m0.279s 00:07:16.354 user 0m0.218s 00:07:16.354 sys 0m0.100s 00:07:16.354 16:15:43 accel.accel_missing_filename -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:16.354 16:15:43 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:07:16.354 ************************************ 00:07:16.354 END TEST accel_missing_filename 00:07:16.354 ************************************ 00:07:16.354 16:15:43 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:16.354 16:15:43 accel -- common/autotest_common.sh@1100 -- # '[' 10 -le 1 ']' 00:07:16.354 16:15:43 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:16.354 16:15:43 accel -- common/autotest_common.sh@10 -- # set +x 00:07:16.354 ************************************ 00:07:16.354 START TEST accel_compress_verify 00:07:16.354 ************************************ 00:07:16.354 16:15:43 accel.accel_compress_verify -- common/autotest_common.sh@1124 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:16.354 16:15:43 accel.accel_compress_verify -- common/autotest_common.sh@649 -- # local es=0 00:07:16.354 16:15:43 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # 
valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:16.354 16:15:43 accel.accel_compress_verify -- common/autotest_common.sh@637 -- # local arg=accel_perf 00:07:16.354 16:15:43 accel.accel_compress_verify -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:07:16.354 16:15:43 accel.accel_compress_verify -- common/autotest_common.sh@641 -- # type -t accel_perf 00:07:16.354 16:15:43 accel.accel_compress_verify -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:07:16.354 16:15:43 accel.accel_compress_verify -- common/autotest_common.sh@652 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:16.354 16:15:43 accel.accel_compress_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:16.354 16:15:43 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:07:16.354 16:15:43 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:16.354 16:15:43 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:16.354 16:15:43 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:16.354 16:15:43 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:16.354 16:15:43 accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:16.354 16:15:43 accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=, 00:07:16.354 16:15:43 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:07:16.354 [2024-06-07 16:15:43.207728] Starting SPDK v24.09-pre git sha1 5a57befde / DPDK 24.03.0 initialization... 
00:07:16.354 [2024-06-07 16:15:43.207821] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2898432 ] 00:07:16.614 EAL: No free 2048 kB hugepages reported on node 1 00:07:16.615 [2024-06-07 16:15:43.270812] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:16.615 [2024-06-07 16:15:43.339565] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:07:16.615 [2024-06-07 16:15:43.371462] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:16.615 [2024-06-07 16:15:43.407952] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:07:16.615 00:07:16.615 Compression does not support the verify option, aborting. 00:07:16.615 16:15:43 accel.accel_compress_verify -- common/autotest_common.sh@652 -- # es=161 00:07:16.615 16:15:43 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:07:16.615 16:15:43 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # es=33 00:07:16.615 16:15:43 accel.accel_compress_verify -- common/autotest_common.sh@662 -- # case "$es" in 00:07:16.615 16:15:43 accel.accel_compress_verify -- common/autotest_common.sh@669 -- # es=1 00:07:16.615 16:15:43 accel.accel_compress_verify -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:07:16.615 00:07:16.615 real 0m0.285s 00:07:16.615 user 0m0.228s 00:07:16.615 sys 0m0.098s 00:07:16.615 16:15:43 accel.accel_compress_verify -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:16.615 16:15:43 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:07:16.615 ************************************ 00:07:16.615 END TEST accel_compress_verify 00:07:16.615 ************************************ 00:07:16.876 16:15:43 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:07:16.876 
16:15:43 accel -- common/autotest_common.sh@1100 -- # '[' 7 -le 1 ']' 00:07:16.876 16:15:43 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:16.876 16:15:43 accel -- common/autotest_common.sh@10 -- # set +x 00:07:16.876 ************************************ 00:07:16.876 START TEST accel_wrong_workload 00:07:16.876 ************************************ 00:07:16.876 16:15:43 accel.accel_wrong_workload -- common/autotest_common.sh@1124 -- # NOT accel_perf -t 1 -w foobar 00:07:16.876 16:15:43 accel.accel_wrong_workload -- common/autotest_common.sh@649 -- # local es=0 00:07:16.876 16:15:43 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:07:16.876 16:15:43 accel.accel_wrong_workload -- common/autotest_common.sh@637 -- # local arg=accel_perf 00:07:16.876 16:15:43 accel.accel_wrong_workload -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:07:16.876 16:15:43 accel.accel_wrong_workload -- common/autotest_common.sh@641 -- # type -t accel_perf 00:07:16.876 16:15:43 accel.accel_wrong_workload -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:07:16.876 16:15:43 accel.accel_wrong_workload -- common/autotest_common.sh@652 -- # accel_perf -t 1 -w foobar 00:07:16.876 16:15:43 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:07:16.876 16:15:43 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:07:16.876 16:15:43 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:16.876 16:15:43 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:16.876 16:15:43 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:16.876 16:15:43 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:16.876 16:15:43 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 
00:07:16.876 16:15:43 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:07:16.876 16:15:43 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:07:16.876 Unsupported workload type: foobar 00:07:16.876 [2024-06-07 16:15:43.564300] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:07:16.876 accel_perf options: 00:07:16.876 [-h help message] 00:07:16.876 [-q queue depth per core] 00:07:16.876 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:07:16.876 [-T number of threads per core 00:07:16.876 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:07:16.876 [-t time in seconds] 00:07:16.876 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:07:16.876 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:07:16.876 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:07:16.876 [-l for compress/decompress workloads, name of uncompressed input file 00:07:16.876 [-S for crc32c workload, use this seed value (default 0) 00:07:16.876 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:07:16.876 [-f for fill workload, use this BYTE value (default 255) 00:07:16.876 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:07:16.876 [-y verify result if this switch is on] 00:07:16.876 [-a tasks to allocate per core (default: same value as -q)] 00:07:16.876 Can be used to spread operations across a wider range of memory. 
00:07:16.876 16:15:43 accel.accel_wrong_workload -- common/autotest_common.sh@652 -- # es=1 00:07:16.876 16:15:43 accel.accel_wrong_workload -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:07:16.876 16:15:43 accel.accel_wrong_workload -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:07:16.876 16:15:43 accel.accel_wrong_workload -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:07:16.876 00:07:16.876 real 0m0.036s 00:07:16.876 user 0m0.017s 00:07:16.876 sys 0m0.019s 00:07:16.876 16:15:43 accel.accel_wrong_workload -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:16.876 16:15:43 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:07:16.876 ************************************ 00:07:16.876 END TEST accel_wrong_workload 00:07:16.876 ************************************ 00:07:16.876 Error: writing output failed: Broken pipe 00:07:16.876 16:15:43 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:07:16.876 16:15:43 accel -- common/autotest_common.sh@1100 -- # '[' 10 -le 1 ']' 00:07:16.876 16:15:43 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:16.876 16:15:43 accel -- common/autotest_common.sh@10 -- # set +x 00:07:16.876 ************************************ 00:07:16.876 START TEST accel_negative_buffers 00:07:16.876 ************************************ 00:07:16.876 16:15:43 accel.accel_negative_buffers -- common/autotest_common.sh@1124 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:07:16.876 16:15:43 accel.accel_negative_buffers -- common/autotest_common.sh@649 -- # local es=0 00:07:16.876 16:15:43 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:07:16.876 16:15:43 accel.accel_negative_buffers -- common/autotest_common.sh@637 -- # local arg=accel_perf 00:07:16.876 16:15:43 accel.accel_negative_buffers -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:07:16.876 16:15:43 
accel.accel_negative_buffers -- common/autotest_common.sh@641 -- # type -t accel_perf 00:07:16.876 16:15:43 accel.accel_negative_buffers -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:07:16.876 16:15:43 accel.accel_negative_buffers -- common/autotest_common.sh@652 -- # accel_perf -t 1 -w xor -y -x -1 00:07:16.876 16:15:43 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:07:16.876 16:15:43 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:07:16.876 16:15:43 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:16.876 16:15:43 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:16.876 16:15:43 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:16.876 16:15:43 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:16.876 16:15:43 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:16.876 16:15:43 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:07:16.876 16:15:43 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:07:16.876 -x option must be non-negative. 00:07:16.876 [2024-06-07 16:15:43.677270] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:07:16.876 accel_perf options: 00:07:16.876 [-h help message] 00:07:16.876 [-q queue depth per core] 00:07:16.876 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:07:16.876 [-T number of threads per core 00:07:16.876 [-o transfer size in bytes (default: 4KiB. 
For compress/decompress, 0 means the input file size)] 00:07:16.876 [-t time in seconds] 00:07:16.877 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:07:16.877 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:07:16.877 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:07:16.877 [-l for compress/decompress workloads, name of uncompressed input file 00:07:16.877 [-S for crc32c workload, use this seed value (default 0) 00:07:16.877 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:07:16.877 [-f for fill workload, use this BYTE value (default 255) 00:07:16.877 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:07:16.877 [-y verify result if this switch is on] 00:07:16.877 [-a tasks to allocate per core (default: same value as -q)] 00:07:16.877 Can be used to spread operations across a wider range of memory. 
00:07:16.877 16:15:43 accel.accel_negative_buffers -- common/autotest_common.sh@652 -- # es=1 00:07:16.877 16:15:43 accel.accel_negative_buffers -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:07:16.877 16:15:43 accel.accel_negative_buffers -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:07:16.877 16:15:43 accel.accel_negative_buffers -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:07:16.877 00:07:16.877 real 0m0.038s 00:07:16.877 user 0m0.021s 00:07:16.877 sys 0m0.017s 00:07:16.877 16:15:43 accel.accel_negative_buffers -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:16.877 16:15:43 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:07:16.877 ************************************ 00:07:16.877 END TEST accel_negative_buffers 00:07:16.877 ************************************ 00:07:16.877 Error: writing output failed: Broken pipe 00:07:16.877 16:15:43 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:07:16.877 16:15:43 accel -- common/autotest_common.sh@1100 -- # '[' 9 -le 1 ']' 00:07:16.877 16:15:43 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:16.877 16:15:43 accel -- common/autotest_common.sh@10 -- # set +x 00:07:17.138 ************************************ 00:07:17.138 START TEST accel_crc32c 00:07:17.139 ************************************ 00:07:17.139 16:15:43 accel.accel_crc32c -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w crc32c -S 32 -y 00:07:17.139 16:15:43 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:07:17.139 16:15:43 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:07:17.139 16:15:43 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:17.139 16:15:43 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:17.139 16:15:43 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:07:17.139 16:15:43 accel.accel_crc32c -- accel/accel.sh@12 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:07:17.139 16:15:43 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:07:17.139 16:15:43 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:17.139 16:15:43 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:17.139 16:15:43 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:17.139 16:15:43 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:17.139 16:15:43 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:17.139 16:15:43 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:07:17.139 16:15:43 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:07:17.139 [2024-06-07 16:15:43.784312] Starting SPDK v24.09-pre git sha1 5a57befde / DPDK 24.03.0 initialization... 00:07:17.139 [2024-06-07 16:15:43.784423] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2898514 ] 00:07:17.139 EAL: No free 2048 kB hugepages reported on node 1 00:07:17.139 [2024-06-07 16:15:43.849372] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:17.139 [2024-06-07 16:15:43.923501] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:07:17.139 16:15:43 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:17.139 16:15:43 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:17.139 16:15:43 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:17.139 16:15:43 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:17.139 16:15:43 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:17.139 16:15:43 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:17.139 16:15:43 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:17.139 16:15:43 
accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:17.139 16:15:43 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:07:17.139 16:15:43 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:17.139 16:15:43 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:17.139 16:15:43 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:17.139 16:15:43 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:17.139 16:15:43 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:17.139 16:15:43 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:17.139 16:15:43 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:17.139 16:15:43 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:17.139 16:15:43 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:17.139 16:15:43 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:17.139 16:15:43 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:17.139 16:15:43 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:07:17.139 16:15:43 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:17.139 16:15:43 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:07:17.139 16:15:43 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:17.139 16:15:43 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:17.139 16:15:43 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:07:17.139 16:15:43 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:17.139 16:15:43 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:17.139 16:15:43 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:17.139 16:15:43 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:17.139 16:15:43 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:17.139 16:15:43 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:17.139 16:15:43 accel.accel_crc32c -- 
accel/accel.sh@19 -- # read -r var val 00:07:17.139 16:15:43 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:17.139 16:15:43 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:17.139 16:15:43 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:17.139 16:15:43 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:17.139 16:15:43 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:07:17.139 16:15:43 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:17.139 16:15:43 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:07:17.139 16:15:43 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:17.139 16:15:43 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:17.139 16:15:43 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:07:17.139 16:15:43 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:17.139 16:15:43 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:17.139 16:15:43 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:17.139 16:15:43 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:07:17.139 16:15:43 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:17.139 16:15:43 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:17.139 16:15:43 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:17.139 16:15:43 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:07:17.139 16:15:43 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:17.139 16:15:43 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:17.139 16:15:43 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:17.139 16:15:43 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:07:17.139 16:15:43 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:17.139 16:15:43 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:17.139 16:15:43 accel.accel_crc32c -- accel/accel.sh@19 -- # 
read -r var val 00:07:17.139 16:15:43 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:07:17.139 16:15:43 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:17.139 16:15:43 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:17.139 16:15:43 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:17.139 16:15:43 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:17.139 16:15:43 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:17.139 16:15:43 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:17.139 16:15:43 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:17.139 16:15:43 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:17.139 16:15:43 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:17.139 16:15:43 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:17.139 16:15:43 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:18.526 16:15:45 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:18.526 16:15:45 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:18.526 16:15:45 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:18.526 16:15:45 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:18.526 16:15:45 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:18.526 16:15:45 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:18.526 16:15:45 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:18.526 16:15:45 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:18.526 16:15:45 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:18.526 16:15:45 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:18.526 16:15:45 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:18.526 16:15:45 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:18.526 16:15:45 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:18.526 16:15:45 accel.accel_crc32c -- 
accel/accel.sh@21 -- # case "$var" in 00:07:18.526 16:15:45 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:18.526 16:15:45 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:18.526 16:15:45 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:18.526 16:15:45 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:18.526 16:15:45 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:18.526 16:15:45 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:18.526 16:15:45 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:18.526 16:15:45 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:18.526 16:15:45 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:18.526 16:15:45 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:18.526 16:15:45 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:18.526 16:15:45 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:07:18.526 16:15:45 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:18.526 00:07:18.526 real 0m1.298s 00:07:18.526 user 0m1.204s 00:07:18.526 sys 0m0.105s 00:07:18.526 16:15:45 accel.accel_crc32c -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:18.526 16:15:45 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:07:18.526 ************************************ 00:07:18.526 END TEST accel_crc32c 00:07:18.526 ************************************ 00:07:18.526 16:15:45 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:07:18.526 16:15:45 accel -- common/autotest_common.sh@1100 -- # '[' 9 -le 1 ']' 00:07:18.527 16:15:45 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:18.527 16:15:45 accel -- common/autotest_common.sh@10 -- # set +x 00:07:18.527 ************************************ 00:07:18.527 START TEST accel_crc32c_C2 00:07:18.527 ************************************ 00:07:18.527 
16:15:45 accel.accel_crc32c_C2 -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w crc32c -y -C 2 00:07:18.527 16:15:45 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:07:18.527 16:15:45 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:07:18.527 16:15:45 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:18.527 16:15:45 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:18.527 16:15:45 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:07:18.527 16:15:45 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:07:18.527 16:15:45 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:07:18.527 16:15:45 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:18.527 16:15:45 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:18.527 16:15:45 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:18.527 16:15:45 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:18.527 16:15:45 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:18.527 16:15:45 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:07:18.527 16:15:45 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:07:18.527 [2024-06-07 16:15:45.153356] Starting SPDK v24.09-pre git sha1 5a57befde / DPDK 24.03.0 initialization... 
00:07:18.527 [2024-06-07 16:15:45.153462] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2898850 ] 00:07:18.527 EAL: No free 2048 kB hugepages reported on node 1 00:07:18.527 [2024-06-07 16:15:45.216816] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:18.527 [2024-06-07 16:15:45.280091] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:07:18.527 16:15:45 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:18.527 16:15:45 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:18.527 16:15:45 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:18.527 16:15:45 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:18.527 16:15:45 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:18.527 16:15:45 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:18.527 16:15:45 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:18.527 16:15:45 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:18.527 16:15:45 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:07:18.527 16:15:45 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:18.527 16:15:45 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:18.527 16:15:45 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:18.527 16:15:45 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:18.527 16:15:45 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:18.527 16:15:45 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:18.527 16:15:45 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:18.527 16:15:45 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:18.527 16:15:45 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" 
in 00:07:18.527 16:15:45 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:18.527 16:15:45 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:18.527 16:15:45 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:07:18.527 16:15:45 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:18.527 16:15:45 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:07:18.527 16:15:45 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:18.527 16:15:45 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:18.527 16:15:45 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:07:18.527 16:15:45 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:18.527 16:15:45 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:18.527 16:15:45 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:18.527 16:15:45 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:18.527 16:15:45 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:18.527 16:15:45 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:18.527 16:15:45 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:18.527 16:15:45 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:18.527 16:15:45 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:18.527 16:15:45 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:18.527 16:15:45 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:18.527 16:15:45 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:07:18.527 16:15:45 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:18.527 16:15:45 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:07:18.527 16:15:45 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:18.527 16:15:45 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:18.527 
16:15:45 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:18.527 16:15:45 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:18.527 16:15:45 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:18.527 16:15:45 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:18.527 16:15:45 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:18.527 16:15:45 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:18.527 16:15:45 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:18.527 16:15:45 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:18.527 16:15:45 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:07:18.527 16:15:45 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:18.527 16:15:45 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:18.527 16:15:45 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:18.527 16:15:45 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:18.527 16:15:45 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:18.527 16:15:45 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:18.527 16:15:45 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:18.527 16:15:45 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:07:18.527 16:15:45 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:18.527 16:15:45 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:18.527 16:15:45 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:18.527 16:15:45 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:18.527 16:15:45 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:18.527 16:15:45 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:18.527 16:15:45 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:18.527 16:15:45 accel.accel_crc32c_C2 -- 
accel/accel.sh@20 -- # val= 00:07:18.527 16:15:45 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:18.527 16:15:45 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:18.527 16:15:45 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:19.912 16:15:46 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:19.912 16:15:46 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:19.912 16:15:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:19.912 16:15:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:19.912 16:15:46 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:19.912 16:15:46 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:19.912 16:15:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:19.912 16:15:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:19.912 16:15:46 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:19.912 16:15:46 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:19.912 16:15:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:19.912 16:15:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:19.912 16:15:46 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:19.912 16:15:46 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:19.912 16:15:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:19.912 16:15:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:19.912 16:15:46 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:19.912 16:15:46 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:19.912 16:15:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:19.912 16:15:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:19.912 16:15:46 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:19.912 16:15:46 
accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:19.912 16:15:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:19.912 16:15:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:19.912 16:15:46 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:19.912 16:15:46 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:07:19.912 16:15:46 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:19.912 00:07:19.912 real 0m1.287s 00:07:19.912 user 0m1.202s 00:07:19.912 sys 0m0.098s 00:07:19.912 16:15:46 accel.accel_crc32c_C2 -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:19.912 16:15:46 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:07:19.912 ************************************ 00:07:19.912 END TEST accel_crc32c_C2 00:07:19.912 ************************************ 00:07:19.912 16:15:46 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:07:19.912 16:15:46 accel -- common/autotest_common.sh@1100 -- # '[' 7 -le 1 ']' 00:07:19.912 16:15:46 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:19.912 16:15:46 accel -- common/autotest_common.sh@10 -- # set +x 00:07:19.912 ************************************ 00:07:19.912 START TEST accel_copy 00:07:19.912 ************************************ 00:07:19.912 16:15:46 accel.accel_copy -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w copy -y 00:07:19.912 16:15:46 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:07:19.912 16:15:46 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 00:07:19.912 16:15:46 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:19.912 16:15:46 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:19.912 16:15:46 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:07:19.912 16:15:46 accel.accel_copy -- accel/accel.sh@12 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:07:19.912 16:15:46 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:07:19.912 16:15:46 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:19.912 16:15:46 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:19.912 16:15:46 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:19.912 16:15:46 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:19.912 16:15:46 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:19.912 16:15:46 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:07:19.912 16:15:46 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:07:19.912 [2024-06-07 16:15:46.509298] Starting SPDK v24.09-pre git sha1 5a57befde / DPDK 24.03.0 initialization... 00:07:19.912 [2024-06-07 16:15:46.509356] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2899197 ] 00:07:19.912 EAL: No free 2048 kB hugepages reported on node 1 00:07:19.912 [2024-06-07 16:15:46.569374] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:19.912 [2024-06-07 16:15:46.633508] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.912 16:15:46 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:19.912 16:15:46 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:19.912 16:15:46 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:19.912 16:15:46 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:19.912 16:15:46 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:19.912 16:15:46 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:19.912 16:15:46 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:19.912 16:15:46 accel.accel_copy -- accel/accel.sh@19 -- # read 
-r var val 00:07:19.912 16:15:46 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:07:19.912 16:15:46 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:19.912 16:15:46 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:19.912 16:15:46 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:19.912 16:15:46 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:19.912 16:15:46 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:19.912 16:15:46 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:19.912 16:15:46 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:19.912 16:15:46 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:19.912 16:15:46 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:19.912 16:15:46 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:19.912 16:15:46 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:19.912 16:15:46 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:07:19.912 16:15:46 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:19.913 16:15:46 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:07:19.913 16:15:46 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:19.913 16:15:46 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:19.913 16:15:46 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:19.913 16:15:46 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:19.913 16:15:46 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:19.913 16:15:46 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:19.913 16:15:46 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:19.913 16:15:46 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:19.913 16:15:46 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:19.913 16:15:46 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:19.913 16:15:46 accel.accel_copy -- accel/accel.sh@20 -- # val=software 
00:07:19.913 16:15:46 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:19.913 16:15:46 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:07:19.913 16:15:46 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:19.913 16:15:46 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:19.913 16:15:46 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:07:19.913 16:15:46 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:19.913 16:15:46 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:19.913 16:15:46 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:19.913 16:15:46 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:07:19.913 16:15:46 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:19.913 16:15:46 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:19.913 16:15:46 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:19.913 16:15:46 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:07:19.913 16:15:46 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:19.913 16:15:46 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:19.913 16:15:46 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:19.913 16:15:46 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:07:19.913 16:15:46 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:19.913 16:15:46 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:19.913 16:15:46 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:19.913 16:15:46 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:07:19.913 16:15:46 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:19.913 16:15:46 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:19.913 16:15:46 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:19.913 16:15:46 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:19.913 16:15:46 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 
00:07:19.913 16:15:46 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:19.913 16:15:46 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:19.913 16:15:46 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:19.913 16:15:46 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:19.913 16:15:46 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:19.913 16:15:46 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:21.320 16:15:47 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:21.320 16:15:47 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:21.320 16:15:47 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:21.320 16:15:47 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:21.320 16:15:47 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:21.320 16:15:47 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:21.320 16:15:47 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:21.320 16:15:47 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:21.320 16:15:47 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:21.320 16:15:47 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:21.320 16:15:47 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:21.320 16:15:47 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:21.320 16:15:47 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:21.320 16:15:47 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:21.320 16:15:47 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:21.320 16:15:47 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:21.320 16:15:47 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:21.320 16:15:47 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:21.320 16:15:47 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:21.320 16:15:47 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:21.320 16:15:47 
accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:21.320 16:15:47 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:21.320 16:15:47 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:21.320 16:15:47 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:21.320 16:15:47 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:21.320 16:15:47 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:07:21.320 16:15:47 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:21.320 00:07:21.320 real 0m1.281s 00:07:21.320 user 0m1.198s 00:07:21.320 sys 0m0.093s 00:07:21.320 16:15:47 accel.accel_copy -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:21.320 16:15:47 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:07:21.320 ************************************ 00:07:21.320 END TEST accel_copy 00:07:21.321 ************************************ 00:07:21.321 16:15:47 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:21.321 16:15:47 accel -- common/autotest_common.sh@1100 -- # '[' 13 -le 1 ']' 00:07:21.321 16:15:47 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:21.321 16:15:47 accel -- common/autotest_common.sh@10 -- # set +x 00:07:21.321 ************************************ 00:07:21.321 START TEST accel_fill 00:07:21.321 ************************************ 00:07:21.321 16:15:47 accel.accel_fill -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:21.321 16:15:47 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:07:21.321 16:15:47 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:07:21.321 16:15:47 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:21.321 16:15:47 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:21.321 16:15:47 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 
00:07:21.321 16:15:47 accel.accel_fill -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:21.321 16:15:47 accel.accel_fill -- accel/accel.sh@12 -- # build_accel_config 00:07:21.321 16:15:47 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:21.321 16:15:47 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:21.321 16:15:47 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:21.321 16:15:47 accel.accel_fill -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:21.321 16:15:47 accel.accel_fill -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:21.321 16:15:47 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:07:21.321 16:15:47 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 00:07:21.321 [2024-06-07 16:15:47.865969] Starting SPDK v24.09-pre git sha1 5a57befde / DPDK 24.03.0 initialization... 00:07:21.321 [2024-06-07 16:15:47.866068] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2899548 ] 00:07:21.321 EAL: No free 2048 kB hugepages reported on node 1 00:07:21.321 [2024-06-07 16:15:47.932067] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:21.321 [2024-06-07 16:15:48.002086] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:07:21.321 16:15:48 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:21.321 16:15:48 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:21.321 16:15:48 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:21.321 16:15:48 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:21.321 16:15:48 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:21.321 16:15:48 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:21.321 16:15:48 accel.accel_fill -- 
accel/accel.sh@19 -- # IFS=: 00:07:21.321 16:15:48 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:21.321 16:15:48 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:07:21.321 16:15:48 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:21.321 16:15:48 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:21.321 16:15:48 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:21.321 16:15:48 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:21.321 16:15:48 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:21.321 16:15:48 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:21.321 16:15:48 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:21.321 16:15:48 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:21.321 16:15:48 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:21.321 16:15:48 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:21.321 16:15:48 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:21.321 16:15:48 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:07:21.321 16:15:48 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:21.321 16:15:48 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:07:21.321 16:15:48 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:21.321 16:15:48 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:21.321 16:15:48 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:07:21.321 16:15:48 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:21.321 16:15:48 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:21.321 16:15:48 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:21.321 16:15:48 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:21.321 16:15:48 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:21.321 16:15:48 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:21.321 16:15:48 accel.accel_fill -- 
accel/accel.sh@19 -- # read -r var val 00:07:21.321 16:15:48 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:21.321 16:15:48 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:21.321 16:15:48 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:21.321 16:15:48 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:21.321 16:15:48 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:07:21.321 16:15:48 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:21.321 16:15:48 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:07:21.321 16:15:48 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:21.321 16:15:48 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:21.321 16:15:48 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:07:21.321 16:15:48 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:21.321 16:15:48 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:21.321 16:15:48 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:21.321 16:15:48 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:07:21.321 16:15:48 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:21.321 16:15:48 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:21.321 16:15:48 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:21.321 16:15:48 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:07:21.321 16:15:48 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:21.321 16:15:48 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:21.321 16:15:48 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:21.321 16:15:48 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:07:21.321 16:15:48 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:21.321 16:15:48 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:21.321 16:15:48 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:21.321 16:15:48 
accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:07:21.321 16:15:48 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:21.321 16:15:48 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:21.321 16:15:48 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:21.321 16:15:48 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:21.321 16:15:48 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:21.321 16:15:48 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:21.321 16:15:48 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:21.321 16:15:48 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:21.321 16:15:48 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:21.321 16:15:48 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:21.321 16:15:48 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:22.707 16:15:49 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:22.707 16:15:49 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:22.707 16:15:49 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:22.707 16:15:49 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:22.707 16:15:49 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:22.707 16:15:49 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:22.707 16:15:49 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:22.707 16:15:49 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:22.707 16:15:49 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:22.707 16:15:49 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:22.707 16:15:49 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:22.707 16:15:49 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:22.707 16:15:49 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:22.707 16:15:49 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:22.707 16:15:49 accel.accel_fill -- 
accel/accel.sh@19 -- # IFS=: 00:07:22.707 16:15:49 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:22.707 16:15:49 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:22.707 16:15:49 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:22.707 16:15:49 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:22.707 16:15:49 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:22.707 16:15:49 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:22.707 16:15:49 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:22.707 16:15:49 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:22.707 16:15:49 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:22.707 16:15:49 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:22.707 16:15:49 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:07:22.707 16:15:49 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:22.707 00:07:22.707 real 0m1.299s 00:07:22.707 user 0m1.203s 00:07:22.707 sys 0m0.107s 00:07:22.707 16:15:49 accel.accel_fill -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:22.707 16:15:49 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:07:22.707 ************************************ 00:07:22.707 END TEST accel_fill 00:07:22.707 ************************************ 00:07:22.707 16:15:49 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:07:22.707 16:15:49 accel -- common/autotest_common.sh@1100 -- # '[' 7 -le 1 ']' 00:07:22.707 16:15:49 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:22.707 16:15:49 accel -- common/autotest_common.sh@10 -- # set +x 00:07:22.707 ************************************ 00:07:22.707 START TEST accel_copy_crc32c 00:07:22.707 ************************************ 00:07:22.707 16:15:49 accel.accel_copy_crc32c -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w copy_crc32c -y 
00:07:22.707 16:15:49 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:07:22.707 16:15:49 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:07:22.707 16:15:49 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:22.707 16:15:49 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:22.707 16:15:49 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:07:22.707 16:15:49 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:07:22.707 16:15:49 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:07:22.707 16:15:49 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:22.707 16:15:49 accel.accel_copy_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:22.707 16:15:49 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:22.707 16:15:49 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:22.707 16:15:49 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:22.707 16:15:49 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:07:22.707 16:15:49 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 00:07:22.707 [2024-06-07 16:15:49.230820] Starting SPDK v24.09-pre git sha1 5a57befde / DPDK 24.03.0 initialization... 
00:07:22.707 [2024-06-07 16:15:49.230911] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2899772 ] 00:07:22.707 EAL: No free 2048 kB hugepages reported on node 1 00:07:22.707 [2024-06-07 16:15:49.301423] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:22.707 [2024-06-07 16:15:49.367486] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:07:22.707 16:15:49 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:22.707 16:15:49 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:22.707 16:15:49 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:22.707 16:15:49 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:22.707 16:15:49 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:22.707 16:15:49 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:22.707 16:15:49 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:22.707 16:15:49 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:22.707 16:15:49 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:07:22.707 16:15:49 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:22.707 16:15:49 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:22.707 16:15:49 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:22.707 16:15:49 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:22.707 16:15:49 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:22.707 16:15:49 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:22.707 16:15:49 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:22.707 16:15:49 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:22.707 16:15:49 accel.accel_copy_crc32c -- 
accel/accel.sh@21 -- # case "$var" in 00:07:22.707 16:15:49 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:22.707 16:15:49 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:22.707 16:15:49 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:07:22.707 16:15:49 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:22.707 16:15:49 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:07:22.707 16:15:49 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:22.707 16:15:49 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:22.707 16:15:49 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:07:22.707 16:15:49 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:22.707 16:15:49 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:22.707 16:15:49 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:22.707 16:15:49 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:22.707 16:15:49 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:22.707 16:15:49 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:22.707 16:15:49 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:22.707 16:15:49 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:22.707 16:15:49 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:22.707 16:15:49 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:22.707 16:15:49 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:22.707 16:15:49 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:22.707 16:15:49 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:22.707 16:15:49 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:22.707 16:15:49 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:22.707 
16:15:49 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:07:22.707 16:15:49 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:22.707 16:15:49 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:07:22.707 16:15:49 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:22.707 16:15:49 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:22.707 16:15:49 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:07:22.707 16:15:49 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:22.707 16:15:49 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:22.707 16:15:49 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:22.707 16:15:49 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:07:22.707 16:15:49 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:22.707 16:15:49 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:22.707 16:15:49 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:22.707 16:15:49 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:07:22.707 16:15:49 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:22.707 16:15:49 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:22.707 16:15:49 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:22.707 16:15:49 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:07:22.707 16:15:49 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:22.707 16:15:49 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:22.707 16:15:49 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:22.707 16:15:49 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:07:22.707 16:15:49 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:22.707 16:15:49 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 
00:07:22.707 16:15:49 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:22.707 16:15:49 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:22.707 16:15:49 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:22.707 16:15:49 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:22.707 16:15:49 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:22.707 16:15:49 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:22.708 16:15:49 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:22.708 16:15:49 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:22.708 16:15:49 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:23.649 16:15:50 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:23.649 16:15:50 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:23.649 16:15:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:23.649 16:15:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:23.649 16:15:50 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:23.649 16:15:50 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:23.649 16:15:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:23.649 16:15:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:23.649 16:15:50 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:23.649 16:15:50 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:23.649 16:15:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:23.649 16:15:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:23.649 16:15:50 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:23.649 16:15:50 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:23.649 16:15:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:23.649 
16:15:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:23.649 16:15:50 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:23.649 16:15:50 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:23.649 16:15:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:23.649 16:15:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:23.649 16:15:50 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:23.649 16:15:50 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:23.649 16:15:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:23.649 16:15:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:23.649 16:15:50 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:23.649 16:15:50 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:07:23.649 16:15:50 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:23.649 00:07:23.649 real 0m1.295s 00:07:23.649 user 0m1.208s 00:07:23.649 sys 0m0.098s 00:07:23.649 16:15:50 accel.accel_copy_crc32c -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:23.649 16:15:50 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:07:23.649 ************************************ 00:07:23.649 END TEST accel_copy_crc32c 00:07:23.649 ************************************ 00:07:23.921 16:15:50 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:07:23.921 16:15:50 accel -- common/autotest_common.sh@1100 -- # '[' 9 -le 1 ']' 00:07:23.921 16:15:50 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:23.921 16:15:50 accel -- common/autotest_common.sh@10 -- # set +x 00:07:23.921 ************************************ 00:07:23.921 START TEST accel_copy_crc32c_C2 00:07:23.921 ************************************ 00:07:23.921 16:15:50 
accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:07:23.921 16:15:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:07:23.921 16:15:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:07:23.921 16:15:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:23.921 16:15:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:23.921 16:15:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:07:23.921 16:15:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:07:23.921 16:15:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:07:23.921 16:15:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:23.921 16:15:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:23.921 16:15:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:23.921 16:15:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:23.921 16:15:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:23.921 16:15:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:07:23.921 16:15:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:07:23.921 [2024-06-07 16:15:50.596365] Starting SPDK v24.09-pre git sha1 5a57befde / DPDK 24.03.0 initialization... 
00:07:23.921 [2024-06-07 16:15:50.596443] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2899959 ] 00:07:23.921 EAL: No free 2048 kB hugepages reported on node 1 00:07:23.921 [2024-06-07 16:15:50.658547] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:23.921 [2024-06-07 16:15:50.727741] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:07:23.921 16:15:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:23.921 16:15:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:23.921 16:15:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:23.921 16:15:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:23.921 16:15:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:23.921 16:15:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:23.921 16:15:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:23.921 16:15:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:23.921 16:15:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:07:23.921 16:15:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:23.921 16:15:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:23.921 16:15:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:23.921 16:15:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:23.921 16:15:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:23.921 16:15:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:23.921 16:15:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:23.921 16:15:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # 
val= 00:07:23.921 16:15:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:23.922 16:15:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:23.922 16:15:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:23.922 16:15:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:07:23.922 16:15:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:23.922 16:15:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:07:23.922 16:15:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:23.922 16:15:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:23.922 16:15:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:07:23.922 16:15:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:23.922 16:15:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:23.922 16:15:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:23.922 16:15:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:23.922 16:15:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:23.922 16:15:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:23.922 16:15:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:23.922 16:15:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:07:23.922 16:15:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:23.922 16:15:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:23.922 16:15:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:23.922 16:15:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:23.922 16:15:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:23.922 16:15:50 accel.accel_copy_crc32c_C2 -- 
accel/accel.sh@19 -- # IFS=: 00:07:23.922 16:15:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:23.922 16:15:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:07:23.922 16:15:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:23.922 16:15:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:07:23.922 16:15:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:23.922 16:15:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:23.922 16:15:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:23.922 16:15:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:23.922 16:15:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:23.922 16:15:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:23.922 16:15:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:23.922 16:15:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:23.922 16:15:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:23.922 16:15:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:23.922 16:15:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:07:23.922 16:15:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:23.922 16:15:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:23.922 16:15:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:23.922 16:15:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:23.922 16:15:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:23.922 16:15:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:23.922 16:15:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:23.922 16:15:50 accel.accel_copy_crc32c_C2 
-- accel/accel.sh@20 -- # val=Yes 00:07:23.922 16:15:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:23.922 16:15:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:23.922 16:15:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:24.188 16:15:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:24.188 16:15:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:24.188 16:15:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:24.188 16:15:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:24.188 16:15:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:24.188 16:15:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:24.188 16:15:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:24.188 16:15:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:25.129 16:15:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:25.129 16:15:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:25.129 16:15:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:25.129 16:15:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:25.129 16:15:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:25.129 16:15:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:25.129 16:15:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:25.129 16:15:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:25.129 16:15:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:25.129 16:15:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:25.129 16:15:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:25.129 16:15:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r 
var val 00:07:25.129 16:15:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:25.129 16:15:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:25.129 16:15:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:25.129 16:15:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:25.129 16:15:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:25.129 16:15:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:25.129 16:15:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:25.129 16:15:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:25.129 16:15:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:25.129 16:15:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:25.129 16:15:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:25.129 16:15:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:25.129 16:15:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:25.129 16:15:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:07:25.129 16:15:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:25.129 00:07:25.129 real 0m1.289s 00:07:25.129 user 0m1.201s 00:07:25.129 sys 0m0.101s 00:07:25.129 16:15:51 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:25.129 16:15:51 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:07:25.129 ************************************ 00:07:25.129 END TEST accel_copy_crc32c_C2 00:07:25.129 ************************************ 00:07:25.129 16:15:51 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:07:25.129 16:15:51 accel -- common/autotest_common.sh@1100 -- # '[' 7 -le 1 ']' 00:07:25.129 16:15:51 accel -- 
common/autotest_common.sh@1106 -- # xtrace_disable 00:07:25.129 16:15:51 accel -- common/autotest_common.sh@10 -- # set +x 00:07:25.129 ************************************ 00:07:25.129 START TEST accel_dualcast 00:07:25.129 ************************************ 00:07:25.129 16:15:51 accel.accel_dualcast -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w dualcast -y 00:07:25.129 16:15:51 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:07:25.129 16:15:51 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:07:25.129 16:15:51 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:25.129 16:15:51 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:25.129 16:15:51 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:07:25.129 16:15:51 accel.accel_dualcast -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:07:25.129 16:15:51 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:07:25.129 16:15:51 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:25.129 16:15:51 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:25.129 16:15:51 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:25.129 16:15:51 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:25.129 16:15:51 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:25.129 16:15:51 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:07:25.129 16:15:51 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 00:07:25.129 [2024-06-07 16:15:51.954411] Starting SPDK v24.09-pre git sha1 5a57befde / DPDK 24.03.0 initialization... 
00:07:25.130 [2024-06-07 16:15:51.954487] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2900285 ] 00:07:25.130 EAL: No free 2048 kB hugepages reported on node 1 00:07:25.391 [2024-06-07 16:15:52.014495] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:25.391 [2024-06-07 16:15:52.077481] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:07:25.391 16:15:52 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:25.391 16:15:52 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:25.391 16:15:52 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:25.391 16:15:52 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:25.391 16:15:52 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:25.391 16:15:52 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:25.391 16:15:52 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:25.391 16:15:52 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:25.391 16:15:52 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:07:25.391 16:15:52 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:25.391 16:15:52 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:25.391 16:15:52 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:25.391 16:15:52 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:25.391 16:15:52 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:25.391 16:15:52 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:25.391 16:15:52 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:25.391 16:15:52 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:25.391 16:15:52 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:25.391 
16:15:52 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:25.391 16:15:52 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:25.391 16:15:52 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:07:25.391 16:15:52 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:25.391 16:15:52 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:07:25.391 16:15:52 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:25.391 16:15:52 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:25.391 16:15:52 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:25.391 16:15:52 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:25.391 16:15:52 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:25.391 16:15:52 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:25.391 16:15:52 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:25.391 16:15:52 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:25.391 16:15:52 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:25.391 16:15:52 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:25.391 16:15:52 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:07:25.391 16:15:52 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:25.391 16:15:52 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:07:25.391 16:15:52 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:25.391 16:15:52 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:25.391 16:15:52 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:07:25.391 16:15:52 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:25.391 16:15:52 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:25.391 16:15:52 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:25.391 16:15:52 accel.accel_dualcast -- 
accel/accel.sh@20 -- # val=32 00:07:25.391 16:15:52 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:25.391 16:15:52 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:25.391 16:15:52 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:25.391 16:15:52 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:07:25.391 16:15:52 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:25.391 16:15:52 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:25.391 16:15:52 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:25.391 16:15:52 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:07:25.391 16:15:52 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:25.391 16:15:52 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:25.391 16:15:52 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:25.391 16:15:52 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:07:25.391 16:15:52 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:25.391 16:15:52 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:25.391 16:15:52 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:25.391 16:15:52 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:25.391 16:15:52 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:25.391 16:15:52 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:25.391 16:15:52 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:25.391 16:15:52 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:25.391 16:15:52 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:25.391 16:15:52 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:25.391 16:15:52 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:26.776 16:15:53 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:26.776 16:15:53 accel.accel_dualcast -- 
accel/accel.sh@21 -- # case "$var" in 00:07:26.776 16:15:53 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:26.776 16:15:53 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:26.776 16:15:53 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:26.776 16:15:53 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:26.776 16:15:53 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:26.776 16:15:53 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:26.776 16:15:53 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:26.776 16:15:53 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:26.776 16:15:53 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:26.776 16:15:53 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:26.776 16:15:53 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:26.776 16:15:53 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:26.776 16:15:53 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:26.776 16:15:53 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:26.776 16:15:53 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:26.776 16:15:53 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:26.776 16:15:53 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:26.776 16:15:53 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:26.776 16:15:53 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:26.776 16:15:53 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:26.776 16:15:53 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:26.776 16:15:53 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:26.776 16:15:53 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:26.776 16:15:53 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:07:26.776 16:15:53 
accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:26.776 00:07:26.776 real 0m1.279s 00:07:26.776 user 0m1.190s 00:07:26.776 sys 0m0.099s 00:07:26.776 16:15:53 accel.accel_dualcast -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:26.776 16:15:53 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:07:26.776 ************************************ 00:07:26.776 END TEST accel_dualcast 00:07:26.776 ************************************ 00:07:26.776 16:15:53 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:07:26.776 16:15:53 accel -- common/autotest_common.sh@1100 -- # '[' 7 -le 1 ']' 00:07:26.776 16:15:53 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:26.776 16:15:53 accel -- common/autotest_common.sh@10 -- # set +x 00:07:26.776 ************************************ 00:07:26.776 START TEST accel_compare 00:07:26.776 ************************************ 00:07:26.776 16:15:53 accel.accel_compare -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w compare -y 00:07:26.776 16:15:53 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:07:26.776 16:15:53 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:07:26.776 16:15:53 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:26.776 16:15:53 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:26.776 16:15:53 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:07:26.776 16:15:53 accel.accel_compare -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:07:26.776 16:15:53 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:07:26.776 16:15:53 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:26.776 16:15:53 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:26.776 16:15:53 accel.accel_compare -- 
accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:26.776 16:15:53 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:26.776 16:15:53 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:26.776 16:15:53 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:07:26.776 16:15:53 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:07:26.776 [2024-06-07 16:15:53.305979] Starting SPDK v24.09-pre git sha1 5a57befde / DPDK 24.03.0 initialization... 00:07:26.776 [2024-06-07 16:15:53.306070] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2900642 ] 00:07:26.776 EAL: No free 2048 kB hugepages reported on node 1 00:07:26.776 [2024-06-07 16:15:53.367460] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:26.776 [2024-06-07 16:15:53.434372] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:07:26.776 16:15:53 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:26.776 16:15:53 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:26.776 16:15:53 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:26.776 16:15:53 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:26.776 16:15:53 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:26.776 16:15:53 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:26.776 16:15:53 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:26.776 16:15:53 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:26.776 16:15:53 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:07:26.776 16:15:53 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:26.776 16:15:53 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:26.776 16:15:53 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:26.776 16:15:53 
accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:26.776 16:15:53 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:26.776 16:15:53 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:26.776 16:15:53 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:26.776 16:15:53 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:26.776 16:15:53 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:26.776 16:15:53 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:26.776 16:15:53 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:26.776 16:15:53 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:07:26.776 16:15:53 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:26.776 16:15:53 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:07:26.777 16:15:53 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:26.777 16:15:53 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:26.777 16:15:53 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:26.777 16:15:53 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:26.777 16:15:53 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:26.777 16:15:53 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:26.777 16:15:53 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:26.777 16:15:53 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:26.777 16:15:53 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:26.777 16:15:53 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:26.777 16:15:53 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:07:26.777 16:15:53 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:26.777 16:15:53 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:07:26.777 16:15:53 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:26.777 
16:15:53 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:26.777 16:15:53 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:07:26.777 16:15:53 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:26.777 16:15:53 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:26.777 16:15:53 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:26.777 16:15:53 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:07:26.777 16:15:53 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:26.777 16:15:53 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:26.777 16:15:53 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:26.777 16:15:53 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:07:26.777 16:15:53 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:26.777 16:15:53 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:26.777 16:15:53 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:26.777 16:15:53 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:07:26.777 16:15:53 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:26.777 16:15:53 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:26.777 16:15:53 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:26.777 16:15:53 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:07:26.777 16:15:53 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:26.777 16:15:53 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:26.777 16:15:53 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:26.777 16:15:53 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:26.777 16:15:53 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:26.777 16:15:53 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:26.777 16:15:53 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:26.777 16:15:53 
accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:26.777 16:15:53 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:26.777 16:15:53 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:26.777 16:15:53 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:27.722 16:15:54 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:27.722 16:15:54 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:27.722 16:15:54 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:27.722 16:15:54 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:27.722 16:15:54 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:27.722 16:15:54 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:27.722 16:15:54 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:27.722 16:15:54 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:27.722 16:15:54 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:27.722 16:15:54 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:27.722 16:15:54 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:27.722 16:15:54 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:27.722 16:15:54 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:27.722 16:15:54 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:27.722 16:15:54 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:27.722 16:15:54 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:27.722 16:15:54 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:27.722 16:15:54 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:27.722 16:15:54 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:27.722 16:15:54 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:27.722 16:15:54 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:27.722 16:15:54 accel.accel_compare -- accel/accel.sh@21 
-- # case "$var" in 00:07:27.722 16:15:54 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:27.722 16:15:54 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:27.722 16:15:54 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:27.722 16:15:54 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:07:27.722 16:15:54 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:27.722 00:07:27.722 real 0m1.286s 00:07:27.722 user 0m1.195s 00:07:27.722 sys 0m0.102s 00:07:27.722 16:15:54 accel.accel_compare -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:27.722 16:15:54 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:07:27.722 ************************************ 00:07:27.722 END TEST accel_compare 00:07:27.722 ************************************ 00:07:28.021 16:15:54 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:07:28.021 16:15:54 accel -- common/autotest_common.sh@1100 -- # '[' 7 -le 1 ']' 00:07:28.021 16:15:54 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:28.021 16:15:54 accel -- common/autotest_common.sh@10 -- # set +x 00:07:28.021 ************************************ 00:07:28.021 START TEST accel_xor 00:07:28.021 ************************************ 00:07:28.021 16:15:54 accel.accel_xor -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w xor -y 00:07:28.021 16:15:54 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:07:28.021 16:15:54 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:07:28.021 16:15:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:28.021 16:15:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:28.021 16:15:54 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:07:28.021 16:15:54 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 
-w xor -y 00:07:28.021 16:15:54 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:07:28.021 16:15:54 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:28.021 16:15:54 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:28.021 16:15:54 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:28.021 16:15:54 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:28.021 16:15:54 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:28.021 16:15:54 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:07:28.021 16:15:54 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:07:28.021 [2024-06-07 16:15:54.659259] Starting SPDK v24.09-pre git sha1 5a57befde / DPDK 24.03.0 initialization... 00:07:28.021 [2024-06-07 16:15:54.659347] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2900990 ] 00:07:28.021 EAL: No free 2048 kB hugepages reported on node 1 00:07:28.021 [2024-06-07 16:15:54.719059] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:28.021 [2024-06-07 16:15:54.782202] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:07:28.021 16:15:54 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:28.021 16:15:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:28.021 16:15:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:28.021 16:15:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:28.021 16:15:54 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:28.021 16:15:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:28.021 16:15:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:28.021 16:15:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:28.021 16:15:54 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:07:28.021 16:15:54 
accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:28.021 16:15:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:28.021 16:15:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:28.021 16:15:54 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:28.021 16:15:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:28.021 16:15:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:28.021 16:15:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:28.021 16:15:54 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:28.021 16:15:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:28.021 16:15:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:28.021 16:15:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:28.021 16:15:54 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:07:28.021 16:15:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:28.021 16:15:54 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:07:28.021 16:15:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:28.021 16:15:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:28.021 16:15:54 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:07:28.021 16:15:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:28.021 16:15:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:28.021 16:15:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:28.021 16:15:54 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:28.021 16:15:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:28.021 16:15:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:28.021 16:15:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:28.021 16:15:54 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:28.021 16:15:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:28.021 16:15:54 accel.accel_xor -- accel/accel.sh@19 
-- # IFS=: 00:07:28.021 16:15:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:28.021 16:15:54 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:07:28.021 16:15:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:28.021 16:15:54 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 00:07:28.021 16:15:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:28.021 16:15:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:28.021 16:15:54 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:28.021 16:15:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:28.021 16:15:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:28.021 16:15:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:28.021 16:15:54 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:28.021 16:15:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:28.021 16:15:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:28.021 16:15:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:28.021 16:15:54 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:07:28.021 16:15:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:28.021 16:15:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:28.021 16:15:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:28.021 16:15:54 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:07:28.021 16:15:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:28.021 16:15:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:28.021 16:15:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:28.021 16:15:54 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:07:28.021 16:15:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:28.021 16:15:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:28.021 16:15:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 
00:07:28.021 16:15:54 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:28.021 16:15:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:28.021 16:15:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:28.021 16:15:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:28.021 16:15:54 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:28.021 16:15:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:28.021 16:15:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:28.021 16:15:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:29.458 16:15:55 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:29.459 16:15:55 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:29.459 16:15:55 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:29.459 16:15:55 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:29.459 16:15:55 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:29.459 16:15:55 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:29.459 16:15:55 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:29.459 16:15:55 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:29.459 16:15:55 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:29.459 16:15:55 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:29.459 16:15:55 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:29.459 16:15:55 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:29.459 16:15:55 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:29.459 16:15:55 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:29.459 16:15:55 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:29.459 16:15:55 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:29.459 16:15:55 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:29.459 16:15:55 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:29.459 16:15:55 accel.accel_xor -- accel/accel.sh@19 -- # 
IFS=: 00:07:29.459 16:15:55 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:29.459 16:15:55 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:29.459 16:15:55 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:29.459 16:15:55 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:29.459 16:15:55 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:29.459 16:15:55 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:29.459 16:15:55 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:07:29.459 16:15:55 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:29.459 00:07:29.459 real 0m1.281s 00:07:29.459 user 0m1.199s 00:07:29.459 sys 0m0.093s 00:07:29.459 16:15:55 accel.accel_xor -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:29.459 16:15:55 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:07:29.459 ************************************ 00:07:29.459 END TEST accel_xor 00:07:29.459 ************************************ 00:07:29.459 16:15:55 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:07:29.459 16:15:55 accel -- common/autotest_common.sh@1100 -- # '[' 9 -le 1 ']' 00:07:29.459 16:15:55 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:29.459 16:15:55 accel -- common/autotest_common.sh@10 -- # set +x 00:07:29.459 ************************************ 00:07:29.459 START TEST accel_xor 00:07:29.459 ************************************ 00:07:29.459 16:15:55 accel.accel_xor -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w xor -y -x 3 00:07:29.459 16:15:55 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:07:29.459 16:15:55 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:07:29.459 16:15:55 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:29.459 16:15:55 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:29.459 16:15:55 accel.accel_xor -- accel/accel.sh@15 -- 
# accel_perf -t 1 -w xor -y -x 3 00:07:29.459 16:15:55 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:07:29.459 16:15:55 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:07:29.459 16:15:55 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:29.459 16:15:55 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:29.459 16:15:55 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:29.459 16:15:55 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:29.459 16:15:55 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:29.459 16:15:55 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:07:29.459 16:15:55 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:07:29.459 [2024-06-07 16:15:56.007502] Starting SPDK v24.09-pre git sha1 5a57befde / DPDK 24.03.0 initialization... 00:07:29.459 [2024-06-07 16:15:56.007585] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2901198 ] 00:07:29.459 EAL: No free 2048 kB hugepages reported on node 1 00:07:29.459 [2024-06-07 16:15:56.068166] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:29.459 [2024-06-07 16:15:56.132523] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:07:29.459 16:15:56 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:29.459 16:15:56 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:29.459 16:15:56 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:29.459 16:15:56 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:29.459 16:15:56 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:29.459 16:15:56 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:29.459 16:15:56 accel.accel_xor -- 
accel/accel.sh@19 -- # IFS=: 00:07:29.459 16:15:56 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:29.459 16:15:56 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:07:29.459 16:15:56 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:29.459 16:15:56 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:29.459 16:15:56 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:29.459 16:15:56 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:29.459 16:15:56 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:29.459 16:15:56 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:29.459 16:15:56 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:29.459 16:15:56 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:29.459 16:15:56 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:29.459 16:15:56 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:29.459 16:15:56 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:29.459 16:15:56 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:07:29.459 16:15:56 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:29.459 16:15:56 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:07:29.459 16:15:56 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:29.459 16:15:56 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:29.459 16:15:56 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:07:29.459 16:15:56 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:29.459 16:15:56 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:29.459 16:15:56 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:29.459 16:15:56 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:29.459 16:15:56 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:29.459 16:15:56 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:29.459 16:15:56 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 
00:07:29.459 16:15:56 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:29.459 16:15:56 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:29.459 16:15:56 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:29.459 16:15:56 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:29.459 16:15:56 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:07:29.459 16:15:56 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:29.459 16:15:56 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 00:07:29.459 16:15:56 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:29.459 16:15:56 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:29.459 16:15:56 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:29.459 16:15:56 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:29.459 16:15:56 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:29.459 16:15:56 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:29.459 16:15:56 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:29.459 16:15:56 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:29.459 16:15:56 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:29.459 16:15:56 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:29.459 16:15:56 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:07:29.459 16:15:56 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:29.459 16:15:56 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:29.459 16:15:56 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:29.459 16:15:56 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:07:29.459 16:15:56 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:29.459 16:15:56 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:29.459 16:15:56 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:29.459 16:15:56 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:07:29.459 16:15:56 
accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:29.459 16:15:56 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:29.459 16:15:56 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:29.459 16:15:56 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:29.459 16:15:56 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:29.459 16:15:56 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:29.459 16:15:56 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:29.459 16:15:56 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:29.459 16:15:56 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:29.459 16:15:56 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:29.459 16:15:56 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:30.403 16:15:57 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:30.403 16:15:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:30.403 16:15:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:30.403 16:15:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:30.403 16:15:57 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:30.665 16:15:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:30.665 16:15:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:30.665 16:15:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:30.665 16:15:57 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:30.665 16:15:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:30.665 16:15:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:30.665 16:15:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:30.665 16:15:57 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:30.665 16:15:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:30.665 16:15:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:30.665 16:15:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 
00:07:30.665 16:15:57 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:30.665 16:15:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:30.665 16:15:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:30.665 16:15:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:30.665 16:15:57 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:30.665 16:15:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:30.665 16:15:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:30.665 16:15:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:30.665 16:15:57 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:30.665 16:15:57 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:07:30.665 16:15:57 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:30.665 00:07:30.665 real 0m1.282s 00:07:30.665 user 0m1.205s 00:07:30.665 sys 0m0.089s 00:07:30.665 16:15:57 accel.accel_xor -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:30.665 16:15:57 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:07:30.665 ************************************ 00:07:30.665 END TEST accel_xor 00:07:30.665 ************************************ 00:07:30.665 16:15:57 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:07:30.665 16:15:57 accel -- common/autotest_common.sh@1100 -- # '[' 6 -le 1 ']' 00:07:30.665 16:15:57 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:30.665 16:15:57 accel -- common/autotest_common.sh@10 -- # set +x 00:07:30.665 ************************************ 00:07:30.665 START TEST accel_dif_verify 00:07:30.665 ************************************ 00:07:30.665 16:15:57 accel.accel_dif_verify -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w dif_verify 00:07:30.665 16:15:57 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:07:30.665 16:15:57 accel.accel_dif_verify -- 
accel/accel.sh@17 -- # local accel_module 00:07:30.665 16:15:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:30.665 16:15:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:30.665 16:15:57 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:07:30.665 16:15:57 accel.accel_dif_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:07:30.665 16:15:57 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:07:30.665 16:15:57 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:30.665 16:15:57 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:30.665 16:15:57 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:30.665 16:15:57 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:30.665 16:15:57 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:30.665 16:15:57 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:07:30.665 16:15:57 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:07:30.665 [2024-06-07 16:15:57.341490] Starting SPDK v24.09-pre git sha1 5a57befde / DPDK 24.03.0 initialization... 
00:07:30.665 [2024-06-07 16:15:57.341552] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2901393 ] 00:07:30.665 EAL: No free 2048 kB hugepages reported on node 1 00:07:30.665 [2024-06-07 16:15:57.402679] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:30.665 [2024-06-07 16:15:57.469029] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:07:30.665 16:15:57 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:30.665 16:15:57 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:30.666 16:15:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:30.666 16:15:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:30.666 16:15:57 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:30.666 16:15:57 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:30.666 16:15:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:30.666 16:15:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:30.666 16:15:57 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:07:30.666 16:15:57 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:30.666 16:15:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:30.666 16:15:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:30.666 16:15:57 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:30.666 16:15:57 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:30.666 16:15:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:30.666 16:15:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:30.666 16:15:57 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:30.666 16:15:57 accel.accel_dif_verify -- accel/accel.sh@21 
-- # case "$var" in 00:07:30.666 16:15:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:30.666 16:15:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:30.666 16:15:57 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:07:30.666 16:15:57 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:30.666 16:15:57 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:07:30.666 16:15:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:30.666 16:15:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:30.666 16:15:57 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:30.666 16:15:57 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:30.666 16:15:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:30.666 16:15:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:30.666 16:15:57 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:30.666 16:15:57 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:30.666 16:15:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:30.666 16:15:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:30.666 16:15:57 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:07:30.666 16:15:57 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:30.666 16:15:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:30.666 16:15:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:30.666 16:15:57 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:07:30.666 16:15:57 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:30.666 16:15:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:30.666 16:15:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:30.666 16:15:57 
accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:30.666 16:15:57 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:30.666 16:15:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:30.666 16:15:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:30.666 16:15:57 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:07:30.666 16:15:57 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:30.666 16:15:57 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:07:30.666 16:15:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:30.666 16:15:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:30.666 16:15:57 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:07:30.666 16:15:57 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:30.666 16:15:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:30.666 16:15:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:30.666 16:15:57 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:07:30.666 16:15:57 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:30.666 16:15:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:30.666 16:15:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:30.666 16:15:57 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:07:30.666 16:15:57 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:30.666 16:15:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:30.666 16:15:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:30.666 16:15:57 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:07:30.666 16:15:57 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:30.666 16:15:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:30.666 16:15:57 
accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:30.666 16:15:57 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:07:30.666 16:15:57 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:30.666 16:15:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:30.666 16:15:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:30.666 16:15:57 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:30.666 16:15:57 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:30.666 16:15:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:30.666 16:15:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:30.666 16:15:57 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:30.666 16:15:57 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:30.666 16:15:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:30.666 16:15:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:32.048 16:15:58 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:32.048 16:15:58 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:32.048 16:15:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:32.048 16:15:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:32.049 16:15:58 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:32.049 16:15:58 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:32.049 16:15:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:32.049 16:15:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:32.049 16:15:58 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:32.049 16:15:58 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:32.049 16:15:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:32.049 16:15:58 accel.accel_dif_verify -- 
accel/accel.sh@19 -- # read -r var val 00:07:32.049 16:15:58 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:32.049 16:15:58 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:32.049 16:15:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:32.049 16:15:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:32.049 16:15:58 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:32.049 16:15:58 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:32.049 16:15:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:32.049 16:15:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:32.049 16:15:58 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:32.049 16:15:58 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:32.049 16:15:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:32.049 16:15:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:32.049 16:15:58 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:32.049 16:15:58 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:07:32.049 16:15:58 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:32.049 00:07:32.049 real 0m1.285s 00:07:32.049 user 0m1.200s 00:07:32.049 sys 0m0.097s 00:07:32.049 16:15:58 accel.accel_dif_verify -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:32.049 16:15:58 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:07:32.049 ************************************ 00:07:32.049 END TEST accel_dif_verify 00:07:32.049 ************************************ 00:07:32.049 16:15:58 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:07:32.049 16:15:58 accel -- common/autotest_common.sh@1100 -- # '[' 6 -le 1 ']' 00:07:32.049 16:15:58 accel -- common/autotest_common.sh@1106 -- # 
xtrace_disable 00:07:32.049 16:15:58 accel -- common/autotest_common.sh@10 -- # set +x 00:07:32.049 ************************************ 00:07:32.049 START TEST accel_dif_generate 00:07:32.049 ************************************ 00:07:32.049 16:15:58 accel.accel_dif_generate -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w dif_generate 00:07:32.049 16:15:58 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:07:32.049 16:15:58 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:07:32.049 16:15:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:32.049 16:15:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:32.049 16:15:58 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:07:32.049 16:15:58 accel.accel_dif_generate -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:07:32.049 16:15:58 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:07:32.049 16:15:58 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:32.049 16:15:58 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:32.049 16:15:58 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:32.049 16:15:58 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:32.049 16:15:58 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:32.049 16:15:58 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:07:32.049 16:15:58 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:07:32.049 [2024-06-07 16:15:58.697887] Starting SPDK v24.09-pre git sha1 5a57befde / DPDK 24.03.0 initialization... 
00:07:32.049 [2024-06-07 16:15:58.697978] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2901731 ] 00:07:32.049 EAL: No free 2048 kB hugepages reported on node 1 00:07:32.049 [2024-06-07 16:15:58.759654] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:32.049 [2024-06-07 16:15:58.823671] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:07:32.049 16:15:58 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:32.049 16:15:58 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:32.049 16:15:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:32.049 16:15:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:32.049 16:15:58 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:32.049 16:15:58 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:32.049 16:15:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:32.049 16:15:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:32.049 16:15:58 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:07:32.049 16:15:58 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:32.049 16:15:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:32.049 16:15:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:32.049 16:15:58 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:32.049 16:15:58 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:32.049 16:15:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:32.049 16:15:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:32.049 16:15:58 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:32.049 16:15:58 
accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:32.049 16:15:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:32.049 16:15:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:32.049 16:15:58 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:07:32.049 16:15:58 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:32.049 16:15:58 accel.accel_dif_generate -- accel/accel.sh@23 -- # accel_opc=dif_generate 00:07:32.049 16:15:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:32.049 16:15:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:32.049 16:15:58 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:32.049 16:15:58 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:32.049 16:15:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:32.049 16:15:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:32.049 16:15:58 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:32.049 16:15:58 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:32.049 16:15:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:32.049 16:15:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:32.049 16:15:58 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:07:32.049 16:15:58 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:32.049 16:15:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:32.049 16:15:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:32.049 16:15:58 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:07:32.049 16:15:58 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:32.049 16:15:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:32.049 16:15:58 
accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:32.049 16:15:58 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:32.049 16:15:58 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:32.049 16:15:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:32.049 16:15:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:32.049 16:15:58 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:07:32.049 16:15:58 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:32.049 16:15:58 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:07:32.049 16:15:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:32.049 16:15:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:32.049 16:15:58 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:07:32.049 16:15:58 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:32.049 16:15:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:32.049 16:15:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:32.049 16:15:58 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:07:32.049 16:15:58 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:32.049 16:15:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:32.049 16:15:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:32.049 16:15:58 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:07:32.049 16:15:58 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:32.049 16:15:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:32.049 16:15:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:32.049 16:15:58 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='1 seconds' 00:07:32.049 16:15:58 accel.accel_dif_generate -- 
accel/accel.sh@21 -- # case "$var" in 00:07:32.049 16:15:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:32.049 16:15:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:32.049 16:15:58 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:07:32.049 16:15:58 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:32.050 16:15:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:32.050 16:15:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:32.050 16:15:58 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:32.050 16:15:58 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:32.050 16:15:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:32.050 16:15:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:32.050 16:15:58 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:32.050 16:15:58 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:32.050 16:15:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:32.050 16:15:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:33.432 16:15:59 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:33.432 16:15:59 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:33.432 16:15:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:33.432 16:15:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:33.432 16:15:59 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:33.432 16:15:59 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:33.432 16:15:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:33.432 16:15:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:33.432 16:15:59 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:33.432 16:15:59 
accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:33.432 16:15:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:33.432 16:15:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:33.432 16:15:59 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:33.432 16:15:59 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:33.432 16:15:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:33.432 16:15:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:33.432 16:15:59 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:33.432 16:15:59 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:33.432 16:15:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:33.432 16:15:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:33.432 16:15:59 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:33.432 16:15:59 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:33.432 16:15:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:33.432 16:15:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:33.432 16:15:59 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:33.433 16:15:59 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:07:33.433 16:15:59 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:33.433 00:07:33.433 real 0m1.284s 00:07:33.433 user 0m1.199s 00:07:33.433 sys 0m0.097s 00:07:33.433 16:15:59 accel.accel_dif_generate -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:33.433 16:15:59 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:07:33.433 ************************************ 00:07:33.433 END TEST accel_dif_generate 00:07:33.433 ************************************ 00:07:33.433 16:15:59 accel -- 
accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:07:33.433 16:15:59 accel -- common/autotest_common.sh@1100 -- # '[' 6 -le 1 ']' 00:07:33.433 16:15:59 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:33.433 16:15:59 accel -- common/autotest_common.sh@10 -- # set +x 00:07:33.433 ************************************ 00:07:33.433 START TEST accel_dif_generate_copy 00:07:33.433 ************************************ 00:07:33.433 16:16:00 accel.accel_dif_generate_copy -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w dif_generate_copy 00:07:33.433 16:16:00 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:07:33.433 16:16:00 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:07:33.433 16:16:00 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:33.433 16:16:00 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:33.433 16:16:00 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:07:33.433 16:16:00 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:07:33.433 16:16:00 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:07:33.433 16:16:00 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:33.433 16:16:00 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:33.433 16:16:00 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:33.433 16:16:00 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:33.433 16:16:00 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:33.433 16:16:00 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:07:33.433 16:16:00 accel.accel_dif_generate_copy -- 
accel/accel.sh@41 -- # jq -r . 00:07:33.433 [2024-06-07 16:16:00.055179] Starting SPDK v24.09-pre git sha1 5a57befde / DPDK 24.03.0 initialization... 00:07:33.433 [2024-06-07 16:16:00.055244] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2902083 ] 00:07:33.433 EAL: No free 2048 kB hugepages reported on node 1 00:07:33.433 [2024-06-07 16:16:00.116712] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:33.433 [2024-06-07 16:16:00.184168] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:07:33.433 16:16:00 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:33.433 16:16:00 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:33.433 16:16:00 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:33.433 16:16:00 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:33.433 16:16:00 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:33.433 16:16:00 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:33.433 16:16:00 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:33.433 16:16:00 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:33.433 16:16:00 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:07:33.433 16:16:00 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:33.433 16:16:00 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:33.433 16:16:00 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:33.433 16:16:00 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:33.433 16:16:00 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:33.433 16:16:00 accel.accel_dif_generate_copy -- 
accel/accel.sh@19 -- # IFS=: 00:07:33.433 16:16:00 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:33.433 16:16:00 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:33.433 16:16:00 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:33.433 16:16:00 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:33.433 16:16:00 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:33.433 16:16:00 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:07:33.433 16:16:00 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:33.433 16:16:00 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:07:33.433 16:16:00 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:33.433 16:16:00 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:33.433 16:16:00 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:33.433 16:16:00 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:33.433 16:16:00 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:33.433 16:16:00 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:33.433 16:16:00 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:33.433 16:16:00 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:33.433 16:16:00 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:33.433 16:16:00 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:33.433 16:16:00 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:33.433 16:16:00 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:33.433 16:16:00 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:33.433 16:16:00 accel.accel_dif_generate_copy 
-- accel/accel.sh@19 -- # read -r var val 00:07:33.433 16:16:00 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:07:33.433 16:16:00 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:33.433 16:16:00 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:07:33.433 16:16:00 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:33.433 16:16:00 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:33.433 16:16:00 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:07:33.433 16:16:00 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:33.433 16:16:00 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:33.433 16:16:00 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:33.433 16:16:00 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:07:33.433 16:16:00 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:33.433 16:16:00 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:33.433 16:16:00 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:33.433 16:16:00 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:07:33.433 16:16:00 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:33.433 16:16:00 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:33.433 16:16:00 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:33.433 16:16:00 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:07:33.433 16:16:00 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:33.433 16:16:00 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:33.433 16:16:00 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:33.433 16:16:00 accel.accel_dif_generate_copy -- 
accel/accel.sh@20 -- # val=No 00:07:33.433 16:16:00 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:33.433 16:16:00 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:33.433 16:16:00 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:33.433 16:16:00 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:33.433 16:16:00 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:33.433 16:16:00 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:33.433 16:16:00 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:33.433 16:16:00 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:33.433 16:16:00 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:33.433 16:16:00 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:33.433 16:16:00 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:34.817 16:16:01 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:34.817 16:16:01 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:34.817 16:16:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:34.817 16:16:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:34.817 16:16:01 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:34.817 16:16:01 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:34.817 16:16:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:34.817 16:16:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:34.817 16:16:01 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:34.817 16:16:01 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:34.817 16:16:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:34.817 16:16:01 
accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:34.817 16:16:01 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:34.817 16:16:01 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:34.817 16:16:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:34.817 16:16:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:34.817 16:16:01 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:34.817 16:16:01 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:34.817 16:16:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:34.817 16:16:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:34.817 16:16:01 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:34.817 16:16:01 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:34.817 16:16:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:34.817 16:16:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:34.817 16:16:01 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:34.817 16:16:01 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:07:34.817 16:16:01 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:34.817 00:07:34.817 real 0m1.286s 00:07:34.817 user 0m1.198s 00:07:34.817 sys 0m0.099s 00:07:34.817 16:16:01 accel.accel_dif_generate_copy -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:34.817 16:16:01 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:07:34.817 ************************************ 00:07:34.817 END TEST accel_dif_generate_copy 00:07:34.817 ************************************ 00:07:34.817 16:16:01 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:07:34.817 16:16:01 accel -- 
accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:34.817 16:16:01 accel -- common/autotest_common.sh@1100 -- # '[' 8 -le 1 ']' 00:07:34.817 16:16:01 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:34.817 16:16:01 accel -- common/autotest_common.sh@10 -- # set +x 00:07:34.817 ************************************ 00:07:34.817 START TEST accel_comp 00:07:34.817 ************************************ 00:07:34.817 16:16:01 accel.accel_comp -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:34.817 16:16:01 accel.accel_comp -- accel/accel.sh@16 -- # local accel_opc 00:07:34.817 16:16:01 accel.accel_comp -- accel/accel.sh@17 -- # local accel_module 00:07:34.817 16:16:01 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:34.817 16:16:01 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:34.817 16:16:01 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:34.817 16:16:01 accel.accel_comp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:34.817 16:16:01 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:07:34.817 16:16:01 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:34.817 16:16:01 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:34.817 16:16:01 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:34.817 16:16:01 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:34.817 16:16:01 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:34.817 16:16:01 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:07:34.817 16:16:01 
accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:07:34.817 [2024-06-07 16:16:01.410049] Starting SPDK v24.09-pre git sha1 5a57befde / DPDK 24.03.0 initialization... 00:07:34.817 [2024-06-07 16:16:01.410112] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2902436 ] 00:07:34.817 EAL: No free 2048 kB hugepages reported on node 1 00:07:34.817 [2024-06-07 16:16:01.469825] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:34.817 [2024-06-07 16:16:01.533453] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:07:34.817 16:16:01 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:34.817 16:16:01 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:34.817 16:16:01 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:34.817 16:16:01 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:34.817 16:16:01 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:34.817 16:16:01 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:34.818 16:16:01 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:34.818 16:16:01 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:34.818 16:16:01 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:34.818 16:16:01 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:34.818 16:16:01 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:34.818 16:16:01 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:34.818 16:16:01 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:07:34.818 16:16:01 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:34.818 16:16:01 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:34.818 16:16:01 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:34.818 16:16:01 accel.accel_comp -- accel/accel.sh@20 -- # 
val= 00:07:34.818 16:16:01 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:34.818 16:16:01 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:34.818 16:16:01 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:34.818 16:16:01 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:34.818 16:16:01 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:34.818 16:16:01 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:34.818 16:16:01 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:34.818 16:16:01 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:07:34.818 16:16:01 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:34.818 16:16:01 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:07:34.818 16:16:01 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:34.818 16:16:01 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:34.818 16:16:01 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:34.818 16:16:01 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:34.818 16:16:01 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:34.818 16:16:01 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:34.818 16:16:01 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:34.818 16:16:01 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:34.818 16:16:01 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:34.818 16:16:01 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:34.818 16:16:01 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:07:34.818 16:16:01 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:34.818 16:16:01 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:07:34.818 16:16:01 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:34.818 16:16:01 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:34.818 16:16:01 accel.accel_comp -- 
accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:34.818 16:16:01 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:34.818 16:16:01 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:34.818 16:16:01 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:34.818 16:16:01 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:07:34.818 16:16:01 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:34.818 16:16:01 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:34.818 16:16:01 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:34.818 16:16:01 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:07:34.818 16:16:01 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:34.818 16:16:01 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:34.818 16:16:01 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:34.818 16:16:01 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:07:34.818 16:16:01 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:34.818 16:16:01 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:34.818 16:16:01 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:34.818 16:16:01 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:07:34.818 16:16:01 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:34.818 16:16:01 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:34.818 16:16:01 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:34.818 16:16:01 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:07:34.818 16:16:01 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:34.818 16:16:01 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:34.818 16:16:01 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:34.818 16:16:01 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:34.818 16:16:01 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" 
in 00:07:34.818 16:16:01 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:34.818 16:16:01 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:34.818 16:16:01 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:34.818 16:16:01 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:34.818 16:16:01 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:34.818 16:16:01 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:36.201 16:16:02 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:36.201 16:16:02 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:36.201 16:16:02 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:36.201 16:16:02 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:36.201 16:16:02 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:36.201 16:16:02 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:36.201 16:16:02 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:36.202 16:16:02 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:36.202 16:16:02 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:36.202 16:16:02 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:36.202 16:16:02 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:36.202 16:16:02 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:36.202 16:16:02 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:36.202 16:16:02 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:36.202 16:16:02 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:36.202 16:16:02 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:36.202 16:16:02 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:36.202 16:16:02 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:36.202 16:16:02 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:36.202 16:16:02 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:36.202 16:16:02 
accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:36.202 16:16:02 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:36.202 16:16:02 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:36.202 16:16:02 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:36.202 16:16:02 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:36.202 16:16:02 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:07:36.202 16:16:02 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:36.202 00:07:36.202 real 0m1.282s 00:07:36.202 user 0m1.198s 00:07:36.202 sys 0m0.098s 00:07:36.202 16:16:02 accel.accel_comp -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:36.202 16:16:02 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:07:36.202 ************************************ 00:07:36.202 END TEST accel_comp 00:07:36.202 ************************************ 00:07:36.202 16:16:02 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:36.202 16:16:02 accel -- common/autotest_common.sh@1100 -- # '[' 9 -le 1 ']' 00:07:36.202 16:16:02 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:36.202 16:16:02 accel -- common/autotest_common.sh@10 -- # set +x 00:07:36.202 ************************************ 00:07:36.202 START TEST accel_decomp 00:07:36.202 ************************************ 00:07:36.202 16:16:02 accel.accel_decomp -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:36.202 16:16:02 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:07:36.202 16:16:02 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:07:36.202 16:16:02 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:36.202 16:16:02 accel.accel_decomp -- accel/accel.sh@19 -- # read -r 
var val 00:07:36.202 16:16:02 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:36.202 16:16:02 accel.accel_decomp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:36.202 16:16:02 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:07:36.202 16:16:02 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:36.202 16:16:02 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:36.202 16:16:02 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:36.202 16:16:02 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:36.202 16:16:02 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:36.202 16:16:02 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:07:36.202 16:16:02 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:07:36.202 [2024-06-07 16:16:02.766243] Starting SPDK v24.09-pre git sha1 5a57befde / DPDK 24.03.0 initialization... 
00:07:36.202 [2024-06-07 16:16:02.766333] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2902639 ] 00:07:36.202 EAL: No free 2048 kB hugepages reported on node 1 00:07:36.202 [2024-06-07 16:16:02.828561] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:36.202 [2024-06-07 16:16:02.897855] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:07:36.202 16:16:02 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:36.202 16:16:02 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:36.202 16:16:02 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:36.202 16:16:02 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:36.202 16:16:02 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:36.202 16:16:02 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:36.202 16:16:02 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:36.202 16:16:02 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:36.202 16:16:02 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:36.202 16:16:02 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:36.202 16:16:02 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:36.202 16:16:02 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:36.202 16:16:02 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:07:36.202 16:16:02 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:36.202 16:16:02 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:36.202 16:16:02 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:36.202 16:16:02 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:36.202 16:16:02 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:36.202 16:16:02 accel.accel_decomp -- 
accel/accel.sh@19 -- # IFS=: 00:07:36.202 16:16:02 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:36.202 16:16:02 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:36.202 16:16:02 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:36.202 16:16:02 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:36.202 16:16:02 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:36.202 16:16:02 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:07:36.202 16:16:02 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:36.202 16:16:02 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:36.202 16:16:02 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:36.202 16:16:02 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:36.202 16:16:02 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:36.202 16:16:02 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:36.202 16:16:02 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:36.202 16:16:02 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:36.202 16:16:02 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:36.202 16:16:02 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:36.202 16:16:02 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:36.202 16:16:02 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:36.202 16:16:02 accel.accel_decomp -- accel/accel.sh@20 -- # val=software 00:07:36.202 16:16:02 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:36.202 16:16:02 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:07:36.202 16:16:02 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:36.202 16:16:02 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:36.202 16:16:02 accel.accel_decomp -- accel/accel.sh@20 -- # 
val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:36.202 16:16:02 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:36.202 16:16:02 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:36.202 16:16:02 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:36.202 16:16:02 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:07:36.202 16:16:02 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:36.202 16:16:02 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:36.202 16:16:02 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:36.202 16:16:02 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:07:36.202 16:16:02 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:36.202 16:16:02 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:36.202 16:16:02 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:36.202 16:16:02 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:07:36.202 16:16:02 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:36.202 16:16:02 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:36.202 16:16:02 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:36.202 16:16:02 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:07:36.202 16:16:02 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:36.202 16:16:02 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:36.202 16:16:02 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:36.202 16:16:02 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 00:07:36.202 16:16:02 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:36.202 16:16:02 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:36.202 16:16:02 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:36.202 16:16:02 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:36.202 16:16:02 accel.accel_decomp -- 
accel/accel.sh@21 -- # case "$var" in 00:07:36.202 16:16:02 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:36.202 16:16:02 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:36.202 16:16:02 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:36.202 16:16:02 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:36.202 16:16:02 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:36.202 16:16:02 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:37.585 16:16:04 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:37.585 16:16:04 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:37.585 16:16:04 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:37.585 16:16:04 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:37.585 16:16:04 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:37.585 16:16:04 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:37.585 16:16:04 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:37.585 16:16:04 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:37.585 16:16:04 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:37.585 16:16:04 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:37.585 16:16:04 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:37.585 16:16:04 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:37.585 16:16:04 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:37.585 16:16:04 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:37.585 16:16:04 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:37.585 16:16:04 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:37.585 16:16:04 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:37.585 16:16:04 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:37.585 16:16:04 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:37.585 16:16:04 
accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:37.585 16:16:04 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:37.585 16:16:04 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:37.585 16:16:04 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:37.585 16:16:04 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:37.585 16:16:04 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:37.585 16:16:04 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:37.585 16:16:04 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:37.585 00:07:37.585 real 0m1.292s 00:07:37.585 user 0m1.206s 00:07:37.585 sys 0m0.099s 00:07:37.585 16:16:04 accel.accel_decomp -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:37.585 16:16:04 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:07:37.585 ************************************ 00:07:37.585 END TEST accel_decomp 00:07:37.585 ************************************ 00:07:37.585 16:16:04 accel -- accel/accel.sh@118 -- # run_test accel_decomp_full accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:37.585 16:16:04 accel -- common/autotest_common.sh@1100 -- # '[' 11 -le 1 ']' 00:07:37.585 16:16:04 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:37.585 16:16:04 accel -- common/autotest_common.sh@10 -- # set +x 00:07:37.585 ************************************ 00:07:37.585 START TEST accel_decomp_full 00:07:37.585 ************************************ 00:07:37.585 16:16:04 accel.accel_decomp_full -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:37.585 16:16:04 accel.accel_decomp_full -- accel/accel.sh@16 -- # local accel_opc 00:07:37.585 16:16:04 accel.accel_decomp_full -- accel/accel.sh@17 -- # local accel_module 
00:07:37.585 16:16:04 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:37.585 16:16:04 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:37.585 16:16:04 accel.accel_decomp_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:37.585 16:16:04 accel.accel_decomp_full -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:37.585 16:16:04 accel.accel_decomp_full -- accel/accel.sh@12 -- # build_accel_config 00:07:37.585 16:16:04 accel.accel_decomp_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:37.585 16:16:04 accel.accel_decomp_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:37.585 16:16:04 accel.accel_decomp_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:37.585 16:16:04 accel.accel_decomp_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:37.585 16:16:04 accel.accel_decomp_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:37.585 16:16:04 accel.accel_decomp_full -- accel/accel.sh@40 -- # local IFS=, 00:07:37.585 16:16:04 accel.accel_decomp_full -- accel/accel.sh@41 -- # jq -r . 00:07:37.585 [2024-06-07 16:16:04.130806] Starting SPDK v24.09-pre git sha1 5a57befde / DPDK 24.03.0 initialization... 
00:07:37.585 [2024-06-07 16:16:04.130871] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2902832 ] 00:07:37.585 EAL: No free 2048 kB hugepages reported on node 1 00:07:37.585 [2024-06-07 16:16:04.194178] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:37.585 [2024-06-07 16:16:04.262743] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:07:37.585 16:16:04 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:37.585 16:16:04 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:37.585 16:16:04 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:37.585 16:16:04 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:37.585 16:16:04 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:37.585 16:16:04 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:37.585 16:16:04 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:37.585 16:16:04 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:37.585 16:16:04 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:37.585 16:16:04 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:37.585 16:16:04 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:37.585 16:16:04 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:37.586 16:16:04 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=0x1 00:07:37.586 16:16:04 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:37.586 16:16:04 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:37.586 16:16:04 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:37.586 16:16:04 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:37.586 16:16:04 accel.accel_decomp_full -- 
accel/accel.sh@21 -- # case "$var" in
[... repeated xtrace loop iterations (accel/accel.sh@19 IFS=: / read -r var val, accel/accel.sh@21 case "$var" in) condensed; configuration values read for this test: ...]
00:07:37.586 16:16:04 accel.accel_decomp_full -- accel/accel.sh@23 -- # accel_opc=decompress
00:07:37.586 16:16:04 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='111250 bytes'
00:07:37.586 16:16:04 accel.accel_decomp_full -- accel/accel.sh@22 -- # accel_module=software
00:07:37.586 16:16:04 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib
00:07:37.586 16:16:04 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32
00:07:37.586 16:16:04 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32
00:07:37.586 16:16:04 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=1
00:07:37.586 16:16:04 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='1 seconds'
00:07:37.586 16:16:04 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=Yes
00:07:38.971 16:16:05 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n software ]]
00:07:38.971 16:16:05 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n decompress ]]
00:07:38.971 16:16:05 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:07:38.971 real 0m1.300s
00:07:38.971 user 0m1.205s
00:07:38.971 sys 0m0.107s
00:07:38.971 16:16:05 accel.accel_decomp_full -- common/autotest_common.sh@1125 -- # xtrace_disable
00:07:38.971 16:16:05 accel.accel_decomp_full -- common/autotest_common.sh@10 -- # set +x
00:07:38.971 ************************************
00:07:38.971 END TEST accel_decomp_full
00:07:38.971 ************************************
00:07:38.971 16:16:05 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf
00:07:38.971 16:16:05 accel -- common/autotest_common.sh@1100 -- # '[' 11 -le 1 ']'
00:07:38.971 16:16:05 accel -- common/autotest_common.sh@1106 -- # xtrace_disable
00:07:38.971 16:16:05 accel -- common/autotest_common.sh@10 -- # set +x
00:07:38.971 ************************************
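The `run_test` calls and `START TEST`/`END TEST` banners throughout this log come from SPDK's autotest harness (`autotest_common.sh`). As a rough illustration only — the function name and banner layout below are simplified assumptions, not the real `run_test` implementation — the pattern is: print an opening banner, `time` the test command (which produces the `real`/`user`/`sys` lines seen above), then print a closing banner and propagate the exit status:

```shell
#!/usr/bin/env bash
# Hypothetical simplification of the run_test pattern visible in this log.
run_test_sketch() {
    local name=$1
    shift
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    time "$@"            # emits real/user/sys, like "real 0m1.300s" above
    local rc=$?          # capture the test command's exit status
    echo "************************************"
    echo "END TEST $name"
    echo "************************************"
    return $rc
}

run_test_sketch demo_test true
```

The key design point mirrored here is that the harness times each test and preserves its exit status, so a failing test still gets its END banner but fails the overall run.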
00:07:38.971 START TEST accel_decomp_mcore
00:07:38.971 ************************************
00:07:38.971 16:16:05 accel.accel_decomp_mcore -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf
00:07:38.971 16:16:05 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc
00:07:38.971 16:16:05 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module
00:07:38.971 16:16:05 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf
00:07:38.971 16:16:05 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config
[2024-06-07 16:16:05.506193] Starting SPDK v24.09-pre git sha1 5a57befde / DPDK 24.03.0 initialization...
[2024-06-07 16:16:05.506279] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2903177 ]
EAL: No free 2048 kB hugepages reported on node 1
[2024-06-07 16:16:05.568780] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4
[2024-06-07 16:16:05.640481] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1
[2024-06-07 16:16:05.640759] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 2
[2024-06-07 16:16:05.640919] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0
[2024-06-07 16:16:05.640919] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 3
[... repeated xtrace loop iterations condensed; configuration values read: 0xf, decompress (accel_opc=decompress), '4096 bytes', software (accel_module=software), /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib, 32, 32, 1, '1 seconds', Yes ...]
00:07:40.356 16:16:06 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]]
00:07:40.356 16:16:06 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]]
00:07:40.356 16:16:06 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:07:40.356 real 0m1.302s
00:07:40.356 user 0m4.441s
00:07:40.356 sys 0m0.108s
00:07:40.356 ************************************
00:07:40.356 END TEST accel_decomp_mcore
00:07:40.356 ************************************
00:07:40.356 16:16:06 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf
00:07:40.356 ************************************
00:07:40.356 START TEST accel_decomp_full_mcore
00:07:40.356 ************************************
00:07:40.356 16:16:06 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf
00:07:40.356 16:16:06 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf
00:07:40.356 16:16:06 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config
[2024-06-07 16:16:06.883450] Starting SPDK v24.09-pre git sha1 5a57befde / DPDK 24.03.0 initialization...
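The bulk of this trace is one pattern repeated: `IFS=:`, `read -r var val`, `case "$var" in` — accel.sh consuming its test configuration one `key:value` line at a time and latching the fields (`accel_opc`, `accel_module`) that the final `[[ -n software ]]` / `[[ -n decompress ]]` checks inspect. A minimal, self-contained sketch of that loop (the key names `opc` and `module` are hypothetical stand-ins; the real accel.sh keys are not fully visible in this excerpt):

```shell
#!/usr/bin/env bash
# Sketch of the accel.sh loop visible in the xtrace: split each config line
# on ':' into var/val, dispatch on var, and record the fields the test
# later verifies. Key names here are illustrative assumptions.
parse_accel_cfg() {
    local accel_opc='' accel_module=''
    while IFS=: read -r var val; do
        case "$var" in
            opc) accel_opc=$val ;;        # e.g. decompress
            module) accel_module=$val ;;  # e.g. software
            *) : ;;                       # other keys ignored in this sketch
        esac
    done
    # same shape as the accel.sh@27 checks: both fields must be non-empty
    [[ -n $accel_module && -n $accel_opc ]] && echo "$accel_opc $accel_module"
}

printf 'opc:decompress\nmodule:software\n' | parse_accel_cfg  # prints "decompress software"
```

Because `IFS=:` splits only the first `:` when `read` is given two variable names, values containing colons survive intact in `val`, which matters for config values like paths.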
00:07:40.356 [2024-06-07 16:16:06.883519] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2903527 ]
EAL: No free 2048 kB hugepages reported on node 1
[2024-06-07 16:16:06.945701] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4
[2024-06-07 16:16:07.019482] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1
[2024-06-07 16:16:07.019602] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 2
[2024-06-07 16:16:07.019759] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0
[2024-06-07 16:16:07.019759] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 3
[... repeated xtrace loop iterations condensed; configuration values read: 0xf, decompress (accel_opc=decompress), '111250 bytes', software (accel_module=software), /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib, 32, 32, 1, '1 seconds', Yes ...]
00:07:41.740 16:16:08 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]]
00:07:41.740 16:16:08 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]]
00:07:41.740 16:16:08 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:07:41.740 real 0m1.317s
00:07:41.740 user 0m4.496s
00:07:41.740 sys 0m0.106s
00:07:41.740 ************************************
00:07:41.740 END TEST accel_decomp_full_mcore
00:07:41.740 ************************************
00:07:41.740 16:16:08 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2
00:07:41.740 16:16:08 accel -- common/autotest_common.sh@1100 -- # '[' 11 -le 1 ']'
00:07:41.740 16:16:08 accel -- common/autotest_common.sh@1106 -- # xtrace_disable
00:07:41.740 16:16:08 accel -- common/autotest_common.sh@10 -- # set +x
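The `-m 0xf` argument passed to `accel_perf` is a hexadecimal core mask, which is why these runs report `Total cores available: 4`, start reactors on cores 0 through 3, and show `user` time several times the `real` time. SPDK parses the mask in C via the DPDK EAL; the following is only a shell illustration of counting the set bits in such a mask:

```shell
#!/usr/bin/env bash
# Count the set bits in a core mask like the -m 0xf passed to accel_perf.
# SPDK/DPDK do this in C; this sketch just shows why 0xf means 4 cores (0-3).
core_count() {
    local bits=$(( $1 ))   # shell arithmetic accepts 0x-prefixed hex
    local count=0
    while (( bits )); do
        (( count += bits & 1 ))   # add the lowest bit
        (( bits >>= 1 ))          # shift to the next core position
    done
    echo "$count"
}

core_count 0xf   # prints 4
core_count 0x1   # prints 1
```

This also explains the single-core `accel_decomp_mthread` run that follows: it is launched with `-c 0x1`, one set bit, hence `Total cores available: 1` and a single reactor on core 0.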
00:07:41.740 ************************************
00:07:41.740 START TEST accel_decomp_mthread
00:07:41.740 ************************************
16:16:08 accel.accel_decomp_mthread -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2
16:16:08 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc
16:16:08 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module
16:16:08 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2
16:16:08 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config
[2024-06-07 16:16:08.275790] Starting SPDK v24.09-pre git sha1 5a57befde / DPDK 24.03.0 initialization...
00:07:41.741 [2024-06-07 16:16:08.275877] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2903879 ]
EAL: No free 2048 kB hugepages reported on node 1
[2024-06-07 16:16:08.342982] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
[2024-06-07 16:16:08.407741] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0
[... repeated xtrace loop iterations condensed; configuration values read so far: 0x1, decompress (accel_opc=decompress), '4096 bytes', software ...]
16:16:08 accel.accel_decomp_mthread --
accel/accel.sh@22 -- # accel_module=software 00:07:41.741 16:16:08 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:41.741 16:16:08 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:41.741 16:16:08 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:41.741 16:16:08 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:41.741 16:16:08 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:41.741 16:16:08 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:41.741 16:16:08 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:07:41.741 16:16:08 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:41.741 16:16:08 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:41.741 16:16:08 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:41.741 16:16:08 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:07:41.741 16:16:08 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:41.741 16:16:08 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:41.741 16:16:08 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:41.741 16:16:08 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:07:41.741 16:16:08 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:41.741 16:16:08 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:41.741 16:16:08 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:41.741 16:16:08 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:07:41.741 16:16:08 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:41.741 16:16:08 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:41.741 16:16:08 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r 
var val 00:07:41.741 16:16:08 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:07:41.741 16:16:08 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:41.741 16:16:08 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:41.741 16:16:08 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:41.741 16:16:08 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:41.741 16:16:08 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:41.741 16:16:08 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:41.741 16:16:08 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:41.741 16:16:08 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:41.741 16:16:08 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:41.741 16:16:08 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:41.741 16:16:08 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:43.127 16:16:09 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:43.127 16:16:09 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:43.127 16:16:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:43.127 16:16:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:43.127 16:16:09 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:43.127 16:16:09 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:43.127 16:16:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:43.127 16:16:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:43.127 16:16:09 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:43.127 16:16:09 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:43.127 16:16:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:43.127 16:16:09 
accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:43.127 16:16:09 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:43.127 16:16:09 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:43.127 16:16:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:43.127 16:16:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:43.127 16:16:09 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:43.127 16:16:09 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:43.127 16:16:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:43.127 16:16:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:43.127 16:16:09 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:43.127 16:16:09 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:43.127 16:16:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:43.127 16:16:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:43.127 16:16:09 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:43.127 16:16:09 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:43.127 16:16:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:43.127 16:16:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:43.128 16:16:09 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:43.128 16:16:09 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:43.128 16:16:09 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:43.128 00:07:43.128 real 0m1.299s 00:07:43.128 user 0m1.207s 00:07:43.128 sys 0m0.105s 00:07:43.128 16:16:09 accel.accel_decomp_mthread -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:43.128 16:16:09 accel.accel_decomp_mthread -- 
common/autotest_common.sh@10 -- # set +x 00:07:43.128 ************************************ 00:07:43.128 END TEST accel_decomp_mthread 00:07:43.128 ************************************ 00:07:43.128 16:16:09 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:43.128 16:16:09 accel -- common/autotest_common.sh@1100 -- # '[' 13 -le 1 ']' 00:07:43.128 16:16:09 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:43.128 16:16:09 accel -- common/autotest_common.sh@10 -- # set +x 00:07:43.128 ************************************ 00:07:43.128 START TEST accel_decomp_full_mthread 00:07:43.128 ************************************ 00:07:43.128 16:16:09 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:43.128 16:16:09 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:07:43.128 16:16:09 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:07:43.128 16:16:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:43.128 16:16:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:43.128 16:16:09 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:43.128 16:16:09 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:43.128 16:16:09 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:07:43.128 16:16:09 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # 
accel_json_cfg=() 00:07:43.128 16:16:09 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:43.128 16:16:09 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:43.128 16:16:09 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:43.128 16:16:09 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:43.128 16:16:09 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:07:43.128 16:16:09 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:07:43.128 [2024-06-07 16:16:09.647186] Starting SPDK v24.09-pre git sha1 5a57befde / DPDK 24.03.0 initialization... 00:07:43.128 [2024-06-07 16:16:09.647279] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2904106 ] 00:07:43.128 EAL: No free 2048 kB hugepages reported on node 1 00:07:43.128 [2024-06-07 16:16:09.709093] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:43.128 [2024-06-07 16:16:09.778111] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:07:43.128 16:16:09 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:43.128 16:16:09 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:43.128 16:16:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:43.128 16:16:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:43.128 16:16:09 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:43.128 16:16:09 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:43.128 16:16:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:43.128 16:16:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:43.128 16:16:09 
accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:43.128 16:16:09 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:43.128 16:16:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:43.128 16:16:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:43.128 16:16:09 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:07:43.128 16:16:09 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:43.128 16:16:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:43.128 16:16:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:43.128 16:16:09 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:43.128 16:16:09 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:43.128 16:16:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:43.128 16:16:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:43.128 16:16:09 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:43.128 16:16:09 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:43.128 16:16:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:43.128 16:16:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:43.128 16:16:09 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:07:43.128 16:16:09 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:43.128 16:16:09 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:43.128 16:16:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:43.128 16:16:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:43.128 16:16:09 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 
bytes' 00:07:43.128 16:16:09 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:43.128 16:16:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:43.128 16:16:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:43.128 16:16:09 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:43.128 16:16:09 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:43.128 16:16:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:43.128 16:16:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:43.128 16:16:09 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:07:43.128 16:16:09 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:43.128 16:16:09 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:07:43.128 16:16:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:43.128 16:16:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:43.128 16:16:09 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:43.128 16:16:09 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:43.128 16:16:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:43.128 16:16:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:43.128 16:16:09 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:07:43.128 16:16:09 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:43.128 16:16:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:43.128 16:16:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:43.128 16:16:09 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 
00:07:43.128 16:16:09 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:43.128 16:16:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:43.128 16:16:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:43.128 16:16:09 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:07:43.128 16:16:09 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:43.128 16:16:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:43.128 16:16:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:43.128 16:16:09 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:07:43.128 16:16:09 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:43.128 16:16:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:43.128 16:16:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:43.128 16:16:09 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:07:43.128 16:16:09 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:43.128 16:16:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:43.128 16:16:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:43.128 16:16:09 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:43.128 16:16:09 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:43.128 16:16:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:43.128 16:16:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:43.128 16:16:09 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:43.128 16:16:09 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:43.128 16:16:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # 
IFS=: 00:07:43.128 16:16:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:44.543 16:16:10 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:44.543 16:16:10 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:44.543 16:16:10 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:44.543 16:16:10 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:44.543 16:16:10 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:44.543 16:16:10 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:44.543 16:16:10 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:44.543 16:16:10 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:44.543 16:16:10 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:44.543 16:16:10 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:44.543 16:16:10 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:44.543 16:16:10 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:44.543 16:16:10 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:44.543 16:16:10 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:44.543 16:16:10 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:44.543 16:16:10 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:44.543 16:16:10 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:44.543 16:16:10 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:44.543 16:16:10 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:44.543 16:16:10 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:44.543 16:16:10 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 
00:07:44.543 16:16:10 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:44.543 16:16:10 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:44.543 16:16:10 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:44.543 16:16:10 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:44.543 16:16:10 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:44.543 16:16:10 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:44.543 16:16:10 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:44.543 16:16:10 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:44.543 16:16:10 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:44.543 16:16:10 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:44.543 00:07:44.543 real 0m1.320s 00:07:44.543 user 0m1.237s 00:07:44.543 sys 0m0.097s 00:07:44.543 16:16:10 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:44.543 16:16:10 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:07:44.543 ************************************ 00:07:44.543 END TEST accel_decomp_full_mthread 00:07:44.543 ************************************ 00:07:44.543 16:16:10 accel -- accel/accel.sh@124 -- # [[ n == y ]] 00:07:44.543 16:16:10 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:44.543 16:16:10 accel -- common/autotest_common.sh@1100 -- # '[' 4 -le 1 ']' 00:07:44.543 16:16:10 accel -- accel/accel.sh@137 -- # build_accel_config 00:07:44.543 16:16:10 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:44.543 16:16:10 accel -- common/autotest_common.sh@10 -- # set +x 00:07:44.543 16:16:10 accel -- 
accel/accel.sh@31 -- # accel_json_cfg=() 00:07:44.543 16:16:10 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:44.543 16:16:10 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:44.543 16:16:10 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:44.543 16:16:10 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:44.543 16:16:10 accel -- accel/accel.sh@40 -- # local IFS=, 00:07:44.543 16:16:10 accel -- accel/accel.sh@41 -- # jq -r . 00:07:44.543 ************************************ 00:07:44.543 START TEST accel_dif_functional_tests 00:07:44.543 ************************************ 00:07:44.543 16:16:11 accel.accel_dif_functional_tests -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:44.543 [2024-06-07 16:16:11.060209] Starting SPDK v24.09-pre git sha1 5a57befde / DPDK 24.03.0 initialization... 00:07:44.543 [2024-06-07 16:16:11.060257] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2904311 ] 00:07:44.543 EAL: No free 2048 kB hugepages reported on node 1 00:07:44.543 [2024-06-07 16:16:11.121020] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:44.543 [2024-06-07 16:16:11.194610] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:07:44.543 [2024-06-07 16:16:11.194726] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 2 00:07:44.543 [2024-06-07 16:16:11.194729] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:07:44.543 00:07:44.543 00:07:44.543 CUnit - A unit testing framework for C - Version 2.1-3 00:07:44.543 http://cunit.sourceforge.net/ 00:07:44.543 00:07:44.543 00:07:44.543 Suite: accel_dif 00:07:44.543 Test: verify: DIF generated, GUARD check ...passed 00:07:44.543 Test: verify: DIF generated, APPTAG check ...passed 00:07:44.543 Test: verify: DIF 
generated, REFTAG check ...passed 00:07:44.544 Test: verify: DIF not generated, GUARD check ...[2024-06-07 16:16:11.250537] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:44.544 passed 00:07:44.544 Test: verify: DIF not generated, APPTAG check ...[2024-06-07 16:16:11.250582] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:44.544 passed 00:07:44.544 Test: verify: DIF not generated, REFTAG check ...[2024-06-07 16:16:11.250603] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:44.544 passed 00:07:44.544 Test: verify: APPTAG correct, APPTAG check ...passed 00:07:44.544 Test: verify: APPTAG incorrect, APPTAG check ...[2024-06-07 16:16:11.250652] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:07:44.544 passed 00:07:44.544 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:07:44.544 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:07:44.544 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:07:44.544 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-06-07 16:16:11.250760] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:07:44.544 passed 00:07:44.544 Test: verify copy: DIF generated, GUARD check ...passed 00:07:44.544 Test: verify copy: DIF generated, APPTAG check ...passed 00:07:44.544 Test: verify copy: DIF generated, REFTAG check ...passed 00:07:44.544 Test: verify copy: DIF not generated, GUARD check ...[2024-06-07 16:16:11.250883] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:44.544 passed 00:07:44.544 Test: verify copy: DIF not generated, APPTAG check ...[2024-06-07 16:16:11.250904] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:44.544 passed 00:07:44.544 Test: verify 
copy: DIF not generated, REFTAG check ...[2024-06-07 16:16:11.250932] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:44.544 passed 00:07:44.544 Test: generate copy: DIF generated, GUARD check ...passed 00:07:44.544 Test: generate copy: DIF generated, APTTAG check ...passed 00:07:44.544 Test: generate copy: DIF generated, REFTAG check ...passed 00:07:44.544 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:07:44.544 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:07:44.544 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:07:44.544 Test: generate copy: iovecs-len validate ...[2024-06-07 16:16:11.251113] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 00:07:44.544 passed 00:07:44.544 Test: generate copy: buffer alignment validate ...passed 00:07:44.544 00:07:44.544 Run Summary: Type Total Ran Passed Failed Inactive 00:07:44.544 suites 1 1 n/a 0 0 00:07:44.544 tests 26 26 26 0 0 00:07:44.544 asserts 115 115 115 0 n/a 00:07:44.544 00:07:44.544 Elapsed time = 0.002 seconds 00:07:44.544 00:07:44.544 real 0m0.352s 00:07:44.544 user 0m0.488s 00:07:44.544 sys 0m0.128s 00:07:44.544 16:16:11 accel.accel_dif_functional_tests -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:44.544 16:16:11 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:07:44.544 ************************************ 00:07:44.544 END TEST accel_dif_functional_tests 00:07:44.544 ************************************ 00:07:44.806 00:07:44.806 real 0m29.975s 00:07:44.806 user 0m33.756s 00:07:44.806 sys 0m3.975s 00:07:44.806 16:16:11 accel -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:44.806 16:16:11 accel -- common/autotest_common.sh@10 -- # set +x 00:07:44.806 ************************************ 00:07:44.806 END TEST accel 00:07:44.806 
************************************ 00:07:44.806 16:16:11 -- spdk/autotest.sh@184 -- # run_test accel_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:07:44.806 16:16:11 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:07:44.806 16:16:11 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:44.806 16:16:11 -- common/autotest_common.sh@10 -- # set +x 00:07:44.806 ************************************ 00:07:44.806 START TEST accel_rpc 00:07:44.806 ************************************ 00:07:44.806 16:16:11 accel_rpc -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:07:44.806 * Looking for test storage... 00:07:44.806 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:07:44.806 16:16:11 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:44.806 16:16:11 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=2904655 00:07:44.806 16:16:11 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 2904655 00:07:44.806 16:16:11 accel_rpc -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:07:44.806 16:16:11 accel_rpc -- common/autotest_common.sh@830 -- # '[' -z 2904655 ']' 00:07:44.806 16:16:11 accel_rpc -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:44.806 16:16:11 accel_rpc -- common/autotest_common.sh@835 -- # local max_retries=100 00:07:44.806 16:16:11 accel_rpc -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:44.806 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:44.806 16:16:11 accel_rpc -- common/autotest_common.sh@839 -- # xtrace_disable 00:07:44.806 16:16:11 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:44.806 [2024-06-07 16:16:11.646785] Starting SPDK v24.09-pre git sha1 5a57befde / DPDK 24.03.0 initialization... 00:07:44.806 [2024-06-07 16:16:11.646853] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2904655 ] 00:07:45.067 EAL: No free 2048 kB hugepages reported on node 1 00:07:45.068 [2024-06-07 16:16:11.712643] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:45.068 [2024-06-07 16:16:11.786285] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:07:45.639 16:16:12 accel_rpc -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:07:45.639 16:16:12 accel_rpc -- common/autotest_common.sh@863 -- # return 0 00:07:45.639 16:16:12 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:07:45.639 16:16:12 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:07:45.639 16:16:12 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:07:45.639 16:16:12 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:07:45.639 16:16:12 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:07:45.639 16:16:12 accel_rpc -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:07:45.639 16:16:12 accel_rpc -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:45.639 16:16:12 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:45.639 ************************************ 00:07:45.639 START TEST accel_assign_opcode 00:07:45.639 ************************************ 00:07:45.639 16:16:12 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1124 -- # accel_assign_opcode_test_suite 00:07:45.639 16:16:12 accel_rpc.accel_assign_opcode -- 
accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:07:45.639 16:16:12 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:45.639 16:16:12 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:45.639 [2024-06-07 16:16:12.468290] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:07:45.639 16:16:12 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:45.639 16:16:12 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:07:45.639 16:16:12 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:45.639 16:16:12 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:45.639 [2024-06-07 16:16:12.476298] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:07:45.639 16:16:12 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:45.639 16:16:12 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:07:45.640 16:16:12 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:45.640 16:16:12 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:45.900 16:16:12 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:45.900 16:16:12 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:07:45.900 16:16:12 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:07:45.900 16:16:12 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:07:45.900 16:16:12 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:45.900 16:16:12 accel_rpc.accel_assign_opcode -- 
common/autotest_common.sh@10 -- # set +x 00:07:45.900 16:16:12 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:45.900 software 00:07:45.900 00:07:45.900 real 0m0.208s 00:07:45.900 user 0m0.047s 00:07:45.901 sys 0m0.010s 00:07:45.901 16:16:12 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:45.901 16:16:12 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:45.901 ************************************ 00:07:45.901 END TEST accel_assign_opcode 00:07:45.901 ************************************ 00:07:45.901 16:16:12 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 2904655 00:07:45.901 16:16:12 accel_rpc -- common/autotest_common.sh@949 -- # '[' -z 2904655 ']' 00:07:45.901 16:16:12 accel_rpc -- common/autotest_common.sh@953 -- # kill -0 2904655 00:07:45.901 16:16:12 accel_rpc -- common/autotest_common.sh@954 -- # uname 00:07:45.901 16:16:12 accel_rpc -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:07:45.901 16:16:12 accel_rpc -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2904655 00:07:46.162 16:16:12 accel_rpc -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:07:46.162 16:16:12 accel_rpc -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:07:46.162 16:16:12 accel_rpc -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2904655' 00:07:46.162 killing process with pid 2904655 00:07:46.162 16:16:12 accel_rpc -- common/autotest_common.sh@968 -- # kill 2904655 00:07:46.162 16:16:12 accel_rpc -- common/autotest_common.sh@973 -- # wait 2904655 00:07:46.162 00:07:46.162 real 0m1.486s 00:07:46.162 user 0m1.577s 00:07:46.162 sys 0m0.409s 00:07:46.162 16:16:12 accel_rpc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:46.162 16:16:12 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:46.162 ************************************ 00:07:46.162 END TEST accel_rpc 00:07:46.162 
************************************ 00:07:46.162 16:16:13 -- spdk/autotest.sh@185 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:07:46.162 16:16:13 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:07:46.162 16:16:13 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:46.162 16:16:13 -- common/autotest_common.sh@10 -- # set +x 00:07:46.423 ************************************ 00:07:46.423 START TEST app_cmdline 00:07:46.423 ************************************ 00:07:46.423 16:16:13 app_cmdline -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:07:46.423 * Looking for test storage... 00:07:46.423 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:46.423 16:16:13 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:46.423 16:16:13 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=2904998 00:07:46.423 16:16:13 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 2904998 00:07:46.423 16:16:13 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:46.423 16:16:13 app_cmdline -- common/autotest_common.sh@830 -- # '[' -z 2904998 ']' 00:07:46.423 16:16:13 app_cmdline -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:46.423 16:16:13 app_cmdline -- common/autotest_common.sh@835 -- # local max_retries=100 00:07:46.423 16:16:13 app_cmdline -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:46.423 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:46.423 16:16:13 app_cmdline -- common/autotest_common.sh@839 -- # xtrace_disable 00:07:46.423 16:16:13 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:46.423 [2024-06-07 16:16:13.206434] Starting SPDK v24.09-pre git sha1 5a57befde / DPDK 24.03.0 initialization... 00:07:46.423 [2024-06-07 16:16:13.206500] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2904998 ] 00:07:46.423 EAL: No free 2048 kB hugepages reported on node 1 00:07:46.423 [2024-06-07 16:16:13.267574] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:46.684 [2024-06-07 16:16:13.332104] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:07:47.254 16:16:13 app_cmdline -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:07:47.254 16:16:13 app_cmdline -- common/autotest_common.sh@863 -- # return 0 00:07:47.254 16:16:13 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:07:47.254 { 00:07:47.254 "version": "SPDK v24.09-pre git sha1 5a57befde", 00:07:47.254 "fields": { 00:07:47.254 "major": 24, 00:07:47.254 "minor": 9, 00:07:47.254 "patch": 0, 00:07:47.254 "suffix": "-pre", 00:07:47.254 "commit": "5a57befde" 00:07:47.254 } 00:07:47.254 } 00:07:47.254 16:16:14 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:07:47.254 16:16:14 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:47.254 16:16:14 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:47.254 16:16:14 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:47.254 16:16:14 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:47.254 16:16:14 app_cmdline -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:47.254 
16:16:14 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:47.254 16:16:14 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:47.254 16:16:14 app_cmdline -- app/cmdline.sh@26 -- # sort 00:07:47.254 16:16:14 app_cmdline -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:47.515 16:16:14 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:47.515 16:16:14 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:47.515 16:16:14 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:47.515 16:16:14 app_cmdline -- common/autotest_common.sh@649 -- # local es=0 00:07:47.515 16:16:14 app_cmdline -- common/autotest_common.sh@651 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:47.515 16:16:14 app_cmdline -- common/autotest_common.sh@637 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:47.515 16:16:14 app_cmdline -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:07:47.515 16:16:14 app_cmdline -- common/autotest_common.sh@641 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:47.515 16:16:14 app_cmdline -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:07:47.515 16:16:14 app_cmdline -- common/autotest_common.sh@643 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:47.515 16:16:14 app_cmdline -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:07:47.515 16:16:14 app_cmdline -- common/autotest_common.sh@643 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:47.515 16:16:14 app_cmdline -- common/autotest_common.sh@643 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:07:47.515 16:16:14 app_cmdline -- 
common/autotest_common.sh@652 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:47.515 request: 00:07:47.515 { 00:07:47.515 "method": "env_dpdk_get_mem_stats", 00:07:47.515 "req_id": 1 00:07:47.515 } 00:07:47.515 Got JSON-RPC error response 00:07:47.515 response: 00:07:47.515 { 00:07:47.515 "code": -32601, 00:07:47.515 "message": "Method not found" 00:07:47.515 } 00:07:47.515 16:16:14 app_cmdline -- common/autotest_common.sh@652 -- # es=1 00:07:47.515 16:16:14 app_cmdline -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:07:47.515 16:16:14 app_cmdline -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:07:47.515 16:16:14 app_cmdline -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:07:47.515 16:16:14 app_cmdline -- app/cmdline.sh@1 -- # killprocess 2904998 00:07:47.515 16:16:14 app_cmdline -- common/autotest_common.sh@949 -- # '[' -z 2904998 ']' 00:07:47.515 16:16:14 app_cmdline -- common/autotest_common.sh@953 -- # kill -0 2904998 00:07:47.515 16:16:14 app_cmdline -- common/autotest_common.sh@954 -- # uname 00:07:47.515 16:16:14 app_cmdline -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:07:47.515 16:16:14 app_cmdline -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2904998 00:07:47.775 16:16:14 app_cmdline -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:07:47.775 16:16:14 app_cmdline -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:07:47.775 16:16:14 app_cmdline -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2904998' 00:07:47.775 killing process with pid 2904998 00:07:47.775 16:16:14 app_cmdline -- common/autotest_common.sh@968 -- # kill 2904998 00:07:47.775 16:16:14 app_cmdline -- common/autotest_common.sh@973 -- # wait 2904998 00:07:47.775 00:07:47.775 real 0m1.534s 00:07:47.775 user 0m1.821s 00:07:47.775 sys 0m0.400s 00:07:47.775 16:16:14 app_cmdline -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:47.775 
16:16:14 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:47.775 ************************************ 00:07:47.775 END TEST app_cmdline 00:07:47.775 ************************************ 00:07:47.775 16:16:14 -- spdk/autotest.sh@186 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:07:47.775 16:16:14 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:07:47.775 16:16:14 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:47.775 16:16:14 -- common/autotest_common.sh@10 -- # set +x 00:07:48.037 ************************************ 00:07:48.037 START TEST version 00:07:48.037 ************************************ 00:07:48.037 16:16:14 version -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:07:48.037 * Looking for test storage... 00:07:48.037 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:48.037 16:16:14 version -- app/version.sh@17 -- # get_header_version major 00:07:48.037 16:16:14 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:48.037 16:16:14 version -- app/version.sh@14 -- # cut -f2 00:07:48.037 16:16:14 version -- app/version.sh@14 -- # tr -d '"' 00:07:48.037 16:16:14 version -- app/version.sh@17 -- # major=24 00:07:48.037 16:16:14 version -- app/version.sh@18 -- # get_header_version minor 00:07:48.037 16:16:14 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:48.037 16:16:14 version -- app/version.sh@14 -- # cut -f2 00:07:48.037 16:16:14 version -- app/version.sh@14 -- # tr -d '"' 00:07:48.037 16:16:14 version -- app/version.sh@18 -- # minor=9 00:07:48.037 16:16:14 version -- app/version.sh@19 -- # get_header_version patch 00:07:48.037 16:16:14 version -- 
app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:48.037 16:16:14 version -- app/version.sh@14 -- # cut -f2 00:07:48.037 16:16:14 version -- app/version.sh@14 -- # tr -d '"' 00:07:48.037 16:16:14 version -- app/version.sh@19 -- # patch=0 00:07:48.037 16:16:14 version -- app/version.sh@20 -- # get_header_version suffix 00:07:48.037 16:16:14 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:48.037 16:16:14 version -- app/version.sh@14 -- # cut -f2 00:07:48.037 16:16:14 version -- app/version.sh@14 -- # tr -d '"' 00:07:48.037 16:16:14 version -- app/version.sh@20 -- # suffix=-pre 00:07:48.037 16:16:14 version -- app/version.sh@22 -- # version=24.9 00:07:48.037 16:16:14 version -- app/version.sh@25 -- # (( patch != 0 )) 00:07:48.037 16:16:14 version -- app/version.sh@28 -- # version=24.9rc0 00:07:48.037 16:16:14 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:48.037 16:16:14 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:48.037 16:16:14 version -- app/version.sh@30 -- # py_version=24.9rc0 00:07:48.037 16:16:14 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:07:48.037 00:07:48.037 real 0m0.177s 00:07:48.037 user 0m0.091s 00:07:48.037 sys 0m0.127s 00:07:48.037 16:16:14 version -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:48.037 16:16:14 version -- common/autotest_common.sh@10 -- # set +x 00:07:48.037 ************************************ 00:07:48.037 END TEST version 00:07:48.037 
************************************ 00:07:48.037 16:16:14 -- spdk/autotest.sh@188 -- # '[' 0 -eq 1 ']' 00:07:48.037 16:16:14 -- spdk/autotest.sh@198 -- # uname -s 00:07:48.037 16:16:14 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 00:07:48.037 16:16:14 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:07:48.037 16:16:14 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:07:48.037 16:16:14 -- spdk/autotest.sh@211 -- # '[' 0 -eq 1 ']' 00:07:48.037 16:16:14 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:07:48.037 16:16:14 -- spdk/autotest.sh@260 -- # timing_exit lib 00:07:48.037 16:16:14 -- common/autotest_common.sh@729 -- # xtrace_disable 00:07:48.037 16:16:14 -- common/autotest_common.sh@10 -- # set +x 00:07:48.299 16:16:14 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:07:48.299 16:16:14 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:07:48.299 16:16:14 -- spdk/autotest.sh@279 -- # '[' 1 -eq 1 ']' 00:07:48.299 16:16:14 -- spdk/autotest.sh@280 -- # export NET_TYPE 00:07:48.299 16:16:14 -- spdk/autotest.sh@283 -- # '[' tcp = rdma ']' 00:07:48.299 16:16:14 -- spdk/autotest.sh@286 -- # '[' tcp = tcp ']' 00:07:48.299 16:16:14 -- spdk/autotest.sh@287 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:48.299 16:16:14 -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:07:48.299 16:16:14 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:48.299 16:16:14 -- common/autotest_common.sh@10 -- # set +x 00:07:48.299 ************************************ 00:07:48.299 START TEST nvmf_tcp 00:07:48.299 ************************************ 00:07:48.299 16:16:14 nvmf_tcp -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:48.299 * Looking for test storage... 
00:07:48.299 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:07:48.299 16:16:15 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:07:48.299 16:16:15 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:07:48.299 16:16:15 nvmf_tcp -- nvmf/nvmf.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:48.299 16:16:15 nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:07:48.299 16:16:15 nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:48.299 16:16:15 nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:48.299 16:16:15 nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:48.299 16:16:15 nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:48.299 16:16:15 nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:48.299 16:16:15 nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:48.299 16:16:15 nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:48.299 16:16:15 nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:48.299 16:16:15 nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:48.299 16:16:15 nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:48.299 16:16:15 nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:48.299 16:16:15 nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:48.299 16:16:15 nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:48.299 16:16:15 nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:48.299 16:16:15 nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:48.299 16:16:15 nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:48.299 16:16:15 nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:48.299 
16:16:15 nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:48.299 16:16:15 nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:48.299 16:16:15 nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:48.300 16:16:15 nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:48.300 16:16:15 nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:48.300 16:16:15 nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:48.300 16:16:15 nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:07:48.300 16:16:15 nvmf_tcp -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:48.300 16:16:15 nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:07:48.300 16:16:15 nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:48.300 16:16:15 nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:48.300 16:16:15 nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:48.300 16:16:15 nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:48.300 16:16:15 nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:48.300 16:16:15 nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:48.300 16:16:15 nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:48.300 16:16:15 nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:48.300 16:16:15 nvmf_tcp -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:07:48.300 16:16:15 nvmf_tcp -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:07:48.300 16:16:15 nvmf_tcp -- nvmf/nvmf.sh@20 -- # timing_enter target 00:07:48.300 16:16:15 nvmf_tcp -- common/autotest_common.sh@723 -- # xtrace_disable 00:07:48.300 16:16:15 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:48.300 16:16:15 nvmf_tcp -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:07:48.300 16:16:15 nvmf_tcp -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:07:48.300 16:16:15 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:07:48.300 16:16:15 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:48.300 16:16:15 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:48.300 
************************************ 00:07:48.300 START TEST nvmf_example 00:07:48.300 ************************************ 00:07:48.300 16:16:15 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:07:48.561 * Looking for test storage... 00:07:48.561 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:48.561 16:16:15 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:48.561 16:16:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:07:48.561 16:16:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:48.561 16:16:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:48.561 16:16:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:48.561 16:16:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:48.561 16:16:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:48.561 16:16:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:48.561 16:16:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:48.561 16:16:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:48.561 16:16:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:48.561 16:16:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:48.561 16:16:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:48.561 16:16:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:48.561 16:16:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" 
"--hostid=$NVME_HOSTID") 00:07:48.561 16:16:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:48.561 16:16:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:48.561 16:16:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:48.561 16:16:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:48.561 16:16:15 nvmf_tcp.nvmf_example -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:48.561 16:16:15 nvmf_tcp.nvmf_example -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:48.561 16:16:15 nvmf_tcp.nvmf_example -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:48.561 16:16:15 nvmf_tcp.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:48.561 16:16:15 nvmf_tcp.nvmf_example -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:48.561 16:16:15 
nvmf_tcp.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:48.561 16:16:15 nvmf_tcp.nvmf_example -- paths/export.sh@5 -- # export PATH 00:07:48.561 16:16:15 nvmf_tcp.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:48.561 16:16:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@47 -- # : 0 00:07:48.561 16:16:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:48.561 16:16:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:48.561 16:16:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:48.561 16:16:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:48.561 16:16:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:48.561 16:16:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:48.561 16:16:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@35 -- # '[' 0 
-eq 1 ']' 00:07:48.562 16:16:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:48.562 16:16:15 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:07:48.562 16:16:15 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:07:48.562 16:16:15 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:07:48.562 16:16:15 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:07:48.562 16:16:15 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:07:48.562 16:16:15 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:07:48.562 16:16:15 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:07:48.562 16:16:15 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:07:48.562 16:16:15 nvmf_tcp.nvmf_example -- common/autotest_common.sh@723 -- # xtrace_disable 00:07:48.562 16:16:15 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:48.562 16:16:15 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:07:48.562 16:16:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:48.562 16:16:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:48.562 16:16:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:48.562 16:16:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:48.562 16:16:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:48.562 16:16:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:48.562 16:16:15 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:48.562 16:16:15 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:07:48.562 16:16:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:48.562 16:16:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:48.562 16:16:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@285 -- # xtrace_disable 00:07:48.562 16:16:15 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:56.702 16:16:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:56.702 16:16:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # pci_devs=() 00:07:56.702 16:16:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:56.702 16:16:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:56.702 16:16:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:56.702 16:16:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:56.702 16:16:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:56.702 16:16:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # net_devs=() 00:07:56.702 16:16:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:56.702 16:16:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # e810=() 00:07:56.702 16:16:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # local -ga e810 00:07:56.702 16:16:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # x722=() 00:07:56.702 16:16:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # local -ga x722 00:07:56.702 16:16:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # mlx=() 00:07:56.702 16:16:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # local -ga mlx 00:07:56.702 16:16:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:56.702 16:16:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:56.702 16:16:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:56.702 16:16:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:56.702 16:16:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:56.702 16:16:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:56.702 16:16:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:56.702 16:16:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:56.702 16:16:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:56.703 16:16:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:56.703 16:16:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:56.703 16:16:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:56.703 16:16:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:56.703 16:16:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:56.703 16:16:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:56.703 16:16:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:56.703 16:16:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:56.703 16:16:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:56.703 16:16:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:07:56.703 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:07:56.703 16:16:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:56.703 16:16:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:56.703 16:16:22 nvmf_tcp.nvmf_example 
-- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:56.703 16:16:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:56.703 16:16:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:56.703 16:16:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:56.703 16:16:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:07:56.703 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:07:56.703 16:16:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:56.703 16:16:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:56.703 16:16:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:56.703 16:16:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:56.703 16:16:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:56.703 16:16:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:56.703 16:16:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:56.703 16:16:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:56.703 16:16:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:56.703 16:16:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:56.703 16:16:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:56.703 16:16:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:56.703 16:16:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:56.703 16:16:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:56.703 16:16:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:56.703 16:16:22 nvmf_tcp.nvmf_example -- 
nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:07:56.703 Found net devices under 0000:4b:00.0: cvl_0_0 00:07:56.703 16:16:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:56.703 16:16:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:56.703 16:16:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:56.703 16:16:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:56.703 16:16:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:56.703 16:16:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:56.703 16:16:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:56.703 16:16:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:56.703 16:16:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:07:56.703 Found net devices under 0000:4b:00.1: cvl_0_1 00:07:56.703 16:16:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:56.703 16:16:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:56.703 16:16:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # is_hw=yes 00:07:56.703 16:16:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:56.703 16:16:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:56.703 16:16:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:56.703 16:16:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:56.703 16:16:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:56.703 16:16:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:56.703 16:16:22 
nvmf_tcp.nvmf_example -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:56.703 16:16:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:56.703 16:16:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:56.703 16:16:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:56.703 16:16:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:56.703 16:16:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:56.703 16:16:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:56.703 16:16:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:56.703 16:16:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:56.703 16:16:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:56.703 16:16:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:56.703 16:16:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:56.703 16:16:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:56.703 16:16:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:56.703 16:16:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:56.703 16:16:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:56.703 16:16:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:56.703 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:56.703 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.602 ms 00:07:56.703 00:07:56.703 --- 10.0.0.2 ping statistics --- 00:07:56.703 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:56.703 rtt min/avg/max/mdev = 0.602/0.602/0.602/0.000 ms 00:07:56.703 16:16:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:56.703 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:56.703 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.319 ms 00:07:56.703 00:07:56.703 --- 10.0.0.1 ping statistics --- 00:07:56.703 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:56.703 rtt min/avg/max/mdev = 0.319/0.319/0.319/0.000 ms 00:07:56.703 16:16:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:56.703 16:16:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@422 -- # return 0 00:07:56.703 16:16:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:56.703 16:16:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:56.703 16:16:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:56.703 16:16:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:56.703 16:16:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:56.703 16:16:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:56.703 16:16:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:56.703 16:16:22 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:07:56.703 16:16:22 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:07:56.703 16:16:22 nvmf_tcp.nvmf_example -- common/autotest_common.sh@723 -- # xtrace_disable 00:07:56.703 16:16:22 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:56.703 16:16:22 
nvmf_tcp.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:07:56.703 16:16:22 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:07:56.703 16:16:22 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=2909159 00:07:56.703 16:16:22 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:56.703 16:16:22 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 2909159 00:07:56.703 16:16:22 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:07:56.703 16:16:22 nvmf_tcp.nvmf_example -- common/autotest_common.sh@830 -- # '[' -z 2909159 ']' 00:07:56.703 16:16:22 nvmf_tcp.nvmf_example -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:56.703 16:16:22 nvmf_tcp.nvmf_example -- common/autotest_common.sh@835 -- # local max_retries=100 00:07:56.703 16:16:22 nvmf_tcp.nvmf_example -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:56.703 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:56.703 16:16:22 nvmf_tcp.nvmf_example -- common/autotest_common.sh@839 -- # xtrace_disable 00:07:56.703 16:16:22 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:56.703 EAL: No free 2048 kB hugepages reported on node 1 00:07:56.703 16:16:23 nvmf_tcp.nvmf_example -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:07:56.703 16:16:23 nvmf_tcp.nvmf_example -- common/autotest_common.sh@863 -- # return 0 00:07:56.703 16:16:23 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:07:56.703 16:16:23 nvmf_tcp.nvmf_example -- common/autotest_common.sh@729 -- # xtrace_disable 00:07:56.703 16:16:23 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:56.703 16:16:23 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:56.703 16:16:23 nvmf_tcp.nvmf_example -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:56.703 16:16:23 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:56.703 16:16:23 nvmf_tcp.nvmf_example -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:56.703 16:16:23 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:07:56.703 16:16:23 nvmf_tcp.nvmf_example -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:56.703 16:16:23 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:56.703 16:16:23 nvmf_tcp.nvmf_example -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:56.703 16:16:23 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:07:56.703 16:16:23 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:56.703 16:16:23 nvmf_tcp.nvmf_example -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:56.703 16:16:23 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:56.703 16:16:23 
nvmf_tcp.nvmf_example -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:56.703 16:16:23 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:07:56.703 16:16:23 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:56.703 16:16:23 nvmf_tcp.nvmf_example -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:56.703 16:16:23 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:56.703 16:16:23 nvmf_tcp.nvmf_example -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:56.703 16:16:23 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:56.703 16:16:23 nvmf_tcp.nvmf_example -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:56.703 16:16:23 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:56.703 16:16:23 nvmf_tcp.nvmf_example -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:56.703 16:16:23 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:07:56.703 16:16:23 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:07:56.703 EAL: No free 2048 kB hugepages reported on node 1 00:08:08.933 Initializing NVMe Controllers 00:08:08.933 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:08.933 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:08:08.933 Initialization complete. Launching workers. 
00:08:08.933 ======================================================== 00:08:08.933 Latency(us) 00:08:08.933 Device Information : IOPS MiB/s Average min max 00:08:08.933 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 18617.70 72.73 3437.87 834.72 15347.23 00:08:08.933 ======================================================== 00:08:08.933 Total : 18617.70 72.73 3437.87 834.72 15347.23 00:08:08.933 00:08:08.933 16:16:33 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:08:08.933 16:16:33 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:08:08.933 16:16:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:08.933 16:16:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@117 -- # sync 00:08:08.933 16:16:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:08.933 16:16:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@120 -- # set +e 00:08:08.933 16:16:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:08.933 16:16:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:08.933 rmmod nvme_tcp 00:08:08.933 rmmod nvme_fabrics 00:08:08.933 rmmod nvme_keyring 00:08:08.933 16:16:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:08.933 16:16:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@124 -- # set -e 00:08:08.933 16:16:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@125 -- # return 0 00:08:08.933 16:16:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@489 -- # '[' -n 2909159 ']' 00:08:08.933 16:16:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@490 -- # killprocess 2909159 00:08:08.933 16:16:33 nvmf_tcp.nvmf_example -- common/autotest_common.sh@949 -- # '[' -z 2909159 ']' 00:08:08.933 16:16:33 nvmf_tcp.nvmf_example -- common/autotest_common.sh@953 -- # kill -0 2909159 00:08:08.933 16:16:33 nvmf_tcp.nvmf_example -- common/autotest_common.sh@954 -- # uname 00:08:08.933 16:16:33 nvmf_tcp.nvmf_example -- 
common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:08:08.933 16:16:33 nvmf_tcp.nvmf_example -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2909159 00:08:08.933 16:16:33 nvmf_tcp.nvmf_example -- common/autotest_common.sh@955 -- # process_name=nvmf 00:08:08.933 16:16:33 nvmf_tcp.nvmf_example -- common/autotest_common.sh@959 -- # '[' nvmf = sudo ']' 00:08:08.933 16:16:33 nvmf_tcp.nvmf_example -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2909159' 00:08:08.933 killing process with pid 2909159 00:08:08.933 16:16:33 nvmf_tcp.nvmf_example -- common/autotest_common.sh@968 -- # kill 2909159 00:08:08.933 16:16:33 nvmf_tcp.nvmf_example -- common/autotest_common.sh@973 -- # wait 2909159 00:08:08.933 nvmf threads initialize successfully 00:08:08.933 bdev subsystem init successfully 00:08:08.933 created a nvmf target service 00:08:08.933 create targets's poll groups done 00:08:08.933 all subsystems of target started 00:08:08.933 nvmf target is running 00:08:08.933 all subsystems of target stopped 00:08:08.933 destroy targets's poll groups done 00:08:08.933 destroyed the nvmf target service 00:08:08.933 bdev subsystem finish successfully 00:08:08.933 nvmf threads destroy successfully 00:08:08.933 16:16:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:08.933 16:16:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:08.933 16:16:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:08.933 16:16:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:08.933 16:16:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:08.933 16:16:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:08.933 16:16:33 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:08.933 16:16:33 nvmf_tcp.nvmf_example -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:09.194 16:16:36 nvmf_tcp.nvmf_example -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:09.194 16:16:36 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:08:09.194 16:16:36 nvmf_tcp.nvmf_example -- common/autotest_common.sh@729 -- # xtrace_disable 00:08:09.194 16:16:36 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:09.456 00:08:09.456 real 0m20.946s 00:08:09.456 user 0m46.867s 00:08:09.456 sys 0m6.330s 00:08:09.456 16:16:36 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1125 -- # xtrace_disable 00:08:09.456 16:16:36 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:09.456 ************************************ 00:08:09.456 END TEST nvmf_example 00:08:09.456 ************************************ 00:08:09.456 16:16:36 nvmf_tcp -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:08:09.456 16:16:36 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:08:09.456 16:16:36 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:08:09.456 16:16:36 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:09.456 ************************************ 00:08:09.456 START TEST nvmf_filesystem 00:08:09.456 ************************************ 00:08:09.456 16:16:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:08:09.456 * Looking for test storage... 
00:08:09.456 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:09.456 16:16:36 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:08:09.456 16:16:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:08:09.456 16:16:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:08:09.456 16:16:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:08:09.456 16:16:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:08:09.456 16:16:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:08:09.456 16:16:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:08:09.456 16:16:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:08:09.456 16:16:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:08:09.456 16:16:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:08:09.456 16:16:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:08:09.456 16:16:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:08:09.456 16:16:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:08:09.456 16:16:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:08:09.456 16:16:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:08:09.456 16:16:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:08:09.456 16:16:36 
nvmf_tcp.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:08:09.456 16:16:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:08:09.456 16:16:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:08:09.456 16:16:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:08:09.456 16:16:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:08:09.456 16:16:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:08:09.456 16:16:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:08:09.456 16:16:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:08:09.456 16:16:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:08:09.456 16:16:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:08:09.456 16:16:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:08:09.456 16:16:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:08:09.456 16:16:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:08:09.456 16:16:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:08:09.456 16:16:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:08:09.456 16:16:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:08:09.456 16:16:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:08:09.456 16:16:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:08:09.456 16:16:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:08:09.456 16:16:36 nvmf_tcp.nvmf_filesystem -- 
common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:08:09.456 16:16:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:08:09.456 16:16:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:08:09.456 16:16:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:08:09.456 16:16:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:08:09.456 16:16:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:08:09.456 16:16:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:08:09.456 16:16:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:08:09.456 16:16:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:08:09.456 16:16:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:08:09.456 16:16:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:08:09.456 16:16:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:08:09.456 16:16:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:08:09.456 16:16:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:08:09.456 16:16:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:08:09.456 16:16:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:08:09.456 16:16:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:08:09.456 16:16:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:08:09.456 16:16:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:08:09.456 16:16:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 
00:08:09.456 16:16:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:08:09.456 16:16:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:08:09.456 16:16:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:08:09.456 16:16:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:08:09.456 16:16:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:08:09.456 16:16:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=y 00:08:09.456 16:16:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:08:09.456 16:16:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:08:09.456 16:16:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:08:09.456 16:16:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:08:09.456 16:16:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:08:09.456 16:16:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:08:09.456 16:16:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:08:09.456 16:16:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:08:09.456 16:16:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:08:09.456 16:16:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:08:09.456 16:16:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:08:09.456 16:16:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:08:09.456 16:16:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:08:09.456 16:16:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_SHARED=y 
00:08:09.456 16:16:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:08:09.456 16:16:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:08:09.456 16:16:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:08:09.456 16:16:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_FC=n 00:08:09.456 16:16:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:08:09.456 16:16:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:08:09.456 16:16:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:08:09.456 16:16:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:08:09.456 16:16:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:08:09.456 16:16:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:08:09.456 16:16:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES= 00:08:09.456 16:16:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:08:09.456 16:16:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:08:09.456 16:16:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:08:09.456 16:16:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:08:09.456 16:16:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:08:09.457 16:16:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_URING=n 00:08:09.457 16:16:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:08:09.457 16:16:36 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # dirname 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:08:09.457 16:16:36 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:08:09.457 16:16:36 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:08:09.457 16:16:36 nvmf_tcp.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:08:09.457 16:16:36 nvmf_tcp.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:08:09.457 16:16:36 nvmf_tcp.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:08:09.457 16:16:36 nvmf_tcp.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:08:09.457 16:16:36 nvmf_tcp.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:08:09.457 16:16:36 nvmf_tcp.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:08:09.457 16:16:36 nvmf_tcp.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:08:09.457 16:16:36 nvmf_tcp.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:08:09.457 16:16:36 nvmf_tcp.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:08:09.457 16:16:36 nvmf_tcp.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:08:09.457 16:16:36 nvmf_tcp.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:08:09.457 16:16:36 nvmf_tcp.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:08:09.457 #define SPDK_CONFIG_H 00:08:09.457 
#define SPDK_CONFIG_APPS 1 00:08:09.457 #define SPDK_CONFIG_ARCH native 00:08:09.457 #undef SPDK_CONFIG_ASAN 00:08:09.457 #undef SPDK_CONFIG_AVAHI 00:08:09.457 #undef SPDK_CONFIG_CET 00:08:09.457 #define SPDK_CONFIG_COVERAGE 1 00:08:09.457 #define SPDK_CONFIG_CROSS_PREFIX 00:08:09.457 #undef SPDK_CONFIG_CRYPTO 00:08:09.457 #undef SPDK_CONFIG_CRYPTO_MLX5 00:08:09.457 #undef SPDK_CONFIG_CUSTOMOCF 00:08:09.457 #undef SPDK_CONFIG_DAOS 00:08:09.457 #define SPDK_CONFIG_DAOS_DIR 00:08:09.457 #define SPDK_CONFIG_DEBUG 1 00:08:09.457 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:08:09.457 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:08:09.457 #define SPDK_CONFIG_DPDK_INC_DIR 00:08:09.457 #define SPDK_CONFIG_DPDK_LIB_DIR 00:08:09.457 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:08:09.457 #undef SPDK_CONFIG_DPDK_UADK 00:08:09.457 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:08:09.457 #define SPDK_CONFIG_EXAMPLES 1 00:08:09.457 #undef SPDK_CONFIG_FC 00:08:09.457 #define SPDK_CONFIG_FC_PATH 00:08:09.457 #define SPDK_CONFIG_FIO_PLUGIN 1 00:08:09.457 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:08:09.457 #undef SPDK_CONFIG_FUSE 00:08:09.457 #undef SPDK_CONFIG_FUZZER 00:08:09.457 #define SPDK_CONFIG_FUZZER_LIB 00:08:09.457 #undef SPDK_CONFIG_GOLANG 00:08:09.457 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:08:09.457 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:08:09.457 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:08:09.457 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:08:09.457 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:08:09.457 #undef SPDK_CONFIG_HAVE_LIBBSD 00:08:09.457 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:08:09.457 #define SPDK_CONFIG_IDXD 1 00:08:09.457 #define SPDK_CONFIG_IDXD_KERNEL 1 00:08:09.457 #undef SPDK_CONFIG_IPSEC_MB 00:08:09.457 #define SPDK_CONFIG_IPSEC_MB_DIR 00:08:09.457 #define SPDK_CONFIG_ISAL 1 00:08:09.457 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:08:09.457 #define SPDK_CONFIG_ISCSI_INITIATOR 1 
00:08:09.457 #define SPDK_CONFIG_LIBDIR 00:08:09.457 #undef SPDK_CONFIG_LTO 00:08:09.457 #define SPDK_CONFIG_MAX_LCORES 00:08:09.457 #define SPDK_CONFIG_NVME_CUSE 1 00:08:09.457 #undef SPDK_CONFIG_OCF 00:08:09.457 #define SPDK_CONFIG_OCF_PATH 00:08:09.457 #define SPDK_CONFIG_OPENSSL_PATH 00:08:09.457 #undef SPDK_CONFIG_PGO_CAPTURE 00:08:09.457 #define SPDK_CONFIG_PGO_DIR 00:08:09.457 #undef SPDK_CONFIG_PGO_USE 00:08:09.457 #define SPDK_CONFIG_PREFIX /usr/local 00:08:09.457 #undef SPDK_CONFIG_RAID5F 00:08:09.457 #undef SPDK_CONFIG_RBD 00:08:09.457 #define SPDK_CONFIG_RDMA 1 00:08:09.457 #define SPDK_CONFIG_RDMA_PROV verbs 00:08:09.457 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:08:09.457 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:08:09.457 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:08:09.457 #define SPDK_CONFIG_SHARED 1 00:08:09.457 #undef SPDK_CONFIG_SMA 00:08:09.457 #define SPDK_CONFIG_TESTS 1 00:08:09.457 #undef SPDK_CONFIG_TSAN 00:08:09.457 #define SPDK_CONFIG_UBLK 1 00:08:09.457 #define SPDK_CONFIG_UBSAN 1 00:08:09.457 #undef SPDK_CONFIG_UNIT_TESTS 00:08:09.457 #undef SPDK_CONFIG_URING 00:08:09.457 #define SPDK_CONFIG_URING_PATH 00:08:09.457 #undef SPDK_CONFIG_URING_ZNS 00:08:09.457 #undef SPDK_CONFIG_USDT 00:08:09.457 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:08:09.457 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:08:09.457 #define SPDK_CONFIG_VFIO_USER 1 00:08:09.457 #define SPDK_CONFIG_VFIO_USER_DIR 00:08:09.457 #define SPDK_CONFIG_VHOST 1 00:08:09.457 #define SPDK_CONFIG_VIRTIO 1 00:08:09.457 #undef SPDK_CONFIG_VTUNE 00:08:09.457 #define SPDK_CONFIG_VTUNE_DIR 00:08:09.457 #define SPDK_CONFIG_WERROR 1 00:08:09.457 #define SPDK_CONFIG_WPDK_DIR 00:08:09.457 #undef SPDK_CONFIG_XNVME 00:08:09.457 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:08:09.457 16:16:36 nvmf_tcp.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:08:09.457 16:16:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@55 -- 
# source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:09.457 16:16:36 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:09.457 16:16:36 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:09.457 16:16:36 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:09.457 16:16:36 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:09.457 16:16:36 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:09.457 16:16:36 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:09.457 16:16:36 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:08:09.457 16:16:36 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:09.457 16:16:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:08:09.457 16:16:36 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:08:09.457 16:16:36 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:08:09.457 16:16:36 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:08:09.721 16:16:36 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 
00:08:09.721 16:16:36 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:08:09.721 16:16:36 nvmf_tcp.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:08:09.721 16:16:36 nvmf_tcp.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:08:09.721 16:16:36 nvmf_tcp.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:08:09.721 16:16:36 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # uname -s 00:08:09.721 16:16:36 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:08:09.721 16:16:36 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:08:09.721 16:16:36 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:08:09.721 16:16:36 nvmf_tcp.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:08:09.721 16:16:36 nvmf_tcp.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:08:09.721 16:16:36 nvmf_tcp.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:08:09.721 16:16:36 nvmf_tcp.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:08:09.721 16:16:36 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:08:09.721 16:16:36 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:08:09.721 16:16:36 nvmf_tcp.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:08:09.721 16:16:36 nvmf_tcp.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:08:09.721 16:16:36 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:08:09.721 16:16:36 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:08:09.721 16:16:36 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ! 
-e /.dockerenv ]] 00:08:09.721 16:16:36 nvmf_tcp.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:08:09.721 16:16:36 nvmf_tcp.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:08:09.721 16:16:36 nvmf_tcp.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:08:09.721 16:16:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:08:09.721 16:16:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:08:09.721 16:16:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:08:09.721 16:16:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:08:09.721 16:16:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:08:09.721 16:16:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:08:09.721 16:16:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:08:09.721 16:16:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:08:09.721 16:16:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:08:09.721 16:16:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:08:09.721 16:16:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:08:09.721 16:16:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:08:09.721 16:16:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:08:09.721 16:16:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:08:09.721 16:16:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:08:09.721 16:16:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:08:09.721 16:16:36 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:08:09.721 16:16:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:08:09.721 16:16:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:08:09.721 16:16:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:08:09.721 16:16:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:08:09.721 16:16:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:08:09.721 16:16:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:08:09.721 16:16:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:08:09.721 16:16:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:08:09.721 16:16:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:08:09.721 16:16:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:08:09.721 16:16:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:08:09.721 16:16:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:08:09.721 16:16:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:08:09.721 16:16:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:08:09.721 16:16:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:08:09.721 16:16:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:08:09.721 16:16:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:08:09.721 16:16:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:08:09.721 16:16:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:08:09.721 16:16:36 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:08:09.721 16:16:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:08:09.721 16:16:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:08:09.721 16:16:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:08:09.721 16:16:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:08:09.721 16:16:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:08:09.721 16:16:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:08:09.721 16:16:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:08:09.721 16:16:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:08:09.722 16:16:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:08:09.722 16:16:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:08:09.722 16:16:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:08:09.722 16:16:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:08:09.722 16:16:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:08:09.722 16:16:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:08:09.722 16:16:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:08:09.722 16:16:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:08:09.722 16:16:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:08:09.722 16:16:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:08:09.722 16:16:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:08:09.722 16:16:36 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:08:09.722 16:16:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:08:09.722 16:16:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:08:09.722 16:16:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:08:09.722 16:16:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:08:09.722 16:16:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:08:09.722 16:16:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 1 00:08:09.722 16:16:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:08:09.722 16:16:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 00:08:09.722 16:16:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:08:09.722 16:16:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 0 00:08:09.722 16:16:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:08:09.722 16:16:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:08:09.722 16:16:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:08:09.722 16:16:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:08:09.722 16:16:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:08:09.722 16:16:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:08:09.722 16:16:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:08:09.722 16:16:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:08:09.722 16:16:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:08:09.722 16:16:36 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:08:09.722 16:16:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:08:09.722 16:16:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 00:08:09.722 16:16:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:08:09.722 16:16:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@140 -- # : true 00:08:09.722 16:16:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:08:09.722 16:16:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@142 -- # : 0 00:08:09.722 16:16:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:08:09.722 16:16:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:08:09.722 16:16:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:08:09.722 16:16:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:08:09.722 16:16:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:08:09.722 16:16:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:08:09.722 16:16:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:08:09.722 16:16:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:08:09.722 16:16:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:08:09.722 16:16:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:08:09.722 16:16:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:08:09.722 16:16:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:08:09.722 16:16:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:08:09.722 16:16:36 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:08:09.722 16:16:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:08:09.722 16:16:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:08:09.722 16:16:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:08:09.722 16:16:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:08:09.722 16:16:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:08:09.722 16:16:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:08:09.722 16:16:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL_DSA 00:08:09.722 16:16:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:08:09.722 16:16:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_IAA 00:08:09.722 16:16:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@167 -- # : 00:08:09.722 16:16:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@168 -- # export SPDK_TEST_FUZZER_TARGET 00:08:09.722 16:16:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 0 00:08:09.722 16:16:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_NVMF_MDNS 00:08:09.722 16:16:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:08:09.722 16:16:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_JSONRPC_GO_CLIENT 00:08:09.722 16:16:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:08:09.722 16:16:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:08:09.722 16:16:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # export 
DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:08:09.722 16:16:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:08:09.722 16:16:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:09.722 16:16:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:09.723 16:16:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@178 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:09.723 16:16:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@178 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:09.723 16:16:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@181 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:08:09.723 16:16:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@181 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:08:09.723 16:16:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@185 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:08:09.723 16:16:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@185 -- # 
PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:08:09.723 16:16:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@189 -- # export PYTHONDONTWRITEBYTECODE=1 00:08:09.723 16:16:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@189 -- # PYTHONDONTWRITEBYTECODE=1 00:08:09.723 16:16:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:08:09.723 16:16:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:08:09.723 16:16:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@194 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:08:09.723 16:16:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@194 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:08:09.723 16:16:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@198 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:08:09.723 16:16:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@199 -- # rm -rf /var/tmp/asan_suppression_file 00:08:09.723 16:16:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@200 -- # cat 00:08:09.723 16:16:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@236 -- # echo leak:libfuse3.so 00:08:09.723 16:16:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@238 -- # export 
LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:08:09.723 16:16:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@238 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:08:09.723 16:16:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@240 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:08:09.723 16:16:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@240 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:08:09.723 16:16:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@242 -- # '[' -z /var/spdk/dependencies ']' 00:08:09.723 16:16:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@245 -- # export DEPENDENCY_DIR 00:08:09.723 16:16:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:08:09.723 16:16:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:08:09.723 16:16:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@250 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:08:09.723 16:16:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@250 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:08:09.723 16:16:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:08:09.723 16:16:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:08:09.723 16:16:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@254 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:08:09.723 16:16:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@254 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:08:09.723 16:16:36 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@256 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer
00:08:09.723 16:16:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@256 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer
00:08:09.723 16:16:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@259 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:08:09.723 16:16:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@259 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes
00:08:09.723 16:16:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@262 -- # '[' 0 -eq 0 ']'
00:08:09.723 16:16:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@263 -- # export valgrind=
00:08:09.723 16:16:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@263 -- # valgrind=
00:08:09.723 16:16:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # uname -s
00:08:09.723 16:16:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # '[' Linux = Linux ']'
00:08:09.723 16:16:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@270 -- # HUGEMEM=4096
00:08:09.723 16:16:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # export CLEAR_HUGE=yes
00:08:09.723 16:16:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # CLEAR_HUGE=yes
00:08:09.723 16:16:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]]
00:08:09.723 16:16:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]]
00:08:09.723 16:16:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@279 -- # MAKE=make
00:08:09.723 16:16:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@280 -- # MAKEFLAGS=-j144
00:08:09.723 16:16:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@296 -- # export HUGEMEM=4096
00:08:09.723 16:16:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@296 -- # HUGEMEM=4096
00:08:09.723 16:16:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@298 -- # NO_HUGE=()
00:08:09.723 16:16:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@299 -- # TEST_MODE=
00:08:09.723 16:16:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@300 -- # for i in "$@"
00:08:09.723 16:16:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@301 -- # case "$i" in
00:08:09.723 16:16:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@306 -- # TEST_TRANSPORT=tcp
00:08:09.723 16:16:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@318 -- # [[ -z 2911969 ]]
00:08:09.723 16:16:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@318 -- # kill -0 2911969
00:08:09.723 16:16:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1679 -- # set_test_storage 2147483648
00:08:09.723 16:16:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@328 -- # [[ -v testdir ]]
00:08:09.723 16:16:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@330 -- # local requested_size=2147483648
00:08:09.723 16:16:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@331 -- # local mount target_dir
00:08:09.723 16:16:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@333 -- # local -A mounts fss sizes avails uses
00:08:09.723 16:16:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@334 -- # local source fs size avail mount use
00:08:09.724 16:16:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@336 -- # local storage_fallback storage_candidates
00:08:09.724 16:16:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@338 -- # mktemp -udt spdk.XXXXXX
00:08:09.724 16:16:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@338 -- # storage_fallback=/tmp/spdk.GUCwBs
00:08:09.724 16:16:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@343 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback")
00:08:09.724 16:16:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@345 -- # [[ -n '' ]]
00:08:09.724 16:16:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@350 -- # [[ -n '' ]]
00:08:09.724 16:16:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@355 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.GUCwBs/tests/target /tmp/spdk.GUCwBs
00:08:09.724 16:16:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@358 -- # requested_size=2214592512
00:08:09.724 16:16:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount
00:08:09.724 16:16:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # df -T
00:08:09.724 16:16:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # grep -v Filesystem
00:08:09.724 16:16:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_devtmpfs
00:08:09.724 16:16:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=devtmpfs
00:08:09.724 16:16:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=67108864
00:08:09.724 16:16:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=67108864
00:08:09.724 16:16:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=0
00:08:09.724 16:16:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount
00:08:09.724 16:16:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/pmem0
00:08:09.724 16:16:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=ext2
00:08:09.724 16:16:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=956665856
00:08:09.724 16:16:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=5284429824
00:08:09.724 16:16:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4327763968
00:08:09.724 16:16:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount
00:08:09.724 16:16:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_root
00:08:09.724 16:16:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=overlay
00:08:09.724 16:16:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=118703591424
00:08:09.724 16:16:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=129370980352
00:08:09.724 16:16:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=10667388928
00:08:09.724 16:16:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount
00:08:09.724 16:16:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs
00:08:09.724 16:16:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs
00:08:09.724 16:16:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=64680779776
00:08:09.724 16:16:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=64685490176
00:08:09.724 16:16:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4710400
00:08:09.724 16:16:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount
00:08:09.724 16:16:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs
00:08:09.724 16:16:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs
00:08:09.724 16:16:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=25864499200
00:08:09.724 16:16:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=25874198528
00:08:09.724 16:16:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=9699328
00:08:09.724 16:16:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount
00:08:09.724 16:16:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=efivarfs
00:08:09.724 16:16:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=efivarfs
00:08:09.724 16:16:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=216064
00:08:09.724 16:16:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=507904
00:08:09.724 16:16:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=287744
00:08:09.724 16:16:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount
00:08:09.724 16:16:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs
00:08:09.724 16:16:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs
00:08:09.724 16:16:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=64684634112
00:08:09.724 16:16:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=64685490176
00:08:09.724 16:16:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=856064
00:08:09.724 16:16:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount
00:08:09.724 16:16:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs
00:08:09.724 16:16:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs
00:08:09.724 16:16:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=12937093120
00:08:09.724 16:16:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=12937097216
00:08:09.724 16:16:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4096
00:08:09.724 16:16:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount
00:08:09.724 16:16:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@366 -- # printf '* Looking for test storage...\n'
00:08:09.724 * Looking for test storage...
00:08:09.724 16:16:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@368 -- # local target_space new_size
00:08:09.724 16:16:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@369 -- # for target_dir in "${storage_candidates[@]}"
00:08:09.724 16:16:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:08:09.724 16:16:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # awk '$1 !~ /Filesystem/{print $6}'
00:08:09.724 16:16:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # mount=/
00:08:09.724 16:16:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@374 -- # target_space=118703591424
00:08:09.724 16:16:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@375 -- # (( target_space == 0 || target_space < requested_size ))
00:08:09.724 16:16:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@378 -- # (( target_space >= requested_size ))
00:08:09.724 16:16:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ overlay == tmpfs ]]
00:08:09.724 16:16:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ overlay == ramfs ]]
00:08:09.724 16:16:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ / == / ]]
00:08:09.724 16:16:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@381 -- # new_size=12881981440
00:08:09.724 16:16:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@382 -- # (( new_size * 100 / sizes[/] > 95 ))
00:08:09.725 16:16:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:08:09.725 16:16:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:08:09.725 16:16:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@388 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:08:09.725 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:08:09.725 16:16:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@389 -- # return 0
00:08:09.725 16:16:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1681 -- # set -o errtrace
00:08:09.725 16:16:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1682 -- # shopt -s extdebug
00:08:09.725 16:16:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1683 -- # trap 'trap - ERR; print_backtrace >&2' ERR
00:08:09.725 16:16:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1685 -- # PS4=' \t $test_domain -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ '
00:08:09.725 16:16:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1686 -- # true
00:08:09.725 16:16:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1688 -- # xtrace_fd
00:08:09.725 16:16:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 14 ]]
00:08:09.725 16:16:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]]
00:08:09.725 16:16:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec
00:08:09.725 16:16:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec
00:08:09.725 16:16:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore
00:08:09.725 16:16:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]'
00:08:09.725 16:16:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 ))
00:08:09.725 16:16:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x
00:08:09.725 16:16:36 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:08:09.725 16:16:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s
00:08:09.725 16:16:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:08:09.725 16:16:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:08:09.725 16:16:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:08:09.725 16:16:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:08:09.725 16:16:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:08:09.725 16:16:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:08:09.725 16:16:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:08:09.725 16:16:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:08:09.725 16:16:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:08:09.725 16:16:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:08:09.725 16:16:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:08:09.725 16:16:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be
00:08:09.725 16:16:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:08:09.725 16:16:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:08:09.725 16:16:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:08:09.725 16:16:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:08:09.725 16:16:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:08:09.725 16:16:36 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:08:09.725 16:16:36 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:08:09.725 16:16:36 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:08:09.725 16:16:36 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:08:09.725 16:16:36 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:08:09.725 16:16:36 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:08:09.725 16:16:36 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH
00:08:09.725 16:16:36 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:08:09.725 16:16:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@47 -- # : 0
00:08:09.725 16:16:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:08:09.725 16:16:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:08:09.725 16:16:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:08:09.725 16:16:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:08:09.725 16:16:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:08:09.725 16:16:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:08:09.725 16:16:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:08:09.725 16:16:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@51 -- # have_pci_nics=0
00:08:09.725 16:16:36 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512
00:08:09.725 16:16:36 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512
00:08:09.725 16:16:36 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit
00:08:09.725 16:16:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@441 -- # '[' -z tcp ']'
00:08:09.725 16:16:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:08:09.725 16:16:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@448 -- # prepare_net_devs
00:08:09.725 16:16:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@410 -- # local -g is_hw=no
00:08:09.725 16:16:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@412 -- # remove_spdk_ns
00:08:09.726 16:16:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:08:09.726 16:16:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:08:09.726 16:16:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:08:09.726 16:16:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ phy != virt ]]
00:08:09.726 16:16:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs
00:08:09.726 16:16:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@285 -- # xtrace_disable
00:08:09.726 16:16:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x
00:08:16.322 16:16:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:08:16.322 16:16:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # pci_devs=()
00:08:16.322 16:16:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # local -a pci_devs
00:08:16.322 16:16:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # pci_net_devs=()
00:08:16.322 16:16:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # local -a pci_net_devs
00:08:16.322 16:16:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@293 -- # pci_drivers=()
00:08:16.322 16:16:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@293 -- # local -A pci_drivers
00:08:16.322 16:16:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # net_devs=()
00:08:16.322 16:16:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # local -ga net_devs
00:08:16.322 16:16:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # e810=()
00:08:16.322 16:16:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # local -ga e810
00:08:16.322 16:16:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # x722=()
00:08:16.322 16:16:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # local -ga x722
00:08:16.322 16:16:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # mlx=()
00:08:16.322 16:16:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # local -ga mlx
00:08:16.322 16:16:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:08:16.322 16:16:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:08:16.322 16:16:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:08:16.322 16:16:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:08:16.322 16:16:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:08:16.322 16:16:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:08:16.322 16:16:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:08:16.322 16:16:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:08:16.322 16:16:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:08:16.322 16:16:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:08:16.322 16:16:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:08:16.322 16:16:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}")
00:08:16.322 16:16:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]]
00:08:16.322 16:16:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]]
00:08:16.322 16:16:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]]
00:08:16.323 16:16:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}")
00:08:16.323 16:16:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@335 -- # (( 2 == 0 ))
00:08:16.323 16:16:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:08:16.323 16:16:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)'
00:08:16.323 Found 0000:4b:00.0 (0x8086 - 0x159b)
00:08:16.323 16:16:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:08:16.323 16:16:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:08:16.323 16:16:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:08:16.323 16:16:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:08:16.323 16:16:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:08:16.323 16:16:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:08:16.323 16:16:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)'
00:08:16.323 Found 0000:4b:00.1 (0x8086 - 0x159b)
00:08:16.323 16:16:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:08:16.323 16:16:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:08:16.323 16:16:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:08:16.323 16:16:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:08:16.323 16:16:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:08:16.323 16:16:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@366 -- # (( 0 > 0 ))
00:08:16.323 16:16:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]]
00:08:16.323 16:16:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]]
00:08:16.323 16:16:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:08:16.323 16:16:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:08:16.323 16:16:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:08:16.323 16:16:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:08:16.323 16:16:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]]
00:08:16.323 16:16:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:08:16.323 16:16:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:08:16.323 16:16:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0'
00:08:16.323 Found net devices under 0000:4b:00.0: cvl_0_0
00:08:16.323 16:16:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:08:16.323 16:16:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:08:16.323 16:16:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:08:16.323 16:16:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:08:16.323 16:16:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:08:16.323 16:16:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]]
00:08:16.323 16:16:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:08:16.323 16:16:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:08:16.323 16:16:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1'
00:08:16.323 Found net devices under 0000:4b:00.1: cvl_0_1
00:08:16.323 16:16:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:08:16.323 16:16:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@404 -- # (( 2 == 0 ))
00:08:16.323 16:16:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # is_hw=yes
00:08:16.323 16:16:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ yes == yes ]]
00:08:16.323 16:16:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]]
00:08:16.323 16:16:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@418 -- # nvmf_tcp_init
00:08:16.323 16:16:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1
00:08:16.323 16:16:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:08:16.323 16:16:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:08:16.323 16:16:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@234 -- # (( 2 > 1 ))
00:08:16.323 16:16:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:08:16.323 16:16:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:08:16.323 16:16:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP=
00:08:16.323 16:16:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:08:16.323 16:16:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:08:16.323 16:16:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:08:16.323 16:16:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:08:16.323 16:16:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:08:16.323 16:16:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:08:16.583 16:16:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:08:16.583 16:16:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:08:16.583 16:16:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:08:16.583 16:16:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:08:16.583 16:16:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:08:16.583 16:16:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:08:16.583 16:16:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:08:16.583 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:08:16.583 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.523 ms
00:08:16.583
00:08:16.583 --- 10.0.0.2 ping statistics ---
00:08:16.583 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:08:16.583 rtt min/avg/max/mdev = 0.523/0.523/0.523/0.000 ms
00:08:16.583 16:16:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:08:16.583 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:08:16.583 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.318 ms
00:08:16.583
00:08:16.583 --- 10.0.0.1 ping statistics ---
00:08:16.583 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:08:16.583 rtt min/avg/max/mdev = 0.318/0.318/0.318/0.000 ms
00:08:16.583 16:16:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:08:16.583 16:16:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@422 -- # return 0
00:08:16.583 16:16:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:08:16.583 16:16:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:08:16.583 16:16:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:08:16.583 16:16:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:08:16.583 16:16:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:08:16.583 16:16:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:08:16.583 16:16:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:08:16.583 16:16:43 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0
00:08:16.583 16:16:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']'
00:08:16.583 16:16:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1106 -- # xtrace_disable
00:08:16.583 16:16:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x
00:08:16.844 ************************************
00:08:16.844 START TEST nvmf_filesystem_no_in_capsule
00:08:16.844 ************************************
00:08:16.844 16:16:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1124 -- # nvmf_filesystem_part 0
00:08:16.844 16:16:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0
00:08:16.844 16:16:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF
00:08:16.844 16:16:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:08:16.844 16:16:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@723 -- # xtrace_disable
00:08:16.844 16:16:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:08:16.844 16:16:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=2915592
00:08:16.844 16:16:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 2915592
00:08:16.844 16:16:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:08:16.844 16:16:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@830 -- # '[' -z 2915592 ']'
00:08:16.844 16:16:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock
00:08:16.844 16:16:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@835 -- # local max_retries=100
00:08:16.844 16:16:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:08:16.844 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:08:16.844 16:16:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@839 -- # xtrace_disable
00:08:16.844 16:16:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:08:16.844 [2024-06-07 16:16:43.520613] Starting SPDK v24.09-pre git sha1 5a57befde / DPDK 24.03.0 initialization...
00:08:16.844 [2024-06-07 16:16:43.520661] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:08:16.844 EAL: No free 2048 kB hugepages reported on node 1
00:08:16.844 [2024-06-07 16:16:43.586712] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4
00:08:16.844 [2024-06-07 16:16:43.656002] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:08:16.844 [2024-06-07 16:16:43.656037] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:08:16.844 [2024-06-07 16:16:43.656044] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:08:16.844 [2024-06-07 16:16:43.656050] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running.
00:08:16.844 [2024-06-07 16:16:43.656056] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:08:16.844 [2024-06-07 16:16:43.656194] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1
00:08:16.844 [2024-06-07 16:16:43.656328] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 2
00:08:16.844 [2024-06-07 16:16:43.656488] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0
00:08:16.844 [2024-06-07 16:16:43.656489] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 3
00:08:17.785 16:16:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@859 -- # (( i == 0 ))
00:08:17.785 16:16:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@863 -- # return 0
00:08:17.785 16:16:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:08:17.785 16:16:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@729 -- # xtrace_disable
00:08:17.785 16:16:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:08:17.785 16:16:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:08:17.785 16:16:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1
00:08:17.785 16:16:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0
00:08:17.785 16:16:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable
00:08:17.785 16:16:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:08:17.785 [2024-06-07 16:16:44.337009] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:08:17.785 16:16:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:08:17.785 16:16:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1
00:08:17.785 16:16:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable
00:08:17.785 16:16:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:08:17.785 Malloc1
00:08:17.785 16:16:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:08:17.785 16:16:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
00:08:17.785 16:16:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable
00:08:17.785 16:16:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:08:17.785 16:16:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:08:17.785 16:16:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
00:08:17.785 16:16:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable
00:08:17.785 16:16:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:08:17.785 16:16:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:08:17.785 16:16:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:08:17.785 16:16:44
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:17.785 16:16:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:17.785 [2024-06-07 16:16:44.465754] tcp.c: 982:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:17.785 16:16:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:17.785 16:16:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:08:17.785 16:16:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1377 -- # local bdev_name=Malloc1 00:08:17.785 16:16:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_info 00:08:17.785 16:16:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # local bs 00:08:17.785 16:16:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # local nb 00:08:17.785 16:16:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:08:17.785 16:16:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:17.785 16:16:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:17.785 16:16:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:17.785 16:16:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # bdev_info='[ 00:08:17.785 { 00:08:17.785 "name": "Malloc1", 00:08:17.785 "aliases": [ 00:08:17.785 "7a009561-cf07-46d8-8e24-bd019da94e7a" 00:08:17.785 ], 00:08:17.785 "product_name": "Malloc disk", 
00:08:17.785 "block_size": 512, 00:08:17.785 "num_blocks": 1048576, 00:08:17.785 "uuid": "7a009561-cf07-46d8-8e24-bd019da94e7a", 00:08:17.785 "assigned_rate_limits": { 00:08:17.785 "rw_ios_per_sec": 0, 00:08:17.785 "rw_mbytes_per_sec": 0, 00:08:17.785 "r_mbytes_per_sec": 0, 00:08:17.785 "w_mbytes_per_sec": 0 00:08:17.785 }, 00:08:17.785 "claimed": true, 00:08:17.785 "claim_type": "exclusive_write", 00:08:17.785 "zoned": false, 00:08:17.785 "supported_io_types": { 00:08:17.785 "read": true, 00:08:17.785 "write": true, 00:08:17.785 "unmap": true, 00:08:17.785 "write_zeroes": true, 00:08:17.785 "flush": true, 00:08:17.785 "reset": true, 00:08:17.785 "compare": false, 00:08:17.785 "compare_and_write": false, 00:08:17.785 "abort": true, 00:08:17.785 "nvme_admin": false, 00:08:17.785 "nvme_io": false 00:08:17.785 }, 00:08:17.785 "memory_domains": [ 00:08:17.785 { 00:08:17.785 "dma_device_id": "system", 00:08:17.785 "dma_device_type": 1 00:08:17.785 }, 00:08:17.785 { 00:08:17.785 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:17.785 "dma_device_type": 2 00:08:17.785 } 00:08:17.785 ], 00:08:17.785 "driver_specific": {} 00:08:17.785 } 00:08:17.785 ]' 00:08:17.785 16:16:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # jq '.[] .block_size' 00:08:17.785 16:16:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # bs=512 00:08:17.785 16:16:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .num_blocks' 00:08:17.785 16:16:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # nb=1048576 00:08:17.785 16:16:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # bdev_size=512 00:08:17.785 16:16:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # echo 512 00:08:17.785 16:16:44 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:08:17.785 16:16:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:19.697 16:16:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:08:19.697 16:16:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1197 -- # local i=0 00:08:19.697 16:16:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:08:19.697 16:16:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:08:19.697 16:16:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # sleep 2 00:08:21.639 16:16:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:08:21.639 16:16:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:08:21.639 16:16:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:08:21.639 16:16:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:08:21.639 16:16:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:08:21.639 16:16:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # return 0 00:08:21.639 16:16:48 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:08:21.639 16:16:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:08:21.639 16:16:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:08:21.639 16:16:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:08:21.639 16:16:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:08:21.639 16:16:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:08:21.639 16:16:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:08:21.639 16:16:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:08:21.639 16:16:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:08:21.639 16:16:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:08:21.639 16:16:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:08:21.639 16:16:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:08:22.209 16:16:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:08:23.592 16:16:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:08:23.592 16:16:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 
00:08:23.592 16:16:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1100 -- # '[' 4 -le 1 ']' 00:08:23.592 16:16:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1106 -- # xtrace_disable 00:08:23.592 16:16:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:23.592 ************************************ 00:08:23.592 START TEST filesystem_ext4 00:08:23.592 ************************************ 00:08:23.592 16:16:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1124 -- # nvmf_filesystem_create ext4 nvme0n1 00:08:23.592 16:16:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:08:23.592 16:16:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:23.592 16:16:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:08:23.592 16:16:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@925 -- # local fstype=ext4 00:08:23.592 16:16:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@926 -- # local dev_name=/dev/nvme0n1p1 00:08:23.592 16:16:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # local i=0 00:08:23.592 16:16:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@928 -- # local force 00:08:23.592 16:16:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # '[' ext4 = ext4 ']' 00:08:23.592 16:16:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- 
common/autotest_common.sh@931 -- # force=-F 00:08:23.592 16:16:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@936 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:08:23.592 mke2fs 1.46.5 (30-Dec-2021) 00:08:23.592 Discarding device blocks: 0/522240 done 00:08:23.592 Creating filesystem with 522240 1k blocks and 130560 inodes 00:08:23.592 Filesystem UUID: d2882ac7-ad66-4ebc-8bfa-c5a9d5881590 00:08:23.592 Superblock backups stored on blocks: 00:08:23.592 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:08:23.592 00:08:23.592 Allocating group tables: 0/64 done 00:08:23.592 Writing inode tables: 0/64 done 00:08:23.592 Creating journal (8192 blocks): done 00:08:24.532 Writing superblocks and filesystem accounting information: 0/6426/64 done 00:08:24.532 00:08:24.532 16:16:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@944 -- # return 0 00:08:24.532 16:16:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:24.792 16:16:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:24.792 16:16:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:08:24.792 16:16:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:24.792 16:16:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:08:24.792 16:16:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:08:24.792 16:16:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:24.792 16:16:51 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 2915592 00:08:24.792 16:16:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:24.792 16:16:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:24.792 16:16:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:24.792 16:16:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:24.792 00:08:24.792 real 0m1.473s 00:08:24.792 user 0m0.031s 00:08:24.792 sys 0m0.065s 00:08:24.792 16:16:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1125 -- # xtrace_disable 00:08:24.792 16:16:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:08:24.792 ************************************ 00:08:24.792 END TEST filesystem_ext4 00:08:24.792 ************************************ 00:08:24.792 16:16:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:08:24.792 16:16:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1100 -- # '[' 4 -le 1 ']' 00:08:24.792 16:16:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1106 -- # xtrace_disable 00:08:24.792 16:16:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:24.792 ************************************ 00:08:24.792 START TEST filesystem_btrfs 00:08:24.793 ************************************ 00:08:24.793 16:16:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs 
-- common/autotest_common.sh@1124 -- # nvmf_filesystem_create btrfs nvme0n1 00:08:24.793 16:16:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:08:24.793 16:16:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:24.793 16:16:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:08:24.793 16:16:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@925 -- # local fstype=btrfs 00:08:24.793 16:16:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@926 -- # local dev_name=/dev/nvme0n1p1 00:08:24.793 16:16:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # local i=0 00:08:24.793 16:16:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@928 -- # local force 00:08:24.793 16:16:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # '[' btrfs = ext4 ']' 00:08:24.793 16:16:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # force=-f 00:08:24.793 16:16:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@936 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:08:25.052 btrfs-progs v6.6.2 00:08:25.053 See https://btrfs.readthedocs.io for more information. 00:08:25.053 00:08:25.053 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:08:25.053 NOTE: several default settings have changed in version 5.15, please make sure 00:08:25.053 this does not affect your deployments: 00:08:25.053 - DUP for metadata (-m dup) 00:08:25.053 - enabled no-holes (-O no-holes) 00:08:25.053 - enabled free-space-tree (-R free-space-tree) 00:08:25.053 00:08:25.053 Label: (null) 00:08:25.053 UUID: b8edc4e8-bfd2-4ff7-99f3-be4d3ad2cefc 00:08:25.053 Node size: 16384 00:08:25.053 Sector size: 4096 00:08:25.053 Filesystem size: 510.00MiB 00:08:25.053 Block group profiles: 00:08:25.053 Data: single 8.00MiB 00:08:25.053 Metadata: DUP 32.00MiB 00:08:25.053 System: DUP 8.00MiB 00:08:25.053 SSD detected: yes 00:08:25.053 Zoned device: no 00:08:25.053 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:08:25.053 Runtime features: free-space-tree 00:08:25.053 Checksum: crc32c 00:08:25.053 Number of devices: 1 00:08:25.053 Devices: 00:08:25.053 ID SIZE PATH 00:08:25.053 1 510.00MiB /dev/nvme0n1p1 00:08:25.053 00:08:25.053 16:16:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@944 -- # return 0 00:08:25.053 16:16:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:25.313 16:16:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:25.313 16:16:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:08:25.313 16:16:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:25.313 16:16:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:08:25.314 16:16:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:08:25.314 16:16:52 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:25.314 16:16:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 2915592 00:08:25.314 16:16:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:25.314 16:16:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:25.314 16:16:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:25.314 16:16:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:25.314 00:08:25.314 real 0m0.456s 00:08:25.314 user 0m0.019s 00:08:25.314 sys 0m0.134s 00:08:25.314 16:16:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1125 -- # xtrace_disable 00:08:25.314 16:16:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:08:25.314 ************************************ 00:08:25.314 END TEST filesystem_btrfs 00:08:25.314 ************************************ 00:08:25.314 16:16:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:08:25.314 16:16:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1100 -- # '[' 4 -le 1 ']' 00:08:25.314 16:16:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1106 -- # xtrace_disable 00:08:25.314 16:16:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:25.314 ************************************ 00:08:25.314 START TEST 
filesystem_xfs 00:08:25.314 ************************************ 00:08:25.314 16:16:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1124 -- # nvmf_filesystem_create xfs nvme0n1 00:08:25.314 16:16:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:08:25.314 16:16:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:25.314 16:16:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:08:25.314 16:16:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@925 -- # local fstype=xfs 00:08:25.314 16:16:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@926 -- # local dev_name=/dev/nvme0n1p1 00:08:25.314 16:16:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # local i=0 00:08:25.314 16:16:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@928 -- # local force 00:08:25.314 16:16:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # '[' xfs = ext4 ']' 00:08:25.314 16:16:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # force=-f 00:08:25.314 16:16:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@936 -- # mkfs.xfs -f /dev/nvme0n1p1 00:08:25.575 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:08:25.575 = sectsz=512 attr=2, projid32bit=1 00:08:25.575 = crc=1 finobt=1, sparse=1, rmapbt=0 00:08:25.575 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:08:25.575 data = bsize=4096 blocks=130560, imaxpct=25 
00:08:25.575 = sunit=0 swidth=0 blks 00:08:25.575 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:08:25.575 log =internal log bsize=4096 blocks=16384, version=2 00:08:25.575 = sectsz=512 sunit=0 blks, lazy-count=1 00:08:25.575 realtime =none extsz=4096 blocks=0, rtextents=0 00:08:26.516 Discarding blocks...Done. 00:08:26.516 16:16:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@944 -- # return 0 00:08:26.516 16:16:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:29.059 16:16:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:29.059 16:16:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:08:29.059 16:16:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:29.059 16:16:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:08:29.059 16:16:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:08:29.059 16:16:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:29.059 16:16:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 2915592 00:08:29.059 16:16:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:29.059 16:16:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:29.059 16:16:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 
00:08:29.059 16:16:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:29.059 00:08:29.059 real 0m3.454s 00:08:29.059 user 0m0.029s 00:08:29.059 sys 0m0.071s 00:08:29.059 16:16:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1125 -- # xtrace_disable 00:08:29.059 16:16:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:08:29.059 ************************************ 00:08:29.059 END TEST filesystem_xfs 00:08:29.059 ************************************ 00:08:29.059 16:16:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:08:29.059 16:16:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:08:29.059 16:16:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:29.059 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:29.059 16:16:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:29.059 16:16:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1218 -- # local i=0 00:08:29.059 16:16:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:08:29.059 16:16:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1219 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:29.059 16:16:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:08:29.059 16:16:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1226 -- # grep -q -w 
SPDKISFASTANDAWESOME 00:08:29.059 16:16:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1230 -- # return 0 00:08:29.059 16:16:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:29.060 16:16:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:29.060 16:16:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:29.060 16:16:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:29.060 16:16:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:08:29.060 16:16:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 2915592 00:08:29.060 16:16:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@949 -- # '[' -z 2915592 ']' 00:08:29.060 16:16:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # kill -0 2915592 00:08:29.060 16:16:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # uname 00:08:29.060 16:16:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:08:29.060 16:16:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2915592 00:08:29.321 16:16:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:08:29.321 16:16:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:08:29.321 16:16:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@967 -- # echo 'killing process with pid 2915592' 00:08:29.321 killing process with pid 2915592 00:08:29.321 16:16:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@968 -- # kill 2915592 00:08:29.321 16:16:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@973 -- # wait 2915592 00:08:29.321 16:16:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:08:29.321 00:08:29.321 real 0m12.716s 00:08:29.321 user 0m50.092s 00:08:29.321 sys 0m1.214s 00:08:29.321 16:16:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1125 -- # xtrace_disable 00:08:29.321 16:16:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:29.321 ************************************ 00:08:29.321 END TEST nvmf_filesystem_no_in_capsule 00:08:29.321 ************************************ 00:08:29.581 16:16:56 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:08:29.581 16:16:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:08:29.581 16:16:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1106 -- # xtrace_disable 00:08:29.581 16:16:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:08:29.581 ************************************ 00:08:29.581 START TEST nvmf_filesystem_in_capsule 00:08:29.581 ************************************ 00:08:29.581 16:16:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1124 -- # nvmf_filesystem_part 4096 00:08:29.581 16:16:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:08:29.581 16:16:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:08:29.581 16:16:56 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:29.581 16:16:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@723 -- # xtrace_disable 00:08:29.581 16:16:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:29.581 16:16:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=2918486 00:08:29.581 16:16:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 2918486 00:08:29.581 16:16:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@830 -- # '[' -z 2918486 ']' 00:08:29.581 16:16:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:29.581 16:16:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:29.581 16:16:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # local max_retries=100 00:08:29.581 16:16:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:29.581 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:29.581 16:16:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@839 -- # xtrace_disable 00:08:29.581 16:16:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:29.581 [2024-06-07 16:16:56.312672] Starting SPDK v24.09-pre git sha1 5a57befde / DPDK 24.03.0 initialization... 
00:08:29.581 [2024-06-07 16:16:56.312722] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:29.581 EAL: No free 2048 kB hugepages reported on node 1 00:08:29.581 [2024-06-07 16:16:56.389092] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:29.869 [2024-06-07 16:16:56.465823] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:29.869 [2024-06-07 16:16:56.465862] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:29.869 [2024-06-07 16:16:56.465869] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:29.869 [2024-06-07 16:16:56.465876] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:29.869 [2024-06-07 16:16:56.465882] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:08:29.869 [2024-06-07 16:16:56.466023] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:08:29.869 [2024-06-07 16:16:56.466147] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 2 00:08:29.869 [2024-06-07 16:16:56.466305] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:08:29.869 [2024-06-07 16:16:56.466307] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 3 00:08:30.440 16:16:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:08:30.440 16:16:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@863 -- # return 0 00:08:30.440 16:16:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:30.440 16:16:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@729 -- # xtrace_disable 00:08:30.440 16:16:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:30.440 16:16:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:30.440 16:16:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:08:30.440 16:16:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:08:30.440 16:16:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:30.440 16:16:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:30.440 [2024-06-07 16:16:57.139960] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:30.440 16:16:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 
00:08:30.440 16:16:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:08:30.440 16:16:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:30.440 16:16:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:30.440 Malloc1 00:08:30.440 16:16:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:30.440 16:16:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:30.440 16:16:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:30.440 16:16:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:30.440 16:16:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:30.440 16:16:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:30.440 16:16:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:30.440 16:16:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:30.440 16:16:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:30.440 16:16:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:30.440 16:16:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:30.440 16:16:57 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:30.440 [2024-06-07 16:16:57.267525] tcp.c: 982:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:30.440 16:16:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:30.440 16:16:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:08:30.440 16:16:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1377 -- # local bdev_name=Malloc1 00:08:30.440 16:16:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_info 00:08:30.440 16:16:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # local bs 00:08:30.440 16:16:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # local nb 00:08:30.440 16:16:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:08:30.440 16:16:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:30.440 16:16:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:30.701 16:16:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:30.701 16:16:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # bdev_info='[ 00:08:30.701 { 00:08:30.701 "name": "Malloc1", 00:08:30.701 "aliases": [ 00:08:30.701 "93f03d1c-9d42-4bb4-8428-4c1225c03d18" 00:08:30.701 ], 00:08:30.701 "product_name": "Malloc disk", 00:08:30.701 "block_size": 512, 00:08:30.701 "num_blocks": 1048576, 00:08:30.701 "uuid": "93f03d1c-9d42-4bb4-8428-4c1225c03d18", 00:08:30.701 "assigned_rate_limits": { 
00:08:30.701 "rw_ios_per_sec": 0, 00:08:30.701 "rw_mbytes_per_sec": 0, 00:08:30.701 "r_mbytes_per_sec": 0, 00:08:30.701 "w_mbytes_per_sec": 0 00:08:30.701 }, 00:08:30.701 "claimed": true, 00:08:30.701 "claim_type": "exclusive_write", 00:08:30.701 "zoned": false, 00:08:30.701 "supported_io_types": { 00:08:30.701 "read": true, 00:08:30.701 "write": true, 00:08:30.701 "unmap": true, 00:08:30.701 "write_zeroes": true, 00:08:30.701 "flush": true, 00:08:30.701 "reset": true, 00:08:30.701 "compare": false, 00:08:30.701 "compare_and_write": false, 00:08:30.701 "abort": true, 00:08:30.701 "nvme_admin": false, 00:08:30.701 "nvme_io": false 00:08:30.701 }, 00:08:30.701 "memory_domains": [ 00:08:30.701 { 00:08:30.701 "dma_device_id": "system", 00:08:30.701 "dma_device_type": 1 00:08:30.701 }, 00:08:30.701 { 00:08:30.701 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:30.701 "dma_device_type": 2 00:08:30.701 } 00:08:30.701 ], 00:08:30.701 "driver_specific": {} 00:08:30.701 } 00:08:30.701 ]' 00:08:30.701 16:16:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # jq '.[] .block_size' 00:08:30.701 16:16:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # bs=512 00:08:30.701 16:16:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .num_blocks' 00:08:30.701 16:16:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # nb=1048576 00:08:30.701 16:16:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # bdev_size=512 00:08:30.701 16:16:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # echo 512 00:08:30.701 16:16:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:08:30.701 16:16:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme 
connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:32.084 16:16:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:08:32.084 16:16:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1197 -- # local i=0 00:08:32.084 16:16:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:08:32.084 16:16:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:08:32.084 16:16:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # sleep 2 00:08:34.630 16:17:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:08:34.630 16:17:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:08:34.630 16:17:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:08:34.630 16:17:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:08:34.630 16:17:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:08:34.630 16:17:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # return 0 00:08:34.630 16:17:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:08:34.630 16:17:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:08:34.630 16:17:00 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:08:34.630 16:17:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:08:34.630 16:17:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:08:34.630 16:17:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:08:34.630 16:17:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:08:34.630 16:17:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:08:34.630 16:17:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:08:34.630 16:17:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:08:34.630 16:17:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:08:34.630 16:17:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:08:34.891 16:17:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:08:35.833 16:17:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:08:35.833 16:17:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:08:35.833 16:17:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1100 -- # '[' 4 -le 1 ']' 00:08:35.833 16:17:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1106 -- # xtrace_disable 00:08:35.833 16:17:02 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:35.833 ************************************ 00:08:35.833 START TEST filesystem_in_capsule_ext4 00:08:35.833 ************************************ 00:08:35.833 16:17:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1124 -- # nvmf_filesystem_create ext4 nvme0n1 00:08:36.095 16:17:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:08:36.095 16:17:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:36.095 16:17:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:08:36.095 16:17:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@925 -- # local fstype=ext4 00:08:36.095 16:17:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@926 -- # local dev_name=/dev/nvme0n1p1 00:08:36.095 16:17:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # local i=0 00:08:36.095 16:17:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@928 -- # local force 00:08:36.095 16:17:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # '[' ext4 = ext4 ']' 00:08:36.095 16:17:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # force=-F 00:08:36.095 16:17:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@936 -- # mkfs.ext4 -F /dev/nvme0n1p1 
00:08:36.095 mke2fs 1.46.5 (30-Dec-2021) 00:08:36.095 Discarding device blocks: 0/522240 done 00:08:36.095 Creating filesystem with 522240 1k blocks and 130560 inodes 00:08:36.095 Filesystem UUID: f57a8a28-26bd-4cbd-bfce-6bc6d549e786 00:08:36.095 Superblock backups stored on blocks: 00:08:36.095 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:08:36.095 00:08:36.095 Allocating group tables: 0/64 done 00:08:36.095 Writing inode tables: 0/64 done 00:08:39.394 Creating journal (8192 blocks): done 00:08:39.394 Writing superblocks and filesystem accounting information: 0/64 done 00:08:39.394 00:08:39.394 16:17:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@944 -- # return 0 00:08:39.394 16:17:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:39.655 16:17:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:39.916 16:17:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:08:39.916 16:17:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:39.916 16:17:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:08:39.916 16:17:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:08:39.916 16:17:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:39.916 16:17:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 2918486 00:08:39.916 16:17:06 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:39.916 16:17:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:39.916 16:17:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:39.916 16:17:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:39.916 00:08:39.916 real 0m3.928s 00:08:39.916 user 0m0.027s 00:08:39.916 sys 0m0.073s 00:08:39.917 16:17:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1125 -- # xtrace_disable 00:08:39.917 16:17:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:08:39.917 ************************************ 00:08:39.917 END TEST filesystem_in_capsule_ext4 00:08:39.917 ************************************ 00:08:39.917 16:17:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:08:39.917 16:17:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1100 -- # '[' 4 -le 1 ']' 00:08:39.917 16:17:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1106 -- # xtrace_disable 00:08:39.917 16:17:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:39.917 ************************************ 00:08:39.917 START TEST filesystem_in_capsule_btrfs 00:08:39.917 ************************************ 00:08:39.917 16:17:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1124 -- # nvmf_filesystem_create 
btrfs nvme0n1 00:08:39.917 16:17:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:08:39.917 16:17:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:39.917 16:17:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:08:39.917 16:17:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@925 -- # local fstype=btrfs 00:08:39.917 16:17:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@926 -- # local dev_name=/dev/nvme0n1p1 00:08:39.917 16:17:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # local i=0 00:08:39.917 16:17:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@928 -- # local force 00:08:39.917 16:17:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # '[' btrfs = ext4 ']' 00:08:39.917 16:17:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # force=-f 00:08:39.917 16:17:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@936 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:08:40.177 btrfs-progs v6.6.2 00:08:40.177 See https://btrfs.readthedocs.io for more information. 00:08:40.177 00:08:40.177 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:08:40.177 NOTE: several default settings have changed in version 5.15, please make sure 00:08:40.177 this does not affect your deployments: 00:08:40.177 - DUP for metadata (-m dup) 00:08:40.177 - enabled no-holes (-O no-holes) 00:08:40.177 - enabled free-space-tree (-R free-space-tree) 00:08:40.177 00:08:40.177 Label: (null) 00:08:40.177 UUID: 12745d1f-0197-4531-a559-b8e3bbbcdd96 00:08:40.177 Node size: 16384 00:08:40.177 Sector size: 4096 00:08:40.177 Filesystem size: 510.00MiB 00:08:40.177 Block group profiles: 00:08:40.177 Data: single 8.00MiB 00:08:40.177 Metadata: DUP 32.00MiB 00:08:40.177 System: DUP 8.00MiB 00:08:40.177 SSD detected: yes 00:08:40.177 Zoned device: no 00:08:40.177 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:08:40.177 Runtime features: free-space-tree 00:08:40.177 Checksum: crc32c 00:08:40.177 Number of devices: 1 00:08:40.177 Devices: 00:08:40.177 ID SIZE PATH 00:08:40.177 1 510.00MiB /dev/nvme0n1p1 00:08:40.177 00:08:40.177 16:17:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@944 -- # return 0 00:08:40.177 16:17:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:40.749 16:17:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:40.749 16:17:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:08:40.749 16:17:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:40.749 16:17:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:08:40.749 16:17:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- 
target/filesystem.sh@29 -- # i=0 00:08:40.749 16:17:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:40.749 16:17:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 2918486 00:08:40.749 16:17:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:40.749 16:17:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:40.749 16:17:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:40.749 16:17:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:40.749 00:08:40.749 real 0m0.730s 00:08:40.749 user 0m0.026s 00:08:40.749 sys 0m0.133s 00:08:40.749 16:17:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1125 -- # xtrace_disable 00:08:40.749 16:17:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:08:40.749 ************************************ 00:08:40.749 END TEST filesystem_in_capsule_btrfs 00:08:40.749 ************************************ 00:08:40.749 16:17:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:08:40.749 16:17:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1100 -- # '[' 4 -le 1 ']' 00:08:40.749 16:17:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1106 -- # xtrace_disable 00:08:40.749 16:17:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule 
-- common/autotest_common.sh@10 -- # set +x 00:08:40.749 ************************************ 00:08:40.749 START TEST filesystem_in_capsule_xfs 00:08:40.749 ************************************ 00:08:40.749 16:17:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1124 -- # nvmf_filesystem_create xfs nvme0n1 00:08:40.749 16:17:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:08:40.749 16:17:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:40.749 16:17:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:08:40.749 16:17:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@925 -- # local fstype=xfs 00:08:40.749 16:17:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@926 -- # local dev_name=/dev/nvme0n1p1 00:08:40.749 16:17:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # local i=0 00:08:40.749 16:17:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@928 -- # local force 00:08:40.749 16:17:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # '[' xfs = ext4 ']' 00:08:40.749 16:17:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # force=-f 00:08:40.749 16:17:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@936 -- # mkfs.xfs -f /dev/nvme0n1p1 00:08:40.749 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 
00:08:40.749 = sectsz=512 attr=2, projid32bit=1 00:08:40.749 = crc=1 finobt=1, sparse=1, rmapbt=0 00:08:40.749 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:08:40.749 data = bsize=4096 blocks=130560, imaxpct=25 00:08:40.749 = sunit=0 swidth=0 blks 00:08:40.749 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:08:40.749 log =internal log bsize=4096 blocks=16384, version=2 00:08:40.749 = sectsz=512 sunit=0 blks, lazy-count=1 00:08:40.749 realtime =none extsz=4096 blocks=0, rtextents=0 00:08:42.136 Discarding blocks...Done. 00:08:42.136 16:17:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@944 -- # return 0 00:08:42.136 16:17:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:44.712 16:17:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:44.712 16:17:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:08:44.712 16:17:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:44.712 16:17:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:08:44.712 16:17:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:08:44.712 16:17:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:44.712 16:17:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 2918486 00:08:44.712 16:17:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l 
-o NAME 00:08:44.712 16:17:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:44.712 16:17:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:44.712 16:17:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:44.712 00:08:44.712 real 0m3.802s 00:08:44.712 user 0m0.026s 00:08:44.712 sys 0m0.078s 00:08:44.712 16:17:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1125 -- # xtrace_disable 00:08:44.712 16:17:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:08:44.712 ************************************ 00:08:44.712 END TEST filesystem_in_capsule_xfs 00:08:44.712 ************************************ 00:08:44.712 16:17:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:08:44.712 16:17:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:08:44.973 16:17:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:45.234 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:45.234 16:17:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:45.234 16:17:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1218 -- # local i=0 00:08:45.234 16:17:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:08:45.234 16:17:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
common/autotest_common.sh@1219 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:45.234 16:17:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:08:45.234 16:17:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1226 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:45.234 16:17:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1230 -- # return 0 00:08:45.234 16:17:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:45.234 16:17:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:45.234 16:17:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:45.234 16:17:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:45.234 16:17:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:08:45.234 16:17:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 2918486 00:08:45.234 16:17:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@949 -- # '[' -z 2918486 ']' 00:08:45.234 16:17:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # kill -0 2918486 00:08:45.234 16:17:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # uname 00:08:45.234 16:17:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:08:45.234 16:17:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2918486 00:08:45.234 16:17:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
common/autotest_common.sh@955 -- # process_name=reactor_0 00:08:45.234 16:17:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:08:45.234 16:17:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2918486' 00:08:45.234 killing process with pid 2918486 00:08:45.234 16:17:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@968 -- # kill 2918486 00:08:45.234 16:17:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@973 -- # wait 2918486 00:08:45.495 16:17:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:08:45.495 00:08:45.495 real 0m15.952s 00:08:45.495 user 1m2.887s 00:08:45.495 sys 0m1.309s 00:08:45.495 16:17:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1125 -- # xtrace_disable 00:08:45.495 16:17:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:45.495 ************************************ 00:08:45.495 END TEST nvmf_filesystem_in_capsule 00:08:45.495 ************************************ 00:08:45.495 16:17:12 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:08:45.495 16:17:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:45.495 16:17:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@117 -- # sync 00:08:45.495 16:17:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:45.495 16:17:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@120 -- # set +e 00:08:45.495 16:17:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:45.495 16:17:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:45.495 rmmod nvme_tcp 00:08:45.495 rmmod nvme_fabrics 00:08:45.495 rmmod nvme_keyring 00:08:45.495 16:17:12 
nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:45.495 16:17:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@124 -- # set -e 00:08:45.495 16:17:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@125 -- # return 0 00:08:45.495 16:17:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:08:45.495 16:17:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:45.495 16:17:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:45.495 16:17:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:45.495 16:17:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:45.495 16:17:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:45.495 16:17:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:45.495 16:17:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:45.495 16:17:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:48.040 16:17:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:48.040 00:08:48.041 real 0m38.226s 00:08:48.041 user 1m55.172s 00:08:48.041 sys 0m7.823s 00:08:48.041 16:17:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1125 -- # xtrace_disable 00:08:48.041 16:17:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:08:48.041 ************************************ 00:08:48.041 END TEST nvmf_filesystem 00:08:48.041 ************************************ 00:08:48.041 16:17:14 nvmf_tcp -- nvmf/nvmf.sh@25 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:08:48.041 16:17:14 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:08:48.041 16:17:14 nvmf_tcp -- 
common/autotest_common.sh@1106 -- # xtrace_disable 00:08:48.041 16:17:14 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:48.041 ************************************ 00:08:48.041 START TEST nvmf_target_discovery 00:08:48.041 ************************************ 00:08:48.041 16:17:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:08:48.041 * Looking for test storage... 00:08:48.041 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:48.041 16:17:14 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:48.041 16:17:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:08:48.041 16:17:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:48.041 16:17:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:48.041 16:17:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:48.041 16:17:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:48.041 16:17:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:48.041 16:17:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:48.041 16:17:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:48.041 16:17:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:48.041 16:17:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:48.041 16:17:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:48.041 16:17:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:48.041 16:17:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:48.041 16:17:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:48.041 16:17:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:48.041 16:17:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:48.041 16:17:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:48.041 16:17:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:48.041 16:17:14 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:48.041 16:17:14 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:48.041 16:17:14 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:48.041 16:17:14 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:48.041 16:17:14 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:48.041 16:17:14 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:48.041 16:17:14 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:08:48.041 16:17:14 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:48.041 16:17:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@47 -- # : 0 00:08:48.041 16:17:14 nvmf_tcp.nvmf_target_discovery -- 
nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:48.041 16:17:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:48.041 16:17:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:48.041 16:17:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:48.041 16:17:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:48.041 16:17:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:48.041 16:17:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:48.041 16:17:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:48.041 16:17:14 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:08:48.041 16:17:14 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:08:48.041 16:17:14 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:08:48.041 16:17:14 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:08:48.041 16:17:14 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:08:48.041 16:17:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:48.041 16:17:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:48.041 16:17:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:48.041 16:17:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:48.041 16:17:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:48.041 16:17:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:48.041 16:17:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:08:48.041 16:17:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:48.041 16:17:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:48.041 16:17:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:48.041 16:17:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:08:48.041 16:17:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:54.637 16:17:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:54.637 16:17:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:08:54.637 16:17:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:54.637 16:17:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:54.637 16:17:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:54.637 16:17:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:54.637 16:17:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:54.637 16:17:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:08:54.637 16:17:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:54.637 16:17:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # e810=() 00:08:54.637 16:17:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:08:54.637 16:17:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # x722=() 00:08:54.637 16:17:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:08:54.637 16:17:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # mlx=() 00:08:54.637 16:17:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:08:54.637 16:17:21 
nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:54.638 16:17:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:54.638 16:17:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:54.638 16:17:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:54.638 16:17:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:54.638 16:17:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:54.638 16:17:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:54.638 16:17:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:54.638 16:17:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:54.638 16:17:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:54.638 16:17:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:54.638 16:17:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:54.638 16:17:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:54.638 16:17:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:54.638 16:17:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:54.638 16:17:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:54.638 16:17:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:54.638 16:17:21 nvmf_tcp.nvmf_target_discovery -- 
nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:54.638 16:17:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:08:54.638 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:08:54.638 16:17:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:54.638 16:17:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:54.638 16:17:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:54.638 16:17:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:54.638 16:17:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:54.638 16:17:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:54.638 16:17:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:08:54.638 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:08:54.638 16:17:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:54.638 16:17:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:54.638 16:17:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:54.638 16:17:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:54.638 16:17:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:54.638 16:17:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:54.638 16:17:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:54.638 16:17:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:54.638 16:17:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:54.638 16:17:21 nvmf_tcp.nvmf_target_discovery -- 
nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:54.638 16:17:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:54.638 16:17:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:54.638 16:17:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:54.638 16:17:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:54.638 16:17:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:54.638 16:17:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:08:54.638 Found net devices under 0000:4b:00.0: cvl_0_0 00:08:54.638 16:17:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:54.638 16:17:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:54.638 16:17:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:54.638 16:17:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:54.638 16:17:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:54.638 16:17:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:54.638 16:17:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:54.638 16:17:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:54.638 16:17:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:08:54.638 Found net devices under 0000:4b:00.1: cvl_0_1 00:08:54.638 16:17:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:54.638 16:17:21 
nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:54.638 16:17:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:08:54.638 16:17:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:54.638 16:17:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:54.638 16:17:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:54.638 16:17:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:54.638 16:17:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:54.638 16:17:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:54.638 16:17:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:54.638 16:17:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:54.638 16:17:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:54.638 16:17:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:54.638 16:17:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:54.638 16:17:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:54.638 16:17:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:54.638 16:17:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:54.638 16:17:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:54.638 16:17:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:54.638 16:17:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@254 -- # ip addr add 
10.0.0.1/24 dev cvl_0_1 00:08:54.638 16:17:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:54.638 16:17:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:54.638 16:17:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:54.638 16:17:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:54.638 16:17:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:54.638 16:17:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:54.638 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:54.638 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.638 ms 00:08:54.638 00:08:54.638 --- 10.0.0.2 ping statistics --- 00:08:54.638 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:54.638 rtt min/avg/max/mdev = 0.638/0.638/0.638/0.000 ms 00:08:54.638 16:17:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:54.638 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:54.638 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.309 ms 00:08:54.638 00:08:54.638 --- 10.0.0.1 ping statistics --- 00:08:54.638 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:54.638 rtt min/avg/max/mdev = 0.309/0.309/0.309/0.000 ms 00:08:54.638 16:17:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:54.638 16:17:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@422 -- # return 0 00:08:54.638 16:17:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:54.638 16:17:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:54.638 16:17:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:54.638 16:17:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:54.638 16:17:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:54.638 16:17:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:54.638 16:17:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:54.638 16:17:21 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:08:54.638 16:17:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:54.638 16:17:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@723 -- # xtrace_disable 00:08:54.638 16:17:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:54.638 16:17:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@481 -- # nvmfpid=2925757 00:08:54.638 16:17:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@482 -- # waitforlisten 2925757 00:08:54.638 16:17:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@830 -- # '[' -z 2925757 ']' 00:08:54.638 16:17:21 nvmf_tcp.nvmf_target_discovery -- 
common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:54.638 16:17:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@835 -- # local max_retries=100 00:08:54.638 16:17:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:54.638 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:54.638 16:17:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@839 -- # xtrace_disable 00:08:54.638 16:17:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:54.638 16:17:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:54.905 [2024-06-07 16:17:21.494141] Starting SPDK v24.09-pre git sha1 5a57befde / DPDK 24.03.0 initialization... 00:08:54.905 [2024-06-07 16:17:21.494233] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:54.905 EAL: No free 2048 kB hugepages reported on node 1 00:08:54.905 [2024-06-07 16:17:21.567423] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:54.905 [2024-06-07 16:17:21.642433] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:54.905 [2024-06-07 16:17:21.642472] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:54.905 [2024-06-07 16:17:21.642480] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:54.905 [2024-06-07 16:17:21.642486] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:08:54.905 [2024-06-07 16:17:21.642492] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:54.905 [2024-06-07 16:17:21.642675] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:08:54.905 [2024-06-07 16:17:21.642790] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 2 00:08:54.905 [2024-06-07 16:17:21.642946] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:08:54.905 [2024-06-07 16:17:21.642947] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 3 00:08:55.475 16:17:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:08:55.475 16:17:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@863 -- # return 0 00:08:55.475 16:17:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:55.475 16:17:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@729 -- # xtrace_disable 00:08:55.475 16:17:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:55.475 16:17:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:55.475 16:17:22 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:55.475 16:17:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:55.476 16:17:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:55.476 [2024-06-07 16:17:22.305904] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:55.476 16:17:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:55.476 16:17:22 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:08:55.476 16:17:22 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:55.476 16:17:22 
nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:08:55.476 16:17:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:55.476 16:17:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:55.738 Null1 00:08:55.738 16:17:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:55.738 16:17:22 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:55.738 16:17:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:55.738 16:17:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:55.738 16:17:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:55.738 16:17:22 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:08:55.738 16:17:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:55.738 16:17:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:55.738 16:17:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:55.738 16:17:22 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:55.738 16:17:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:55.738 16:17:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:55.738 [2024-06-07 16:17:22.362195] tcp.c: 982:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:55.738 16:17:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:55.738 16:17:22 
nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:55.738 16:17:22 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:08:55.738 16:17:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:55.738 16:17:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:55.738 Null2 00:08:55.738 16:17:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:55.738 16:17:22 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:08:55.738 16:17:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:55.738 16:17:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:55.738 16:17:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:55.738 16:17:22 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:08:55.738 16:17:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:55.738 16:17:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:55.738 16:17:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:55.738 16:17:22 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:08:55.738 16:17:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:55.738 16:17:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:55.738 16:17:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:55.738 16:17:22 nvmf_tcp.nvmf_target_discovery -- 
target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:55.739 16:17:22 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:08:55.739 16:17:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:55.739 16:17:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:55.739 Null3 00:08:55.739 16:17:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:55.739 16:17:22 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:08:55.739 16:17:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:55.739 16:17:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:55.739 16:17:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:55.739 16:17:22 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:08:55.739 16:17:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:55.739 16:17:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:55.739 16:17:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:55.739 16:17:22 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:08:55.739 16:17:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:55.739 16:17:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:55.739 16:17:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:55.739 16:17:22 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i 
in $(seq 1 4) 00:08:55.739 16:17:22 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:08:55.739 16:17:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:55.739 16:17:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:55.739 Null4 00:08:55.739 16:17:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:55.739 16:17:22 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:08:55.739 16:17:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:55.739 16:17:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:55.739 16:17:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:55.739 16:17:22 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:08:55.739 16:17:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:55.739 16:17:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:55.739 16:17:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:55.739 16:17:22 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:08:55.739 16:17:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:55.739 16:17:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:55.739 16:17:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:55.739 16:17:22 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener 
discovery -t tcp -a 10.0.0.2 -s 4420 00:08:55.739 16:17:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:55.739 16:17:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:55.739 16:17:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:55.739 16:17:22 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:08:55.739 16:17:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:55.739 16:17:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:55.739 16:17:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:55.739 16:17:22 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 4420 00:08:56.001 00:08:56.001 Discovery Log Number of Records 6, Generation counter 6 00:08:56.001 =====Discovery Log Entry 0====== 00:08:56.001 trtype: tcp 00:08:56.001 adrfam: ipv4 00:08:56.001 subtype: current discovery subsystem 00:08:56.001 treq: not required 00:08:56.001 portid: 0 00:08:56.001 trsvcid: 4420 00:08:56.001 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:08:56.001 traddr: 10.0.0.2 00:08:56.001 eflags: explicit discovery connections, duplicate discovery information 00:08:56.001 sectype: none 00:08:56.001 =====Discovery Log Entry 1====== 00:08:56.001 trtype: tcp 00:08:56.001 adrfam: ipv4 00:08:56.001 subtype: nvme subsystem 00:08:56.001 treq: not required 00:08:56.001 portid: 0 00:08:56.001 trsvcid: 4420 00:08:56.001 subnqn: nqn.2016-06.io.spdk:cnode1 00:08:56.001 traddr: 10.0.0.2 00:08:56.001 eflags: none 00:08:56.001 sectype: none 00:08:56.001 =====Discovery Log Entry 2====== 00:08:56.001 trtype: tcp 00:08:56.001 adrfam: 
ipv4 00:08:56.001 subtype: nvme subsystem 00:08:56.001 treq: not required 00:08:56.001 portid: 0 00:08:56.001 trsvcid: 4420 00:08:56.001 subnqn: nqn.2016-06.io.spdk:cnode2 00:08:56.001 traddr: 10.0.0.2 00:08:56.001 eflags: none 00:08:56.001 sectype: none 00:08:56.001 =====Discovery Log Entry 3====== 00:08:56.001 trtype: tcp 00:08:56.001 adrfam: ipv4 00:08:56.001 subtype: nvme subsystem 00:08:56.001 treq: not required 00:08:56.001 portid: 0 00:08:56.001 trsvcid: 4420 00:08:56.001 subnqn: nqn.2016-06.io.spdk:cnode3 00:08:56.001 traddr: 10.0.0.2 00:08:56.001 eflags: none 00:08:56.001 sectype: none 00:08:56.001 =====Discovery Log Entry 4====== 00:08:56.001 trtype: tcp 00:08:56.001 adrfam: ipv4 00:08:56.001 subtype: nvme subsystem 00:08:56.001 treq: not required 00:08:56.001 portid: 0 00:08:56.001 trsvcid: 4420 00:08:56.001 subnqn: nqn.2016-06.io.spdk:cnode4 00:08:56.001 traddr: 10.0.0.2 00:08:56.001 eflags: none 00:08:56.001 sectype: none 00:08:56.001 =====Discovery Log Entry 5====== 00:08:56.001 trtype: tcp 00:08:56.001 adrfam: ipv4 00:08:56.001 subtype: discovery subsystem referral 00:08:56.001 treq: not required 00:08:56.001 portid: 0 00:08:56.001 trsvcid: 4430 00:08:56.001 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:08:56.001 traddr: 10.0.0.2 00:08:56.001 eflags: none 00:08:56.001 sectype: none 00:08:56.001 16:17:22 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:08:56.001 Perform nvmf subsystem discovery via RPC 00:08:56.001 16:17:22 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:08:56.001 16:17:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:56.001 16:17:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:56.001 [ 00:08:56.001 { 00:08:56.001 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:08:56.001 "subtype": "Discovery", 00:08:56.001 "listen_addresses": [ 00:08:56.001 { 
00:08:56.001 "trtype": "TCP", 00:08:56.001 "adrfam": "IPv4", 00:08:56.001 "traddr": "10.0.0.2", 00:08:56.001 "trsvcid": "4420" 00:08:56.001 } 00:08:56.001 ], 00:08:56.001 "allow_any_host": true, 00:08:56.001 "hosts": [] 00:08:56.001 }, 00:08:56.001 { 00:08:56.001 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:08:56.001 "subtype": "NVMe", 00:08:56.001 "listen_addresses": [ 00:08:56.001 { 00:08:56.001 "trtype": "TCP", 00:08:56.001 "adrfam": "IPv4", 00:08:56.001 "traddr": "10.0.0.2", 00:08:56.001 "trsvcid": "4420" 00:08:56.001 } 00:08:56.001 ], 00:08:56.001 "allow_any_host": true, 00:08:56.001 "hosts": [], 00:08:56.001 "serial_number": "SPDK00000000000001", 00:08:56.001 "model_number": "SPDK bdev Controller", 00:08:56.001 "max_namespaces": 32, 00:08:56.001 "min_cntlid": 1, 00:08:56.001 "max_cntlid": 65519, 00:08:56.001 "namespaces": [ 00:08:56.001 { 00:08:56.001 "nsid": 1, 00:08:56.001 "bdev_name": "Null1", 00:08:56.001 "name": "Null1", 00:08:56.001 "nguid": "2C8FA3EF0C3445619BEBB4B141D512F7", 00:08:56.001 "uuid": "2c8fa3ef-0c34-4561-9beb-b4b141d512f7" 00:08:56.001 } 00:08:56.001 ] 00:08:56.001 }, 00:08:56.001 { 00:08:56.001 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:08:56.001 "subtype": "NVMe", 00:08:56.001 "listen_addresses": [ 00:08:56.001 { 00:08:56.001 "trtype": "TCP", 00:08:56.001 "adrfam": "IPv4", 00:08:56.001 "traddr": "10.0.0.2", 00:08:56.001 "trsvcid": "4420" 00:08:56.001 } 00:08:56.001 ], 00:08:56.001 "allow_any_host": true, 00:08:56.001 "hosts": [], 00:08:56.001 "serial_number": "SPDK00000000000002", 00:08:56.001 "model_number": "SPDK bdev Controller", 00:08:56.001 "max_namespaces": 32, 00:08:56.001 "min_cntlid": 1, 00:08:56.001 "max_cntlid": 65519, 00:08:56.001 "namespaces": [ 00:08:56.001 { 00:08:56.001 "nsid": 1, 00:08:56.001 "bdev_name": "Null2", 00:08:56.001 "name": "Null2", 00:08:56.001 "nguid": "C11E34C562384182896A94617099FC6D", 00:08:56.001 "uuid": "c11e34c5-6238-4182-896a-94617099fc6d" 00:08:56.001 } 00:08:56.001 ] 00:08:56.001 }, 00:08:56.001 { 
00:08:56.001 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:08:56.001 "subtype": "NVMe", 00:08:56.001 "listen_addresses": [ 00:08:56.001 { 00:08:56.001 "trtype": "TCP", 00:08:56.001 "adrfam": "IPv4", 00:08:56.001 "traddr": "10.0.0.2", 00:08:56.001 "trsvcid": "4420" 00:08:56.001 } 00:08:56.001 ], 00:08:56.001 "allow_any_host": true, 00:08:56.001 "hosts": [], 00:08:56.001 "serial_number": "SPDK00000000000003", 00:08:56.001 "model_number": "SPDK bdev Controller", 00:08:56.001 "max_namespaces": 32, 00:08:56.001 "min_cntlid": 1, 00:08:56.001 "max_cntlid": 65519, 00:08:56.001 "namespaces": [ 00:08:56.001 { 00:08:56.001 "nsid": 1, 00:08:56.001 "bdev_name": "Null3", 00:08:56.001 "name": "Null3", 00:08:56.001 "nguid": "C1D1862DCB304A0DAA6A9F52B4C8597B", 00:08:56.002 "uuid": "c1d1862d-cb30-4a0d-aa6a-9f52b4c8597b" 00:08:56.002 } 00:08:56.002 ] 00:08:56.002 }, 00:08:56.002 { 00:08:56.002 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:08:56.002 "subtype": "NVMe", 00:08:56.002 "listen_addresses": [ 00:08:56.002 { 00:08:56.002 "trtype": "TCP", 00:08:56.002 "adrfam": "IPv4", 00:08:56.002 "traddr": "10.0.0.2", 00:08:56.002 "trsvcid": "4420" 00:08:56.002 } 00:08:56.002 ], 00:08:56.002 "allow_any_host": true, 00:08:56.002 "hosts": [], 00:08:56.002 "serial_number": "SPDK00000000000004", 00:08:56.002 "model_number": "SPDK bdev Controller", 00:08:56.002 "max_namespaces": 32, 00:08:56.002 "min_cntlid": 1, 00:08:56.002 "max_cntlid": 65519, 00:08:56.002 "namespaces": [ 00:08:56.002 { 00:08:56.002 "nsid": 1, 00:08:56.002 "bdev_name": "Null4", 00:08:56.002 "name": "Null4", 00:08:56.002 "nguid": "1CD70EDFAE8045249A933F1C0D2C1BFE", 00:08:56.002 "uuid": "1cd70edf-ae80-4524-9a93-3f1c0d2c1bfe" 00:08:56.002 } 00:08:56.002 ] 00:08:56.002 } 00:08:56.002 ] 00:08:56.002 16:17:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:56.002 16:17:22 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:08:56.002 16:17:22 nvmf_tcp.nvmf_target_discovery -- 
target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:56.002 16:17:22 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:56.002 16:17:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:56.002 16:17:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:56.002 16:17:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:56.002 16:17:22 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:08:56.002 16:17:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:56.002 16:17:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:56.002 16:17:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:56.002 16:17:22 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:56.002 16:17:22 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:08:56.002 16:17:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:56.002 16:17:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:56.002 16:17:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:56.002 16:17:22 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:08:56.002 16:17:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:56.002 16:17:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:56.002 16:17:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:56.002 16:17:22 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 
00:08:56.002 16:17:22 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:08:56.002 16:17:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:56.002 16:17:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:56.002 16:17:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:56.002 16:17:22 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:08:56.002 16:17:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:56.002 16:17:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:56.002 16:17:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:56.002 16:17:22 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:56.002 16:17:22 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:08:56.002 16:17:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:56.002 16:17:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:56.263 16:17:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:56.263 16:17:22 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:08:56.263 16:17:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:56.263 16:17:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:56.263 16:17:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:56.263 16:17:22 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 
00:08:56.263 16:17:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:56.263 16:17:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:56.263 16:17:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:56.263 16:17:22 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:08:56.263 16:17:22 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:08:56.263 16:17:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:56.263 16:17:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:56.263 16:17:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:56.263 16:17:22 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:08:56.263 16:17:22 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:08:56.263 16:17:22 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:08:56.263 16:17:22 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:08:56.263 16:17:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:56.263 16:17:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@117 -- # sync 00:08:56.263 16:17:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:56.263 16:17:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@120 -- # set +e 00:08:56.263 16:17:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:56.263 16:17:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:56.263 rmmod nvme_tcp 00:08:56.263 rmmod nvme_fabrics 00:08:56.263 rmmod nvme_keyring 00:08:56.263 16:17:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 
00:08:56.263 16:17:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@124 -- # set -e 00:08:56.263 16:17:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@125 -- # return 0 00:08:56.263 16:17:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@489 -- # '[' -n 2925757 ']' 00:08:56.263 16:17:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@490 -- # killprocess 2925757 00:08:56.264 16:17:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@949 -- # '[' -z 2925757 ']' 00:08:56.264 16:17:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@953 -- # kill -0 2925757 00:08:56.264 16:17:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@954 -- # uname 00:08:56.264 16:17:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:08:56.264 16:17:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2925757 00:08:56.264 16:17:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:08:56.264 16:17:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:08:56.264 16:17:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2925757' 00:08:56.264 killing process with pid 2925757 00:08:56.264 16:17:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@968 -- # kill 2925757 00:08:56.264 16:17:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@973 -- # wait 2925757 00:08:56.525 16:17:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:56.525 16:17:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:56.525 16:17:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:56.525 16:17:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:56.525 16:17:23 
nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:56.525 16:17:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:56.525 16:17:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:56.525 16:17:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:58.439 16:17:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:58.439 00:08:58.439 real 0m10.795s 00:08:58.439 user 0m8.288s 00:08:58.439 sys 0m5.398s 00:08:58.439 16:17:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1125 -- # xtrace_disable 00:08:58.439 16:17:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:58.439 ************************************ 00:08:58.439 END TEST nvmf_target_discovery 00:08:58.439 ************************************ 00:08:58.701 16:17:25 nvmf_tcp -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:08:58.701 16:17:25 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:08:58.701 16:17:25 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:08:58.701 16:17:25 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:58.701 ************************************ 00:08:58.701 START TEST nvmf_referrals 00:08:58.701 ************************************ 00:08:58.701 16:17:25 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:08:58.701 * Looking for test storage... 
00:08:58.701 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:58.701 16:17:25 nvmf_tcp.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:58.701 16:17:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:08:58.701 16:17:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:58.701 16:17:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:58.701 16:17:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:58.701 16:17:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:58.701 16:17:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:58.701 16:17:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:58.701 16:17:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:58.701 16:17:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:58.701 16:17:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:58.701 16:17:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:58.701 16:17:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:58.701 16:17:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:58.701 16:17:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:58.701 16:17:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:58.701 16:17:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:58.701 16:17:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
00:08:58.701 16:17:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:58.701 16:17:25 nvmf_tcp.nvmf_referrals -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:58.701 16:17:25 nvmf_tcp.nvmf_referrals -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:58.701 16:17:25 nvmf_tcp.nvmf_referrals -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:58.701 16:17:25 nvmf_tcp.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:58.701 16:17:25 nvmf_tcp.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:58.701 16:17:25 nvmf_tcp.nvmf_referrals -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:58.701 16:17:25 nvmf_tcp.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:08:58.701 16:17:25 nvmf_tcp.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:58.701 16:17:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@47 -- # : 0 00:08:58.701 16:17:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:58.701 16:17:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:58.701 16:17:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:58.701 16:17:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:58.701 16:17:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:58.701 16:17:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:58.701 16:17:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:58.701 
16:17:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:58.701 16:17:25 nvmf_tcp.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:08:58.701 16:17:25 nvmf_tcp.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:08:58.701 16:17:25 nvmf_tcp.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:08:58.701 16:17:25 nvmf_tcp.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:08:58.701 16:17:25 nvmf_tcp.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:08:58.701 16:17:25 nvmf_tcp.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:08:58.701 16:17:25 nvmf_tcp.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:08:58.701 16:17:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:58.701 16:17:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:58.701 16:17:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:58.701 16:17:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:58.701 16:17:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:58.701 16:17:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:58.701 16:17:25 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:58.701 16:17:25 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:58.701 16:17:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:58.701 16:17:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:58.701 16:17:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@285 -- # xtrace_disable 00:08:58.702 16:17:25 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 
00:09:06.852 16:17:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:06.852 16:17:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # pci_devs=() 00:09:06.852 16:17:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:06.852 16:17:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:06.852 16:17:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:06.852 16:17:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:06.852 16:17:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:06.852 16:17:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # net_devs=() 00:09:06.852 16:17:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:06.852 16:17:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # e810=() 00:09:06.852 16:17:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # local -ga e810 00:09:06.852 16:17:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # x722=() 00:09:06.852 16:17:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # local -ga x722 00:09:06.852 16:17:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # mlx=() 00:09:06.852 16:17:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # local -ga mlx 00:09:06.852 16:17:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:06.852 16:17:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:06.852 16:17:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:06.852 16:17:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:06.852 16:17:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:06.852 16:17:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@310 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:06.852 16:17:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:06.852 16:17:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:06.852 16:17:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:06.852 16:17:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:06.852 16:17:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:06.852 16:17:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:06.852 16:17:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:06.852 16:17:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:06.852 16:17:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:06.852 16:17:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:06.852 16:17:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:06.852 16:17:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:06.852 16:17:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:09:06.852 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:09:06.852 16:17:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:06.852 16:17:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:06.852 16:17:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:06.852 16:17:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:06.852 16:17:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:06.852 16:17:32 nvmf_tcp.nvmf_referrals -- 
nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:06.852 16:17:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:09:06.852 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:09:06.852 16:17:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:06.852 16:17:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:06.852 16:17:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:06.852 16:17:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:06.852 16:17:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:06.852 16:17:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:06.852 16:17:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:06.852 16:17:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:06.852 16:17:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:06.852 16:17:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:06.852 16:17:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:06.852 16:17:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:06.852 16:17:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:06.852 16:17:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:06.852 16:17:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:06.852 16:17:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:09:06.852 Found net devices under 0000:4b:00.0: cvl_0_0 00:09:06.852 16:17:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:06.852 16:17:32 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:06.852 16:17:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:06.852 16:17:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:06.852 16:17:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:06.852 16:17:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:06.852 16:17:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:06.852 16:17:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:06.852 16:17:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:09:06.852 Found net devices under 0000:4b:00.1: cvl_0_1 00:09:06.852 16:17:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:06.852 16:17:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:06.852 16:17:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # is_hw=yes 00:09:06.852 16:17:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:06.852 16:17:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:06.852 16:17:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:06.852 16:17:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:06.852 16:17:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:06.852 16:17:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:06.852 16:17:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:06.852 16:17:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:06.852 16:17:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@237 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:06.852 16:17:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:06.852 16:17:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:06.853 16:17:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:06.853 16:17:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:06.853 16:17:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:06.853 16:17:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:06.853 16:17:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:06.853 16:17:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:06.853 16:17:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:06.853 16:17:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:06.853 16:17:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:06.853 16:17:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:06.853 16:17:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:06.853 16:17:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:06.853 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:06.853 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.534 ms 00:09:06.853 00:09:06.853 --- 10.0.0.2 ping statistics --- 00:09:06.853 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:06.853 rtt min/avg/max/mdev = 0.534/0.534/0.534/0.000 ms 00:09:06.853 16:17:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:06.853 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:06.853 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.364 ms 00:09:06.853 00:09:06.853 --- 10.0.0.1 ping statistics --- 00:09:06.853 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:06.853 rtt min/avg/max/mdev = 0.364/0.364/0.364/0.000 ms 00:09:06.853 16:17:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:06.853 16:17:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@422 -- # return 0 00:09:06.853 16:17:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:06.853 16:17:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:06.853 16:17:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:06.853 16:17:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:06.853 16:17:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:06.853 16:17:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:06.853 16:17:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:06.853 16:17:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:09:06.853 16:17:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:06.853 16:17:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@723 -- # xtrace_disable 00:09:06.853 16:17:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:06.853 16:17:32 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@481 -- # nvmfpid=2930388 00:09:06.853 16:17:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@482 -- # waitforlisten 2930388 00:09:06.853 16:17:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:06.853 16:17:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@830 -- # '[' -z 2930388 ']' 00:09:06.853 16:17:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:06.853 16:17:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@835 -- # local max_retries=100 00:09:06.853 16:17:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:06.853 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:06.853 16:17:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@839 -- # xtrace_disable 00:09:06.853 16:17:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:06.853 [2024-06-07 16:17:32.653377] Starting SPDK v24.09-pre git sha1 5a57befde / DPDK 24.03.0 initialization... 00:09:06.853 [2024-06-07 16:17:32.653455] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:06.853 EAL: No free 2048 kB hugepages reported on node 1 00:09:06.853 [2024-06-07 16:17:32.726810] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:06.853 [2024-06-07 16:17:32.802275] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:06.853 [2024-06-07 16:17:32.802316] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:09:06.853 [2024-06-07 16:17:32.802324] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:06.853 [2024-06-07 16:17:32.802331] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:06.853 [2024-06-07 16:17:32.802337] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:06.853 [2024-06-07 16:17:32.802509] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:09:06.853 [2024-06-07 16:17:32.802734] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 2 00:09:06.853 [2024-06-07 16:17:32.802891] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 3 00:09:06.853 [2024-06-07 16:17:32.802892] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:09:06.853 16:17:33 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:09:06.853 16:17:33 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@863 -- # return 0 00:09:06.853 16:17:33 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:06.853 16:17:33 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@729 -- # xtrace_disable 00:09:06.853 16:17:33 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:06.853 16:17:33 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:06.853 16:17:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:06.853 16:17:33 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:06.853 16:17:33 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:06.853 [2024-06-07 16:17:33.484056] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:06.853 16:17:33 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:06.853 16:17:33 
nvmf_tcp.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:09:06.853 16:17:33 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:06.853 16:17:33 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:06.853 [2024-06-07 16:17:33.500270] tcp.c: 982:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:09:06.853 16:17:33 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:06.853 16:17:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:09:06.853 16:17:33 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:06.853 16:17:33 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:06.853 16:17:33 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:06.853 16:17:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:09:06.853 16:17:33 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:06.853 16:17:33 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:06.853 16:17:33 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:06.853 16:17:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:09:06.853 16:17:33 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:06.853 16:17:33 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:06.853 16:17:33 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:06.853 16:17:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:09:06.853 16:17:33 nvmf_tcp.nvmf_referrals -- 
target/referrals.sh@48 -- # jq length 00:09:06.853 16:17:33 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:06.853 16:17:33 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:06.853 16:17:33 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:06.853 16:17:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:09:06.853 16:17:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:09:06.853 16:17:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:09:06.853 16:17:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:09:06.853 16:17:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:09:06.853 16:17:33 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:06.853 16:17:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:09:06.853 16:17:33 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:06.853 16:17:33 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:06.853 16:17:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:09:06.853 16:17:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:09:06.853 16:17:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:09:06.853 16:17:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:09:06.853 16:17:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:09:06.853 16:17:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t 
tcp -a 10.0.0.2 -s 8009 -o json 00:09:06.853 16:17:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:09:06.853 16:17:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:09:07.116 16:17:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:09:07.116 16:17:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:09:07.116 16:17:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:09:07.116 16:17:33 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:07.116 16:17:33 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:07.116 16:17:33 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:07.116 16:17:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:09:07.116 16:17:33 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:07.116 16:17:33 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:07.116 16:17:33 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:07.116 16:17:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:09:07.116 16:17:33 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:07.116 16:17:33 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:07.116 16:17:33 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:07.116 16:17:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:09:07.116 16:17:33 
nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:09:07.116 16:17:33 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:07.116 16:17:33 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:07.116 16:17:33 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:07.116 16:17:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:09:07.116 16:17:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:09:07.116 16:17:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:09:07.116 16:17:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:09:07.116 16:17:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:09:07.116 16:17:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:09:07.116 16:17:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:09:07.377 16:17:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:09:07.377 16:17:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:09:07.377 16:17:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:09:07.377 16:17:33 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:07.377 16:17:33 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:07.377 16:17:34 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:07.377 16:17:34 nvmf_tcp.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n 
nqn.2016-06.io.spdk:cnode1 00:09:07.377 16:17:34 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:07.377 16:17:34 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:07.377 16:17:34 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:07.377 16:17:34 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:09:07.377 16:17:34 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:09:07.377 16:17:34 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:09:07.377 16:17:34 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:09:07.377 16:17:34 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:07.377 16:17:34 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:09:07.377 16:17:34 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:07.377 16:17:34 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:07.377 16:17:34 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:09:07.377 16:17:34 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:09:07.377 16:17:34 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:09:07.377 16:17:34 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:09:07.377 16:17:34 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:09:07.377 16:17:34 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:09:07.377 16:17:34 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | 
select(.subtype != "current discovery subsystem").traddr' 00:09:07.377 16:17:34 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:09:07.639 16:17:34 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:09:07.639 16:17:34 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:09:07.639 16:17:34 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:09:07.639 16:17:34 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:09:07.639 16:17:34 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:09:07.639 16:17:34 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:09:07.639 16:17:34 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:09:07.639 16:17:34 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:09:07.639 16:17:34 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:09:07.639 16:17:34 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:09:07.639 16:17:34 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:09:07.639 16:17:34 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:09:07.639 16:17:34 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 
00:09:07.639 16:17:34 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:09:07.639 16:17:34 nvmf_tcp.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:09:07.639 16:17:34 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:07.639 16:17:34 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:07.639 16:17:34 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:07.639 16:17:34 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:09:07.639 16:17:34 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:09:07.639 16:17:34 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:09:07.639 16:17:34 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:09:07.639 16:17:34 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:07.639 16:17:34 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:09:07.639 16:17:34 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:07.639 16:17:34 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:07.639 16:17:34 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:09:07.639 16:17:34 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:09:07.639 16:17:34 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:09:07.639 16:17:34 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:09:07.901 16:17:34 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:09:07.901 16:17:34 nvmf_tcp.nvmf_referrals -- 
target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:09:07.901 16:17:34 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:09:07.901 16:17:34 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:09:07.901 16:17:34 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:09:07.901 16:17:34 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:09:07.901 16:17:34 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:09:07.901 16:17:34 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:09:07.901 16:17:34 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:09:07.901 16:17:34 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:09:07.901 16:17:34 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:09:08.163 16:17:34 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:09:08.163 16:17:34 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:09:08.163 16:17:34 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:09:08.163 16:17:34 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:09:08.163 16:17:34 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t 
tcp -a 10.0.0.2 -s 8009 -o json 00:09:08.163 16:17:34 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:09:08.163 16:17:34 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:09:08.163 16:17:34 nvmf_tcp.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:09:08.163 16:17:34 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:08.163 16:17:34 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:08.163 16:17:34 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:08.163 16:17:34 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:09:08.163 16:17:34 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:09:08.163 16:17:34 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:08.163 16:17:34 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:08.163 16:17:34 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:08.163 16:17:35 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:09:08.163 16:17:35 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:09:08.163 16:17:35 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:09:08.163 16:17:35 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:09:08.163 16:17:35 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:09:08.163 16:17:35 
nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:09:08.163 16:17:35 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:09:08.425 16:17:35 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:09:08.425 16:17:35 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:09:08.425 16:17:35 nvmf_tcp.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:09:08.425 16:17:35 nvmf_tcp.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:09:08.425 16:17:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:08.425 16:17:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@117 -- # sync 00:09:08.425 16:17:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:08.425 16:17:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@120 -- # set +e 00:09:08.425 16:17:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:08.425 16:17:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:08.426 rmmod nvme_tcp 00:09:08.426 rmmod nvme_fabrics 00:09:08.426 rmmod nvme_keyring 00:09:08.426 16:17:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:08.426 16:17:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@124 -- # set -e 00:09:08.426 16:17:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@125 -- # return 0 00:09:08.426 16:17:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@489 -- # '[' -n 2930388 ']' 00:09:08.426 16:17:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@490 -- # killprocess 2930388 00:09:08.426 16:17:35 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@949 -- # '[' -z 2930388 ']' 00:09:08.426 16:17:35 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@953 -- # kill -0 2930388 00:09:08.426 16:17:35 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@954 -- # uname 00:09:08.426 16:17:35 nvmf_tcp.nvmf_referrals -- 
common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:09:08.426 16:17:35 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2930388 00:09:08.686 16:17:35 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:09:08.686 16:17:35 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:09:08.687 16:17:35 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2930388' 00:09:08.687 killing process with pid 2930388 00:09:08.687 16:17:35 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@968 -- # kill 2930388 00:09:08.687 16:17:35 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@973 -- # wait 2930388 00:09:08.687 16:17:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:08.687 16:17:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:08.687 16:17:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:08.687 16:17:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:08.687 16:17:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:08.687 16:17:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:08.687 16:17:35 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:08.687 16:17:35 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:11.273 16:17:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:11.273 00:09:11.273 real 0m12.149s 00:09:11.273 user 0m13.380s 00:09:11.273 sys 0m5.956s 00:09:11.273 16:17:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1125 -- # xtrace_disable 00:09:11.273 16:17:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:11.273 ************************************ 
00:09:11.273 END TEST nvmf_referrals 00:09:11.273 ************************************ 00:09:11.273 16:17:37 nvmf_tcp -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:09:11.273 16:17:37 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:09:11.273 16:17:37 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:09:11.273 16:17:37 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:11.273 ************************************ 00:09:11.273 START TEST nvmf_connect_disconnect 00:09:11.273 ************************************ 00:09:11.273 16:17:37 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:09:11.273 * Looking for test storage... 00:09:11.273 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:11.273 16:17:37 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:11.273 16:17:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:09:11.273 16:17:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:11.273 16:17:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:11.273 16:17:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:11.273 16:17:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:11.273 16:17:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:11.273 16:17:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:11.273 16:17:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # 
NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:11.273 16:17:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:11.273 16:17:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:11.273 16:17:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:11.273 16:17:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:11.273 16:17:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:11.273 16:17:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:11.273 16:17:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:11.273 16:17:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:11.273 16:17:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:11.273 16:17:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:11.273 16:17:37 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:11.274 16:17:37 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:11.274 16:17:37 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:11.274 16:17:37 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:11.274 16:17:37 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:11.274 16:17:37 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:11.274 16:17:37 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:09:11.274 16:17:37 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:11.274 16:17:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@47 -- # : 0 00:09:11.274 16:17:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:11.274 16:17:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:11.274 16:17:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:11.274 16:17:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:11.274 16:17:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:11.274 16:17:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:11.274 16:17:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:11.274 16:17:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:11.274 16:17:37 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:11.274 16:17:37 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:11.274 16:17:37 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:09:11.274 16:17:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:11.274 16:17:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM 
EXIT 00:09:11.274 16:17:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:11.274 16:17:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:11.274 16:17:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:11.274 16:17:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:11.274 16:17:37 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:11.274 16:17:37 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:11.274 16:17:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:11.274 16:17:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:11.274 16:17:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:09:11.274 16:17:37 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:17.872 16:17:44 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:17.872 16:17:44 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:09:17.872 16:17:44 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:17.872 16:17:44 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:17.872 16:17:44 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:17.872 16:17:44 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:17.872 16:17:44 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:17.872 16:17:44 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:09:17.872 16:17:44 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 
00:09:17.872 16:17:44 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # e810=() 00:09:17.872 16:17:44 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:09:17.872 16:17:44 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # x722=() 00:09:17.872 16:17:44 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:09:17.872 16:17:44 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:09:17.872 16:17:44 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:09:17.872 16:17:44 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:17.872 16:17:44 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:17.872 16:17:44 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:17.872 16:17:44 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:17.872 16:17:44 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:17.872 16:17:44 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:17.872 16:17:44 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:17.872 16:17:44 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:17.872 16:17:44 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:17.872 16:17:44 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:17.872 16:17:44 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:17.872 16:17:44 nvmf_tcp.nvmf_connect_disconnect -- 
nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:17.872 16:17:44 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:17.872 16:17:44 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:17.872 16:17:44 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:17.872 16:17:44 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:17.872 16:17:44 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:17.872 16:17:44 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:17.872 16:17:44 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:09:17.872 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:09:17.872 16:17:44 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:17.872 16:17:44 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:17.872 16:17:44 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:17.872 16:17:44 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:17.872 16:17:44 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:17.872 16:17:44 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:17.872 16:17:44 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:09:17.872 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:09:17.872 16:17:44 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:17.872 16:17:44 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:17.872 16:17:44 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:17.872 16:17:44 
nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:17.872 16:17:44 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:17.872 16:17:44 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:17.872 16:17:44 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:17.872 16:17:44 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:17.872 16:17:44 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:17.872 16:17:44 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:17.872 16:17:44 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:17.872 16:17:44 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:17.872 16:17:44 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:17.872 16:17:44 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:17.872 16:17:44 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:17.872 16:17:44 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:09:17.872 Found net devices under 0000:4b:00.0: cvl_0_0 00:09:17.872 16:17:44 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:17.872 16:17:44 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:17.872 16:17:44 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:17.872 16:17:44 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:17.872 16:17:44 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev 
in "${!pci_net_devs[@]}" 00:09:17.872 16:17:44 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:17.872 16:17:44 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:17.872 16:17:44 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:17.872 16:17:44 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:09:17.872 Found net devices under 0000:4b:00.1: cvl_0_1 00:09:17.872 16:17:44 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:17.872 16:17:44 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:17.872 16:17:44 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:09:17.872 16:17:44 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:17.872 16:17:44 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:17.872 16:17:44 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:17.872 16:17:44 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:17.872 16:17:44 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:17.872 16:17:44 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:17.872 16:17:44 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:17.872 16:17:44 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:17.872 16:17:44 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:17.872 16:17:44 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:17.872 16:17:44 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@242 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:17.872 16:17:44 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:17.872 16:17:44 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:17.872 16:17:44 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:17.872 16:17:44 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:17.872 16:17:44 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:17.872 16:17:44 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:17.872 16:17:44 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:17.872 16:17:44 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:17.872 16:17:44 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:18.133 16:17:44 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:18.133 16:17:44 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:18.134 16:17:44 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:18.134 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:18.134 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.414 ms 00:09:18.134 00:09:18.134 --- 10.0.0.2 ping statistics --- 00:09:18.134 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:18.134 rtt min/avg/max/mdev = 0.414/0.414/0.414/0.000 ms 00:09:18.134 16:17:44 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:18.134 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:18.134 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.468 ms 00:09:18.134 00:09:18.134 --- 10.0.0.1 ping statistics --- 00:09:18.134 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:18.134 rtt min/avg/max/mdev = 0.468/0.468/0.468/0.000 ms 00:09:18.134 16:17:44 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:18.134 16:17:44 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # return 0 00:09:18.134 16:17:44 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:18.134 16:17:44 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:18.134 16:17:44 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:18.134 16:17:44 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:18.134 16:17:44 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:18.134 16:17:44 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:18.134 16:17:44 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:18.134 16:17:44 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:09:18.134 16:17:44 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:18.134 16:17:44 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@723 -- # 
xtrace_disable 00:09:18.134 16:17:44 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:18.134 16:17:44 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # nvmfpid=2935165 00:09:18.134 16:17:44 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # waitforlisten 2935165 00:09:18.134 16:17:44 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:18.134 16:17:44 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@830 -- # '[' -z 2935165 ']' 00:09:18.134 16:17:44 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:18.134 16:17:44 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # local max_retries=100 00:09:18.134 16:17:44 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:18.134 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:18.134 16:17:44 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@839 -- # xtrace_disable 00:09:18.134 16:17:44 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:18.134 [2024-06-07 16:17:44.889342] Starting SPDK v24.09-pre git sha1 5a57befde / DPDK 24.03.0 initialization... 
00:09:18.134 [2024-06-07 16:17:44.889413] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:18.134 EAL: No free 2048 kB hugepages reported on node 1 00:09:18.134 [2024-06-07 16:17:44.960639] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:18.394 [2024-06-07 16:17:45.035222] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:18.394 [2024-06-07 16:17:45.035259] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:18.394 [2024-06-07 16:17:45.035271] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:18.394 [2024-06-07 16:17:45.035278] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:18.394 [2024-06-07 16:17:45.035283] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:09:18.394 [2024-06-07 16:17:45.035441] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:09:18.394 [2024-06-07 16:17:45.035537] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 2 00:09:18.394 [2024-06-07 16:17:45.035679] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:09:18.394 [2024-06-07 16:17:45.035680] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 3 00:09:18.965 16:17:45 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:09:18.965 16:17:45 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@863 -- # return 0 00:09:18.965 16:17:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:18.965 16:17:45 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@729 -- # xtrace_disable 00:09:18.965 16:17:45 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:18.965 16:17:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:18.965 16:17:45 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:09:18.965 16:17:45 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:18.965 16:17:45 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:18.965 [2024-06-07 16:17:45.721049] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:18.965 16:17:45 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:18.965 16:17:45 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:09:18.965 16:17:45 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:18.965 16:17:45 nvmf_tcp.nvmf_connect_disconnect -- 
common/autotest_common.sh@10 -- # set +x 00:09:18.965 16:17:45 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:18.965 16:17:45 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:09:18.965 16:17:45 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:18.965 16:17:45 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:18.965 16:17:45 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:18.965 16:17:45 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:18.965 16:17:45 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:18.965 16:17:45 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:18.965 16:17:45 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:18.965 16:17:45 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:18.965 16:17:45 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:18.965 16:17:45 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:18.965 16:17:45 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:18.965 [2024-06-07 16:17:45.780410] tcp.c: 982:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:18.965 16:17:45 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:18.965 16:17:45 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:09:18.965 16:17:45 
nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:09:18.965 16:17:45 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:09:23.170 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:26.473 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:30.680 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:33.980 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:37.281 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:37.281 16:18:03 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:09:37.281 16:18:03 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:09:37.281 16:18:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:37.281 16:18:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # sync 00:09:37.281 16:18:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:37.281 16:18:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@120 -- # set +e 00:09:37.281 16:18:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:37.281 16:18:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:37.281 rmmod nvme_tcp 00:09:37.281 rmmod nvme_fabrics 00:09:37.281 rmmod nvme_keyring 00:09:37.281 16:18:04 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:37.281 16:18:04 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set -e 00:09:37.281 16:18:04 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # return 0 00:09:37.281 16:18:04 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@489 -- # '[' -n 2935165 ']' 00:09:37.281 16:18:04 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # killprocess 2935165 00:09:37.281 16:18:04 
nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@949 -- # '[' -z 2935165 ']' 00:09:37.281 16:18:04 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # kill -0 2935165 00:09:37.281 16:18:04 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # uname 00:09:37.281 16:18:04 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:09:37.281 16:18:04 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2935165 00:09:37.281 16:18:04 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:09:37.281 16:18:04 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:09:37.281 16:18:04 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2935165' 00:09:37.281 killing process with pid 2935165 00:09:37.281 16:18:04 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@968 -- # kill 2935165 00:09:37.281 16:18:04 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@973 -- # wait 2935165 00:09:37.542 16:18:04 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:37.542 16:18:04 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:37.542 16:18:04 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:37.542 16:18:04 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:37.542 16:18:04 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:37.542 16:18:04 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:37.542 16:18:04 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:37.542 16:18:04 nvmf_tcp.nvmf_connect_disconnect -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:39.457 16:18:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:39.719 00:09:39.719 real 0m28.742s 00:09:39.719 user 1m18.461s 00:09:39.719 sys 0m6.581s 00:09:39.719 16:18:06 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1125 -- # xtrace_disable 00:09:39.719 16:18:06 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:39.719 ************************************ 00:09:39.719 END TEST nvmf_connect_disconnect 00:09:39.719 ************************************ 00:09:39.719 16:18:06 nvmf_tcp -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:09:39.719 16:18:06 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:09:39.719 16:18:06 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:09:39.719 16:18:06 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:39.720 ************************************ 00:09:39.720 START TEST nvmf_multitarget 00:09:39.720 ************************************ 00:09:39.720 16:18:06 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:09:39.720 * Looking for test storage... 
00:09:39.720 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:39.720 16:18:06 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:39.720 16:18:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:09:39.720 16:18:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:39.720 16:18:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:39.720 16:18:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:39.720 16:18:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:39.720 16:18:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:39.720 16:18:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:39.720 16:18:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:39.720 16:18:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:39.720 16:18:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:39.720 16:18:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:39.720 16:18:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:39.720 16:18:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:39.720 16:18:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:39.720 16:18:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:39.720 16:18:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:39.720 16:18:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:39.720 16:18:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:39.720 16:18:06 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:39.720 16:18:06 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:39.720 16:18:06 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:39.720 16:18:06 nvmf_tcp.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:39.720 16:18:06 nvmf_tcp.nvmf_multitarget -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:39.720 16:18:06 nvmf_tcp.nvmf_multitarget -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:39.720 16:18:06 nvmf_tcp.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:09:39.720 16:18:06 nvmf_tcp.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:39.720 16:18:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@47 -- # : 0 00:09:39.720 16:18:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:39.720 16:18:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:39.720 16:18:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:39.720 16:18:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:39.720 16:18:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:39.720 16:18:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:39.720 16:18:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@35 -- # '[' 0 -eq 1 
']' 00:09:39.720 16:18:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:39.720 16:18:06 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:09:39.720 16:18:06 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:09:39.720 16:18:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:39.720 16:18:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:39.720 16:18:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:39.720 16:18:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:39.720 16:18:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:39.720 16:18:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:39.720 16:18:06 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:39.720 16:18:06 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:39.720 16:18:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:39.720 16:18:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:39.720 16:18:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@285 -- # xtrace_disable 00:09:39.720 16:18:06 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:09:46.352 16:18:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:46.352 16:18:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # pci_devs=() 00:09:46.352 16:18:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:46.352 16:18:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:46.352 16:18:13 nvmf_tcp.nvmf_multitarget 
-- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:46.352 16:18:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:46.352 16:18:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:46.352 16:18:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # net_devs=() 00:09:46.352 16:18:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:46.352 16:18:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # e810=() 00:09:46.352 16:18:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # local -ga e810 00:09:46.352 16:18:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # x722=() 00:09:46.352 16:18:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # local -ga x722 00:09:46.352 16:18:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # mlx=() 00:09:46.352 16:18:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # local -ga mlx 00:09:46.352 16:18:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:46.352 16:18:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:46.352 16:18:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:46.352 16:18:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:46.352 16:18:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:46.352 16:18:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:46.352 16:18:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:46.352 16:18:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:46.352 16:18:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 
00:09:46.352 16:18:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:46.352 16:18:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:46.352 16:18:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:46.352 16:18:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:46.352 16:18:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:46.352 16:18:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:46.352 16:18:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:46.352 16:18:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:46.352 16:18:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:46.352 16:18:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:09:46.352 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:09:46.352 16:18:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:46.352 16:18:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:46.352 16:18:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:46.352 16:18:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:46.352 16:18:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:46.352 16:18:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:46.352 16:18:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:09:46.352 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:09:46.352 16:18:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:46.352 16:18:13 nvmf_tcp.nvmf_multitarget -- 
nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:46.352 16:18:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:46.352 16:18:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:46.352 16:18:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:46.352 16:18:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:46.352 16:18:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:46.352 16:18:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:46.352 16:18:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:46.352 16:18:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:46.352 16:18:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:46.352 16:18:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:46.352 16:18:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:46.352 16:18:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:46.352 16:18:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:46.352 16:18:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:09:46.352 Found net devices under 0000:4b:00.0: cvl_0_0 00:09:46.352 16:18:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:46.352 16:18:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:46.352 16:18:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:46.352 16:18:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:46.352 16:18:13 
nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:46.352 16:18:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:46.352 16:18:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:46.352 16:18:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:46.352 16:18:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:09:46.352 Found net devices under 0000:4b:00.1: cvl_0_1 00:09:46.352 16:18:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:46.352 16:18:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:46.352 16:18:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # is_hw=yes 00:09:46.352 16:18:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:46.352 16:18:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:46.352 16:18:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:46.352 16:18:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:46.352 16:18:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:46.352 16:18:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:46.352 16:18:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:46.352 16:18:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:46.352 16:18:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:46.352 16:18:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:46.352 16:18:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:46.352 16:18:13 
nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:46.352 16:18:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:46.352 16:18:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:46.352 16:18:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:46.352 16:18:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:46.613 16:18:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:46.613 16:18:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:46.613 16:18:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:46.613 16:18:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:46.613 16:18:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:46.613 16:18:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:46.613 16:18:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:46.613 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:46.613 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.584 ms 00:09:46.613 00:09:46.613 --- 10.0.0.2 ping statistics --- 00:09:46.613 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:46.613 rtt min/avg/max/mdev = 0.584/0.584/0.584/0.000 ms 00:09:46.613 16:18:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:46.613 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:46.613 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.204 ms 00:09:46.613 00:09:46.613 --- 10.0.0.1 ping statistics --- 00:09:46.613 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:46.613 rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms 00:09:46.613 16:18:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:46.613 16:18:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@422 -- # return 0 00:09:46.613 16:18:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:46.613 16:18:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:46.613 16:18:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:46.613 16:18:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:46.613 16:18:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:46.613 16:18:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:46.613 16:18:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:46.873 16:18:13 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:09:46.873 16:18:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:46.873 16:18:13 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@723 -- # xtrace_disable 00:09:46.873 16:18:13 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:09:46.873 16:18:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@481 -- # nvmfpid=2943579 00:09:46.873 16:18:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@482 -- # waitforlisten 2943579 00:09:46.873 16:18:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:46.873 16:18:13 nvmf_tcp.nvmf_multitarget -- 
common/autotest_common.sh@830 -- # '[' -z 2943579 ']' 00:09:46.873 16:18:13 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:46.873 16:18:13 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@835 -- # local max_retries=100 00:09:46.873 16:18:13 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:46.873 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:46.873 16:18:13 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@839 -- # xtrace_disable 00:09:46.873 16:18:13 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:09:46.873 [2024-06-07 16:18:13.539357] Starting SPDK v24.09-pre git sha1 5a57befde / DPDK 24.03.0 initialization... 00:09:46.873 [2024-06-07 16:18:13.539412] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:46.873 EAL: No free 2048 kB hugepages reported on node 1 00:09:46.873 [2024-06-07 16:18:13.606573] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:46.873 [2024-06-07 16:18:13.674625] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:46.873 [2024-06-07 16:18:13.674659] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:46.873 [2024-06-07 16:18:13.674666] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:46.873 [2024-06-07 16:18:13.674672] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:46.873 [2024-06-07 16:18:13.674678] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:09:46.873 [2024-06-07 16:18:13.674813] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:09:46.873 [2024-06-07 16:18:13.674931] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 2 00:09:46.873 [2024-06-07 16:18:13.675087] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:09:46.873 [2024-06-07 16:18:13.675088] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 3 00:09:47.817 16:18:14 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:09:47.817 16:18:14 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@863 -- # return 0 00:09:47.817 16:18:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:47.817 16:18:14 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@729 -- # xtrace_disable 00:09:47.817 16:18:14 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:09:47.817 16:18:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:47.817 16:18:14 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:09:47.817 16:18:14 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:09:47.817 16:18:14 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:09:47.817 16:18:14 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:09:47.817 16:18:14 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:09:47.817 "nvmf_tgt_1" 00:09:47.817 16:18:14 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@26 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:09:47.817 "nvmf_tgt_2" 00:09:47.817 16:18:14 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:09:47.817 16:18:14 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:09:48.077 16:18:14 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:09:48.077 16:18:14 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:09:48.077 true 00:09:48.077 16:18:14 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:09:48.077 true 00:09:48.338 16:18:14 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:09:48.338 16:18:14 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:09:48.338 16:18:15 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:09:48.338 16:18:15 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:09:48.338 16:18:15 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:09:48.338 16:18:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:48.338 16:18:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@117 -- # sync 00:09:48.338 16:18:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:48.338 16:18:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@120 -- # set +e 00:09:48.338 16:18:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:48.338 16:18:15 
nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:48.338 rmmod nvme_tcp 00:09:48.338 rmmod nvme_fabrics 00:09:48.338 rmmod nvme_keyring 00:09:48.338 16:18:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:48.338 16:18:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@124 -- # set -e 00:09:48.338 16:18:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@125 -- # return 0 00:09:48.338 16:18:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@489 -- # '[' -n 2943579 ']' 00:09:48.338 16:18:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@490 -- # killprocess 2943579 00:09:48.338 16:18:15 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@949 -- # '[' -z 2943579 ']' 00:09:48.338 16:18:15 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@953 -- # kill -0 2943579 00:09:48.338 16:18:15 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@954 -- # uname 00:09:48.338 16:18:15 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:09:48.338 16:18:15 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2943579 00:09:48.338 16:18:15 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:09:48.338 16:18:15 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:09:48.338 16:18:15 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2943579' 00:09:48.338 killing process with pid 2943579 00:09:48.338 16:18:15 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@968 -- # kill 2943579 00:09:48.338 16:18:15 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@973 -- # wait 2943579 00:09:48.600 16:18:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:48.600 16:18:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:48.600 16:18:15 nvmf_tcp.nvmf_multitarget -- 
nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:48.600 16:18:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:48.600 16:18:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:48.600 16:18:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:48.600 16:18:15 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:48.600 16:18:15 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:50.514 16:18:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:50.514 00:09:50.514 real 0m10.979s 00:09:50.514 user 0m9.057s 00:09:50.514 sys 0m5.662s 00:09:50.514 16:18:17 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1125 -- # xtrace_disable 00:09:50.514 16:18:17 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:09:50.514 ************************************ 00:09:50.514 END TEST nvmf_multitarget 00:09:50.514 ************************************ 00:09:50.775 16:18:17 nvmf_tcp -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:09:50.775 16:18:17 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:09:50.775 16:18:17 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:09:50.775 16:18:17 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:50.775 ************************************ 00:09:50.775 START TEST nvmf_rpc 00:09:50.775 ************************************ 00:09:50.775 16:18:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:09:50.775 * Looking for test storage... 
00:09:50.775 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:50.776 16:18:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:50.776 16:18:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:09:50.776 16:18:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:50.776 16:18:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:50.776 16:18:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:50.776 16:18:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:50.776 16:18:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:50.776 16:18:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:50.776 16:18:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:50.776 16:18:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:50.776 16:18:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:50.776 16:18:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:50.776 16:18:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:50.776 16:18:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:50.776 16:18:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:50.776 16:18:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:50.776 16:18:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:50.776 16:18:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:50.776 16:18:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:50.776 16:18:17 nvmf_tcp.nvmf_rpc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:50.776 16:18:17 nvmf_tcp.nvmf_rpc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:50.776 16:18:17 nvmf_tcp.nvmf_rpc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:50.776 16:18:17 nvmf_tcp.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:50.776 16:18:17 nvmf_tcp.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:50.776 16:18:17 nvmf_tcp.nvmf_rpc -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:50.776 16:18:17 nvmf_tcp.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:09:50.776 16:18:17 nvmf_tcp.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:50.776 16:18:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@47 -- # : 0 00:09:50.776 16:18:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:50.776 16:18:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:50.776 16:18:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:50.776 16:18:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:50.776 16:18:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:50.776 16:18:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:50.776 16:18:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:50.776 16:18:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@51 -- # 
have_pci_nics=0 00:09:50.776 16:18:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:09:50.776 16:18:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:09:50.776 16:18:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:50.776 16:18:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:50.776 16:18:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:50.776 16:18:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:50.776 16:18:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:50.776 16:18:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:50.776 16:18:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:50.776 16:18:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:50.776 16:18:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:50.776 16:18:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:50.776 16:18:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@285 -- # xtrace_disable 00:09:50.776 16:18:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:58.921 16:18:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:58.921 16:18:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # pci_devs=() 00:09:58.921 16:18:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:58.921 16:18:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:58.921 16:18:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:58.921 16:18:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:58.921 16:18:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:58.921 16:18:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # net_devs=() 00:09:58.921 16:18:24 
nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:58.921 16:18:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # e810=() 00:09:58.921 16:18:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # local -ga e810 00:09:58.921 16:18:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # x722=() 00:09:58.921 16:18:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # local -ga x722 00:09:58.921 16:18:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # mlx=() 00:09:58.921 16:18:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # local -ga mlx 00:09:58.921 16:18:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:58.921 16:18:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:58.921 16:18:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:58.921 16:18:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:58.921 16:18:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:58.921 16:18:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:58.921 16:18:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:58.921 16:18:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:58.921 16:18:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:58.921 16:18:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:58.921 16:18:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:58.921 16:18:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:58.921 16:18:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:58.921 16:18:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@327 -- # [[ e810 
== mlx5 ]] 00:09:58.921 16:18:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:58.921 16:18:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:58.921 16:18:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:58.921 16:18:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:58.921 16:18:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:09:58.921 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:09:58.921 16:18:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:58.921 16:18:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:58.921 16:18:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:58.921 16:18:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:58.921 16:18:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:58.921 16:18:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:58.921 16:18:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:09:58.921 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:09:58.921 16:18:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:58.921 16:18:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:58.921 16:18:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:58.921 16:18:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:58.921 16:18:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:58.921 16:18:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:58.921 16:18:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:58.921 16:18:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:58.921 16:18:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 
-- # for pci in "${pci_devs[@]}" 00:09:58.921 16:18:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:58.921 16:18:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:58.921 16:18:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:58.921 16:18:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:58.921 16:18:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:58.921 16:18:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:58.921 16:18:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:09:58.921 Found net devices under 0000:4b:00.0: cvl_0_0 00:09:58.921 16:18:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:58.921 16:18:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:58.921 16:18:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:58.921 16:18:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:58.921 16:18:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:58.921 16:18:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:58.921 16:18:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:58.921 16:18:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:58.921 16:18:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:09:58.921 Found net devices under 0000:4b:00.1: cvl_0_1 00:09:58.921 16:18:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:58.921 16:18:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:58.921 16:18:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # is_hw=yes 
00:09:58.921 16:18:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:58.921 16:18:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:58.921 16:18:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:58.921 16:18:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:58.921 16:18:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:58.921 16:18:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:58.921 16:18:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:58.921 16:18:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:58.921 16:18:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:58.921 16:18:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:58.921 16:18:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:58.921 16:18:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:58.921 16:18:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:58.921 16:18:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:58.921 16:18:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:58.921 16:18:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:58.921 16:18:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:58.921 16:18:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:58.921 16:18:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:58.921 16:18:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:58.922 16:18:24 
nvmf_tcp.nvmf_rpc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:58.922 16:18:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:58.922 16:18:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:58.922 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:58.922 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.674 ms 00:09:58.922 00:09:58.922 --- 10.0.0.2 ping statistics --- 00:09:58.922 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:58.922 rtt min/avg/max/mdev = 0.674/0.674/0.674/0.000 ms 00:09:58.922 16:18:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:58.922 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:58.922 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.369 ms 00:09:58.922 00:09:58.922 --- 10.0.0.1 ping statistics --- 00:09:58.922 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:58.922 rtt min/avg/max/mdev = 0.369/0.369/0.369/0.000 ms 00:09:58.922 16:18:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:58.922 16:18:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@422 -- # return 0 00:09:58.922 16:18:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:58.922 16:18:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:58.922 16:18:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:58.922 16:18:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:58.922 16:18:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:58.922 16:18:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:58.922 16:18:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:58.922 16:18:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:09:58.922 
16:18:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:58.922 16:18:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@723 -- # xtrace_disable 00:09:58.922 16:18:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:58.922 16:18:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@481 -- # nvmfpid=2948041 00:09:58.922 16:18:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@482 -- # waitforlisten 2948041 00:09:58.922 16:18:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:58.922 16:18:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@830 -- # '[' -z 2948041 ']' 00:09:58.922 16:18:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:58.922 16:18:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@835 -- # local max_retries=100 00:09:58.922 16:18:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:58.922 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:58.922 16:18:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@839 -- # xtrace_disable 00:09:58.922 16:18:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:58.922 [2024-06-07 16:18:24.641516] Starting SPDK v24.09-pre git sha1 5a57befde / DPDK 24.03.0 initialization... 
00:09:58.922 [2024-06-07 16:18:24.641579] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:58.922 EAL: No free 2048 kB hugepages reported on node 1 00:09:58.922 [2024-06-07 16:18:24.712798] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:58.922 [2024-06-07 16:18:24.787189] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:58.922 [2024-06-07 16:18:24.787226] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:58.922 [2024-06-07 16:18:24.787234] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:58.922 [2024-06-07 16:18:24.787240] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:58.922 [2024-06-07 16:18:24.787246] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:09:58.922 [2024-06-07 16:18:24.787384] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:09:58.922 [2024-06-07 16:18:24.787502] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 2 00:09:58.922 [2024-06-07 16:18:24.787815] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 3 00:09:58.922 [2024-06-07 16:18:24.787816] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:09:58.922 16:18:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:09:58.922 16:18:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@863 -- # return 0 00:09:58.922 16:18:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:58.922 16:18:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@729 -- # xtrace_disable 00:09:58.922 16:18:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:58.922 16:18:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:58.922 16:18:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:09:58.922 16:18:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:58.922 16:18:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:58.922 16:18:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:58.922 16:18:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:09:58.922 "tick_rate": 2400000000, 00:09:58.922 "poll_groups": [ 00:09:58.922 { 00:09:58.922 "name": "nvmf_tgt_poll_group_000", 00:09:58.922 "admin_qpairs": 0, 00:09:58.922 "io_qpairs": 0, 00:09:58.922 "current_admin_qpairs": 0, 00:09:58.922 "current_io_qpairs": 0, 00:09:58.922 "pending_bdev_io": 0, 00:09:58.922 "completed_nvme_io": 0, 00:09:58.922 "transports": [] 00:09:58.922 }, 00:09:58.922 { 00:09:58.922 "name": "nvmf_tgt_poll_group_001", 00:09:58.922 "admin_qpairs": 0, 00:09:58.922 "io_qpairs": 0, 00:09:58.922 "current_admin_qpairs": 
0, 00:09:58.922 "current_io_qpairs": 0, 00:09:58.922 "pending_bdev_io": 0, 00:09:58.922 "completed_nvme_io": 0, 00:09:58.922 "transports": [] 00:09:58.922 }, 00:09:58.922 { 00:09:58.922 "name": "nvmf_tgt_poll_group_002", 00:09:58.922 "admin_qpairs": 0, 00:09:58.922 "io_qpairs": 0, 00:09:58.922 "current_admin_qpairs": 0, 00:09:58.922 "current_io_qpairs": 0, 00:09:58.922 "pending_bdev_io": 0, 00:09:58.922 "completed_nvme_io": 0, 00:09:58.922 "transports": [] 00:09:58.922 }, 00:09:58.922 { 00:09:58.922 "name": "nvmf_tgt_poll_group_003", 00:09:58.922 "admin_qpairs": 0, 00:09:58.922 "io_qpairs": 0, 00:09:58.922 "current_admin_qpairs": 0, 00:09:58.922 "current_io_qpairs": 0, 00:09:58.922 "pending_bdev_io": 0, 00:09:58.922 "completed_nvme_io": 0, 00:09:58.922 "transports": [] 00:09:58.922 } 00:09:58.922 ] 00:09:58.922 }' 00:09:58.922 16:18:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:09:58.922 16:18:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:09:58.922 16:18:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:09:58.922 16:18:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:09:58.922 16:18:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:09:58.922 16:18:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:09:58.922 16:18:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:09:58.922 16:18:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:58.922 16:18:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:58.922 16:18:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:58.922 [2024-06-07 16:18:25.581295] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:58.922 16:18:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:58.922 16:18:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # 
rpc_cmd nvmf_get_stats 00:09:58.922 16:18:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:58.922 16:18:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:58.922 16:18:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:58.922 16:18:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:09:58.922 "tick_rate": 2400000000, 00:09:58.922 "poll_groups": [ 00:09:58.922 { 00:09:58.922 "name": "nvmf_tgt_poll_group_000", 00:09:58.922 "admin_qpairs": 0, 00:09:58.922 "io_qpairs": 0, 00:09:58.922 "current_admin_qpairs": 0, 00:09:58.922 "current_io_qpairs": 0, 00:09:58.922 "pending_bdev_io": 0, 00:09:58.922 "completed_nvme_io": 0, 00:09:58.922 "transports": [ 00:09:58.922 { 00:09:58.922 "trtype": "TCP" 00:09:58.922 } 00:09:58.922 ] 00:09:58.922 }, 00:09:58.922 { 00:09:58.922 "name": "nvmf_tgt_poll_group_001", 00:09:58.922 "admin_qpairs": 0, 00:09:58.922 "io_qpairs": 0, 00:09:58.922 "current_admin_qpairs": 0, 00:09:58.922 "current_io_qpairs": 0, 00:09:58.922 "pending_bdev_io": 0, 00:09:58.922 "completed_nvme_io": 0, 00:09:58.922 "transports": [ 00:09:58.922 { 00:09:58.922 "trtype": "TCP" 00:09:58.922 } 00:09:58.922 ] 00:09:58.922 }, 00:09:58.922 { 00:09:58.922 "name": "nvmf_tgt_poll_group_002", 00:09:58.922 "admin_qpairs": 0, 00:09:58.922 "io_qpairs": 0, 00:09:58.922 "current_admin_qpairs": 0, 00:09:58.922 "current_io_qpairs": 0, 00:09:58.922 "pending_bdev_io": 0, 00:09:58.922 "completed_nvme_io": 0, 00:09:58.922 "transports": [ 00:09:58.922 { 00:09:58.922 "trtype": "TCP" 00:09:58.922 } 00:09:58.922 ] 00:09:58.922 }, 00:09:58.922 { 00:09:58.922 "name": "nvmf_tgt_poll_group_003", 00:09:58.922 "admin_qpairs": 0, 00:09:58.922 "io_qpairs": 0, 00:09:58.922 "current_admin_qpairs": 0, 00:09:58.922 "current_io_qpairs": 0, 00:09:58.922 "pending_bdev_io": 0, 00:09:58.922 "completed_nvme_io": 0, 00:09:58.922 "transports": [ 00:09:58.922 { 00:09:58.922 "trtype": "TCP" 00:09:58.922 } 00:09:58.922 ] 00:09:58.922 } 
00:09:58.922 ] 00:09:58.922 }' 00:09:58.922 16:18:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:09:58.922 16:18:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:09:58.922 16:18:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:09:58.922 16:18:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:09:58.922 16:18:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:09:58.922 16:18:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:09:58.923 16:18:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:09:58.923 16:18:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:09:58.923 16:18:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:09:58.923 16:18:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:09:58.923 16:18:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:09:58.923 16:18:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:09:58.923 16:18:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:09:58.923 16:18:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:09:58.923 16:18:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:58.923 16:18:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:58.923 Malloc1 00:09:58.923 16:18:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:58.923 16:18:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:58.923 16:18:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:58.923 16:18:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:58.923 16:18:25 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:58.923 16:18:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:58.923 16:18:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:58.923 16:18:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:58.923 16:18:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:58.923 16:18:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:09:58.923 16:18:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:58.923 16:18:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:58.923 16:18:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:58.923 16:18:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:58.923 16:18:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:58.923 16:18:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:58.923 [2024-06-07 16:18:25.769091] tcp.c: 982:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:59.184 16:18:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:59.184 16:18:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.2 -s 4420 00:09:59.184 16:18:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@649 -- # local es=0 00:09:59.184 16:18:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # valid_exec_arg nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.2 -s 4420 00:09:59.184 16:18:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@637 -- # local arg=nvme 00:09:59.185 16:18:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:09:59.185 16:18:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@641 -- # type -t nvme 00:09:59.185 16:18:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:09:59.185 16:18:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@643 -- # type -P nvme 00:09:59.185 16:18:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:09:59.185 16:18:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@643 -- # arg=/usr/sbin/nvme 00:09:59.185 16:18:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@643 -- # [[ -x /usr/sbin/nvme ]] 00:09:59.185 16:18:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@652 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.2 -s 4420 00:09:59.185 [2024-06-07 16:18:25.795959] ctrlr.c: 818:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be' 00:09:59.185 Failed to write to /dev/nvme-fabrics: Input/output error 00:09:59.185 could not add new controller: failed to write to nvme-fabrics device 00:09:59.185 16:18:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@652 -- # es=1 00:09:59.185 16:18:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:09:59.185 16:18:25 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@671 -- # [[ -n '' ]] 00:09:59.185 16:18:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:09:59.185 16:18:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:59.185 16:18:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:59.185 16:18:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:59.185 16:18:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:59.185 16:18:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:00.569 16:18:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:10:00.569 16:18:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1197 -- # local i=0 00:10:00.569 16:18:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:10:00.570 16:18:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:10:00.570 16:18:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # sleep 2 00:10:03.114 16:18:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:10:03.114 16:18:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:10:03.114 16:18:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:10:03.114 16:18:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:10:03.114 16:18:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:10:03.114 16:18:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # return 0 00:10:03.114 16:18:29 nvmf_tcp.nvmf_rpc 
-- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:03.114 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:03.114 16:18:29 nvmf_tcp.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:03.114 16:18:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1218 -- # local i=0 00:10:03.114 16:18:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:10:03.114 16:18:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:03.114 16:18:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:10:03.114 16:18:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1226 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:03.114 16:18:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1230 -- # return 0 00:10:03.114 16:18:29 nvmf_tcp.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:03.114 16:18:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:03.114 16:18:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:03.114 16:18:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:03.114 16:18:29 nvmf_tcp.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:03.114 16:18:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@649 -- # local es=0 00:10:03.114 16:18:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:03.114 16:18:29 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@637 -- # local arg=nvme 00:10:03.114 16:18:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:10:03.114 16:18:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@641 -- # type -t nvme 00:10:03.114 16:18:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:10:03.114 16:18:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@643 -- # type -P nvme 00:10:03.114 16:18:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:10:03.114 16:18:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@643 -- # arg=/usr/sbin/nvme 00:10:03.114 16:18:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@643 -- # [[ -x /usr/sbin/nvme ]] 00:10:03.114 16:18:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@652 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:03.114 [2024-06-07 16:18:29.550453] ctrlr.c: 818:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be' 00:10:03.114 Failed to write to /dev/nvme-fabrics: Input/output error 00:10:03.114 could not add new controller: failed to write to nvme-fabrics device 00:10:03.114 16:18:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@652 -- # es=1 00:10:03.114 16:18:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:10:03.114 16:18:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:10:03.114 16:18:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:10:03.114 16:18:29 nvmf_tcp.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:10:03.114 16:18:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # 
xtrace_disable 00:10:03.114 16:18:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:03.114 16:18:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:03.114 16:18:29 nvmf_tcp.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:04.498 16:18:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:10:04.498 16:18:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1197 -- # local i=0 00:10:04.498 16:18:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:10:04.498 16:18:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:10:04.498 16:18:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # sleep 2 00:10:06.413 16:18:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:10:06.413 16:18:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:10:06.413 16:18:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:10:06.413 16:18:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:10:06.413 16:18:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:10:06.413 16:18:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # return 0 00:10:06.413 16:18:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:06.413 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:06.413 16:18:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:06.413 16:18:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1218 -- # local i=0 00:10:06.413 16:18:33 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:10:06.413 16:18:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:06.413 16:18:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:10:06.413 16:18:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1226 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:06.413 16:18:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1230 -- # return 0 00:10:06.413 16:18:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:06.413 16:18:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:06.413 16:18:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:06.413 16:18:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:06.413 16:18:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:10:06.413 16:18:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:10:06.413 16:18:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:06.413 16:18:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:06.413 16:18:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:06.413 16:18:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:06.413 16:18:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:06.413 16:18:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:06.413 16:18:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:06.413 [2024-06-07 16:18:33.220706] tcp.c: 982:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:06.413 16:18:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 
00:10:06.413 16:18:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:10:06.413 16:18:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:06.413 16:18:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:06.413 16:18:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:06.413 16:18:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:06.413 16:18:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:06.413 16:18:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:06.413 16:18:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:06.413 16:18:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:08.325 16:18:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:10:08.325 16:18:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1197 -- # local i=0 00:10:08.325 16:18:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:10:08.325 16:18:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:10:08.325 16:18:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # sleep 2 00:10:10.239 16:18:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:10:10.239 16:18:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:10:10.239 16:18:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:10:10.239 16:18:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:10:10.239 16:18:36 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:10:10.239 16:18:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # return 0 00:10:10.239 16:18:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:10.239 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:10.239 16:18:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:10.239 16:18:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1218 -- # local i=0 00:10:10.239 16:18:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:10:10.239 16:18:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:10.239 16:18:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:10:10.239 16:18:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1226 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:10.239 16:18:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1230 -- # return 0 00:10:10.239 16:18:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:10.239 16:18:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:10.239 16:18:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:10.239 16:18:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:10.239 16:18:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:10.239 16:18:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:10.239 16:18:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:10.239 16:18:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:10.239 16:18:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:10:10.239 16:18:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 
-- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:10.239 16:18:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:10.239 16:18:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:10.239 16:18:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:10.239 16:18:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:10.239 16:18:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:10.239 16:18:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:10.239 [2024-06-07 16:18:36.965781] tcp.c: 982:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:10.239 16:18:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:10.239 16:18:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:10:10.239 16:18:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:10.239 16:18:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:10.239 16:18:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:10.239 16:18:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:10.239 16:18:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:10.239 16:18:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:10.239 16:18:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:10.239 16:18:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:12.217 16:18:38 
nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:10:12.217 16:18:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1197 -- # local i=0 00:10:12.217 16:18:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:10:12.217 16:18:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:10:12.217 16:18:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # sleep 2 00:10:14.131 16:18:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:10:14.131 16:18:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:10:14.131 16:18:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:10:14.131 16:18:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:10:14.131 16:18:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:10:14.131 16:18:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # return 0 00:10:14.131 16:18:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:14.131 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:14.131 16:18:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:14.131 16:18:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1218 -- # local i=0 00:10:14.131 16:18:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:10:14.131 16:18:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:14.131 16:18:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:10:14.131 16:18:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1226 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:14.131 16:18:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1230 -- # return 0 00:10:14.131 16:18:40 
nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:14.131 16:18:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:14.131 16:18:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:14.131 16:18:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:14.131 16:18:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:14.131 16:18:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:14.131 16:18:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:14.131 16:18:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:14.131 16:18:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:10:14.131 16:18:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:14.131 16:18:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:14.131 16:18:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:14.131 16:18:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:14.131 16:18:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:14.131 16:18:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:14.131 16:18:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:14.131 [2024-06-07 16:18:40.707856] tcp.c: 982:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:14.131 16:18:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:14.131 16:18:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:10:14.131 16:18:40 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:14.131 16:18:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:14.131 16:18:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:14.131 16:18:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:14.131 16:18:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:14.131 16:18:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:14.131 16:18:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:14.131 16:18:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:15.515 16:18:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:10:15.515 16:18:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1197 -- # local i=0 00:10:15.515 16:18:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:10:15.515 16:18:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:10:15.515 16:18:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # sleep 2 00:10:18.059 16:18:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:10:18.059 16:18:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:10:18.059 16:18:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:10:18.059 16:18:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:10:18.059 16:18:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:10:18.059 16:18:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # return 
0 00:10:18.059 16:18:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:18.059 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:18.059 16:18:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:18.059 16:18:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1218 -- # local i=0 00:10:18.059 16:18:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:10:18.059 16:18:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:18.059 16:18:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:10:18.059 16:18:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1226 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:18.059 16:18:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1230 -- # return 0 00:10:18.059 16:18:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:18.059 16:18:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:18.059 16:18:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:18.059 16:18:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:18.059 16:18:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:18.059 16:18:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:18.059 16:18:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:18.059 16:18:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:18.059 16:18:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:10:18.059 16:18:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:18.059 16:18:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # 
xtrace_disable 00:10:18.059 16:18:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:18.059 16:18:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:18.059 16:18:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:18.059 16:18:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:18.059 16:18:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:18.059 [2024-06-07 16:18:44.458311] tcp.c: 982:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:18.059 16:18:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:18.059 16:18:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:10:18.059 16:18:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:18.059 16:18:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:18.059 16:18:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:18.059 16:18:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:18.059 16:18:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:18.059 16:18:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:18.059 16:18:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:18.059 16:18:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:19.443 16:18:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:10:19.443 16:18:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1197 -- # local i=0 
00:10:19.443 16:18:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:10:19.443 16:18:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:10:19.443 16:18:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # sleep 2 00:10:21.359 16:18:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:10:21.359 16:18:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:10:21.359 16:18:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:10:21.359 16:18:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:10:21.359 16:18:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:10:21.359 16:18:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # return 0 00:10:21.359 16:18:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:21.359 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:21.359 16:18:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:21.359 16:18:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1218 -- # local i=0 00:10:21.359 16:18:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:10:21.359 16:18:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:21.359 16:18:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:10:21.359 16:18:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1226 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:21.359 16:18:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1230 -- # return 0 00:10:21.359 16:18:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:21.359 16:18:48 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@560 -- # xtrace_disable 00:10:21.359 16:18:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:21.359 16:18:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:21.359 16:18:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:21.359 16:18:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:21.359 16:18:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:21.359 16:18:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:21.359 16:18:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:10:21.359 16:18:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:21.359 16:18:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:21.359 16:18:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:21.359 16:18:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:21.359 16:18:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:21.359 16:18:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:21.359 16:18:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:21.359 [2024-06-07 16:18:48.200265] tcp.c: 982:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:21.359 16:18:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:21.359 16:18:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:10:21.359 16:18:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:21.359 16:18:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 
00:10:21.620 16:18:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:21.620 16:18:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:21.620 16:18:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:21.620 16:18:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:21.620 16:18:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:21.620 16:18:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:23.006 16:18:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:10:23.006 16:18:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1197 -- # local i=0 00:10:23.006 16:18:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:10:23.006 16:18:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:10:23.006 16:18:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # sleep 2 00:10:24.920 16:18:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:10:24.920 16:18:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:10:24.920 16:18:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:10:24.920 16:18:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:10:24.920 16:18:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:10:24.920 16:18:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # return 0 00:10:24.920 16:18:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:25.182 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:25.182 16:18:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:25.182 16:18:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1218 -- # local i=0 00:10:25.182 16:18:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:10:25.182 16:18:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:25.182 16:18:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:10:25.182 16:18:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1226 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:25.182 16:18:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1230 -- # return 0 00:10:25.182 16:18:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:25.182 16:18:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:25.182 16:18:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:25.182 16:18:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:25.182 16:18:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:25.182 16:18:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:25.182 16:18:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:25.182 16:18:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:25.182 16:18:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:10:25.182 16:18:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:10:25.182 16:18:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:25.182 16:18:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:25.182 16:18:51 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:25.182 16:18:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:25.182 16:18:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:25.182 16:18:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:25.182 16:18:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:25.182 [2024-06-07 16:18:51.917872] tcp.c: 982:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:25.182 16:18:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:25.182 16:18:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:25.182 16:18:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:25.182 16:18:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:25.182 16:18:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:25.182 16:18:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:25.182 16:18:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:25.182 16:18:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:25.182 16:18:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:25.182 16:18:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:25.182 16:18:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:25.182 16:18:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:25.182 16:18:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:25.182 16:18:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:25.182 16:18:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:25.182 16:18:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:25.182 16:18:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:25.182 16:18:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:10:25.182 16:18:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:25.182 16:18:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:25.182 16:18:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:25.182 16:18:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:25.182 16:18:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:25.182 16:18:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:25.182 16:18:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:25.182 [2024-06-07 16:18:51.978002] tcp.c: 982:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:25.182 16:18:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:25.182 16:18:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:25.182 16:18:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:25.182 16:18:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:25.182 16:18:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:25.182 16:18:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:25.182 16:18:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # 
xtrace_disable 00:10:25.182 16:18:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:25.182 16:18:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:25.182 16:18:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:25.182 16:18:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:25.182 16:18:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:25.182 16:18:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:25.182 16:18:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:25.182 16:18:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:25.182 16:18:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:25.182 16:18:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:25.182 16:18:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:10:25.182 16:18:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:25.182 16:18:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:25.182 16:18:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:25.182 16:18:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:25.443 16:18:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:25.443 16:18:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:25.443 16:18:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:25.443 [2024-06-07 16:18:52.042174] tcp.c: 982:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:25.443 16:18:52 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:25.443 16:18:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:25.443 16:18:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:25.443 16:18:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:25.443 16:18:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:25.443 16:18:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:25.443 16:18:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:25.443 16:18:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:25.443 16:18:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:25.443 16:18:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:25.443 16:18:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:25.443 16:18:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:25.443 16:18:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:25.443 16:18:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:25.443 16:18:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:25.443 16:18:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:25.443 16:18:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:25.443 16:18:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:10:25.443 16:18:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:25.443 16:18:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:25.443 16:18:52 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:25.443 16:18:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:25.443 16:18:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:25.443 16:18:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:25.443 16:18:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:25.443 [2024-06-07 16:18:52.102355] tcp.c: 982:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:25.443 16:18:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:25.443 16:18:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:25.443 16:18:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:25.443 16:18:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:25.443 16:18:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:25.443 16:18:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:25.443 16:18:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:25.443 16:18:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:25.443 16:18:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:25.443 16:18:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:25.443 16:18:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:25.443 16:18:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:25.443 16:18:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:25.443 16:18:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:25.443 16:18:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:25.443 16:18:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:25.443 16:18:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:25.443 16:18:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:10:25.443 16:18:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:25.444 16:18:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:25.444 16:18:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:25.444 16:18:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:25.444 16:18:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:25.444 16:18:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:25.444 16:18:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:25.444 [2024-06-07 16:18:52.162561] tcp.c: 982:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:25.444 16:18:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:25.444 16:18:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:25.444 16:18:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:25.444 16:18:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:25.444 16:18:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:25.444 16:18:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:25.444 16:18:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # 
xtrace_disable 00:10:25.444 16:18:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:25.444 16:18:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:25.444 16:18:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:25.444 16:18:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:25.444 16:18:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:25.444 16:18:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:25.444 16:18:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:25.444 16:18:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:25.444 16:18:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:25.444 16:18:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:25.444 16:18:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:10:25.444 16:18:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:25.444 16:18:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:25.444 16:18:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:25.444 16:18:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:10:25.444 "tick_rate": 2400000000, 00:10:25.444 "poll_groups": [ 00:10:25.444 { 00:10:25.444 "name": "nvmf_tgt_poll_group_000", 00:10:25.444 "admin_qpairs": 0, 00:10:25.444 "io_qpairs": 224, 00:10:25.444 "current_admin_qpairs": 0, 00:10:25.444 "current_io_qpairs": 0, 00:10:25.444 "pending_bdev_io": 0, 00:10:25.444 "completed_nvme_io": 224, 00:10:25.444 "transports": [ 00:10:25.444 { 00:10:25.444 "trtype": "TCP" 00:10:25.444 } 00:10:25.444 ] 00:10:25.444 }, 00:10:25.444 { 00:10:25.444 "name": "nvmf_tgt_poll_group_001", 00:10:25.444 "admin_qpairs": 1, 00:10:25.444 "io_qpairs": 223, 
00:10:25.444 "current_admin_qpairs": 0, 00:10:25.444 "current_io_qpairs": 0, 00:10:25.444 "pending_bdev_io": 0, 00:10:25.444 "completed_nvme_io": 325, 00:10:25.444 "transports": [ 00:10:25.444 { 00:10:25.444 "trtype": "TCP" 00:10:25.444 } 00:10:25.444 ] 00:10:25.444 }, 00:10:25.444 { 00:10:25.444 "name": "nvmf_tgt_poll_group_002", 00:10:25.444 "admin_qpairs": 6, 00:10:25.444 "io_qpairs": 218, 00:10:25.444 "current_admin_qpairs": 0, 00:10:25.444 "current_io_qpairs": 0, 00:10:25.444 "pending_bdev_io": 0, 00:10:25.444 "completed_nvme_io": 466, 00:10:25.444 "transports": [ 00:10:25.444 { 00:10:25.444 "trtype": "TCP" 00:10:25.444 } 00:10:25.444 ] 00:10:25.444 }, 00:10:25.444 { 00:10:25.444 "name": "nvmf_tgt_poll_group_003", 00:10:25.444 "admin_qpairs": 0, 00:10:25.444 "io_qpairs": 224, 00:10:25.444 "current_admin_qpairs": 0, 00:10:25.444 "current_io_qpairs": 0, 00:10:25.444 "pending_bdev_io": 0, 00:10:25.444 "completed_nvme_io": 224, 00:10:25.444 "transports": [ 00:10:25.444 { 00:10:25.444 "trtype": "TCP" 00:10:25.444 } 00:10:25.444 ] 00:10:25.444 } 00:10:25.444 ] 00:10:25.444 }' 00:10:25.444 16:18:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:10:25.444 16:18:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:10:25.444 16:18:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:10:25.444 16:18:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:10:25.444 16:18:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:10:25.444 16:18:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:10:25.444 16:18:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:10:25.444 16:18:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:10:25.444 16:18:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:10:25.705 16:18:52 nvmf_tcp.nvmf_rpc -- 
target/rpc.sh@113 -- # (( 889 > 0 )) 00:10:25.705 16:18:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:10:25.705 16:18:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:10:25.705 16:18:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:10:25.705 16:18:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:25.705 16:18:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@117 -- # sync 00:10:25.705 16:18:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:25.705 16:18:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@120 -- # set +e 00:10:25.705 16:18:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:25.705 16:18:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:25.705 rmmod nvme_tcp 00:10:25.705 rmmod nvme_fabrics 00:10:25.705 rmmod nvme_keyring 00:10:25.705 16:18:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:25.705 16:18:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@124 -- # set -e 00:10:25.705 16:18:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@125 -- # return 0 00:10:25.705 16:18:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@489 -- # '[' -n 2948041 ']' 00:10:25.705 16:18:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@490 -- # killprocess 2948041 00:10:25.705 16:18:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@949 -- # '[' -z 2948041 ']' 00:10:25.705 16:18:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@953 -- # kill -0 2948041 00:10:25.705 16:18:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@954 -- # uname 00:10:25.705 16:18:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:10:25.705 16:18:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2948041 00:10:25.705 16:18:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:10:25.705 16:18:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:10:25.705 
16:18:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2948041' 00:10:25.705 killing process with pid 2948041 00:10:25.705 16:18:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@968 -- # kill 2948041 00:10:25.705 16:18:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@973 -- # wait 2948041 00:10:25.965 16:18:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:25.965 16:18:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:25.965 16:18:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:25.965 16:18:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:25.965 16:18:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:25.965 16:18:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:25.965 16:18:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:25.965 16:18:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:27.888 16:18:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:27.888 00:10:27.888 real 0m37.241s 00:10:27.888 user 1m53.053s 00:10:27.888 sys 0m7.099s 00:10:27.888 16:18:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:10:27.888 16:18:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:27.888 ************************************ 00:10:27.888 END TEST nvmf_rpc 00:10:27.888 ************************************ 00:10:27.888 16:18:54 nvmf_tcp -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:10:27.888 16:18:54 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:10:27.888 16:18:54 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:10:27.888 16:18:54 nvmf_tcp -- common/autotest_common.sh@10 -- 
# set +x 00:10:27.888 ************************************ 00:10:27.888 START TEST nvmf_invalid 00:10:27.888 ************************************ 00:10:27.888 16:18:54 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:10:28.149 * Looking for test storage... 00:10:28.149 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:28.149 16:18:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:28.149 16:18:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:10:28.149 16:18:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:28.149 16:18:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:28.149 16:18:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:28.149 16:18:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:28.149 16:18:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:28.149 16:18:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:28.149 16:18:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:28.149 16:18:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:28.149 16:18:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:28.149 16:18:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:28.149 16:18:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:28.149 16:18:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:28.149 16:18:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:28.149 16:18:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:28.149 16:18:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:28.149 16:18:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:28.149 16:18:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:28.149 16:18:54 nvmf_tcp.nvmf_invalid -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:28.149 16:18:54 nvmf_tcp.nvmf_invalid -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:28.149 16:18:54 nvmf_tcp.nvmf_invalid -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:28.149 16:18:54 nvmf_tcp.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:28.149 16:18:54 nvmf_tcp.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:28.149 16:18:54 nvmf_tcp.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:28.149 16:18:54 nvmf_tcp.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:10:28.149 16:18:54 nvmf_tcp.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:28.149 16:18:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@47 -- # : 0 00:10:28.149 16:18:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 
00:10:28.149 16:18:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:28.149 16:18:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:28.149 16:18:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:28.149 16:18:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:28.149 16:18:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:28.149 16:18:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:28.149 16:18:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:28.149 16:18:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:10:28.149 16:18:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:28.149 16:18:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:10:28.149 16:18:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:10:28.149 16:18:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:10:28.149 16:18:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:10:28.149 16:18:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:28.149 16:18:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:28.149 16:18:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:28.149 16:18:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:28.149 16:18:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:28.149 16:18:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:28.149 16:18:54 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:10:28.149 16:18:54 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:28.149 16:18:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:28.149 16:18:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:28.149 16:18:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@285 -- # xtrace_disable 00:10:28.149 16:18:54 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:10:34.736 16:19:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:34.736 16:19:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # pci_devs=() 00:10:34.736 16:19:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:34.736 16:19:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:34.736 16:19:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:34.736 16:19:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:34.736 16:19:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:34.736 16:19:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # net_devs=() 00:10:34.736 16:19:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:34.736 16:19:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # e810=() 00:10:34.736 16:19:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # local -ga e810 00:10:34.736 16:19:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # x722=() 00:10:34.736 16:19:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # local -ga x722 00:10:34.736 16:19:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # mlx=() 00:10:34.736 16:19:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # local -ga mlx 00:10:34.736 16:19:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:34.736 16:19:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@302 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:34.736 16:19:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:34.736 16:19:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:34.736 16:19:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:34.736 16:19:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:34.736 16:19:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:34.736 16:19:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:34.736 16:19:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:34.736 16:19:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:34.736 16:19:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:34.736 16:19:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:34.736 16:19:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:34.736 16:19:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:34.736 16:19:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:34.736 16:19:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:34.736 16:19:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:34.736 16:19:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:34.736 16:19:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:10:34.736 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:10:34.736 16:19:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:34.737 16:19:01 
nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:34.737 16:19:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:34.737 16:19:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:34.737 16:19:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:34.737 16:19:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:34.737 16:19:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:10:34.737 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:10:34.737 16:19:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:34.737 16:19:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:34.737 16:19:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:34.737 16:19:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:34.737 16:19:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:34.737 16:19:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:34.737 16:19:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:34.737 16:19:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:34.737 16:19:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:34.737 16:19:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:34.737 16:19:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:34.737 16:19:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:34.737 16:19:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:34.737 16:19:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:34.737 16:19:01 nvmf_tcp.nvmf_invalid -- 
nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:34.737 16:19:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:10:34.737 Found net devices under 0000:4b:00.0: cvl_0_0 00:10:34.737 16:19:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:34.737 16:19:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:34.737 16:19:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:34.737 16:19:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:34.737 16:19:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:34.737 16:19:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:34.737 16:19:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:34.737 16:19:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:34.737 16:19:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:10:34.737 Found net devices under 0000:4b:00.1: cvl_0_1 00:10:34.737 16:19:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:34.737 16:19:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:34.737 16:19:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # is_hw=yes 00:10:34.737 16:19:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:34.737 16:19:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:34.737 16:19:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:34.737 16:19:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:34.737 16:19:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:34.737 16:19:01 
nvmf_tcp.nvmf_invalid -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:34.737 16:19:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:34.737 16:19:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:34.737 16:19:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:34.737 16:19:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:34.737 16:19:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:34.737 16:19:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:34.737 16:19:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:34.998 16:19:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:34.998 16:19:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:34.998 16:19:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:34.998 16:19:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:34.998 16:19:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:34.998 16:19:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:34.998 16:19:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:34.998 16:19:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:35.259 16:19:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:35.259 16:19:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:35.259 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:35.259 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.587 ms 00:10:35.259 00:10:35.259 --- 10.0.0.2 ping statistics --- 00:10:35.259 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:35.259 rtt min/avg/max/mdev = 0.587/0.587/0.587/0.000 ms 00:10:35.259 16:19:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:35.259 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:35.259 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.246 ms 00:10:35.259 00:10:35.259 --- 10.0.0.1 ping statistics --- 00:10:35.259 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:35.259 rtt min/avg/max/mdev = 0.246/0.246/0.246/0.000 ms 00:10:35.259 16:19:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:35.259 16:19:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@422 -- # return 0 00:10:35.259 16:19:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:35.259 16:19:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:35.259 16:19:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:35.259 16:19:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:35.259 16:19:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:35.259 16:19:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:35.259 16:19:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:35.259 16:19:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:10:35.259 16:19:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:35.259 16:19:01 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@723 -- # xtrace_disable 00:10:35.259 16:19:01 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:10:35.259 16:19:01 nvmf_tcp.nvmf_invalid -- 
nvmf/common.sh@481 -- # nvmfpid=2957814 00:10:35.259 16:19:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@482 -- # waitforlisten 2957814 00:10:35.259 16:19:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:35.259 16:19:01 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@830 -- # '[' -z 2957814 ']' 00:10:35.259 16:19:01 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:35.259 16:19:01 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@835 -- # local max_retries=100 00:10:35.259 16:19:01 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:35.259 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:35.259 16:19:01 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@839 -- # xtrace_disable 00:10:35.259 16:19:01 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:10:35.259 [2024-06-07 16:19:02.004772] Starting SPDK v24.09-pre git sha1 5a57befde / DPDK 24.03.0 initialization... 00:10:35.259 [2024-06-07 16:19:02.004839] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:35.259 EAL: No free 2048 kB hugepages reported on node 1 00:10:35.259 [2024-06-07 16:19:02.076317] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:35.520 [2024-06-07 16:19:02.151171] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:35.520 [2024-06-07 16:19:02.151207] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:10:35.520 [2024-06-07 16:19:02.151215] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:35.520 [2024-06-07 16:19:02.151221] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:35.520 [2024-06-07 16:19:02.151227] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:35.520 [2024-06-07 16:19:02.151366] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:10:35.520 [2024-06-07 16:19:02.151484] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 2 00:10:35.520 [2024-06-07 16:19:02.151812] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 3 00:10:35.520 [2024-06-07 16:19:02.151814] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:10:36.092 16:19:02 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:10:36.092 16:19:02 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@863 -- # return 0 00:10:36.092 16:19:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:36.092 16:19:02 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@729 -- # xtrace_disable 00:10:36.092 16:19:02 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:10:36.092 16:19:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:36.092 16:19:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:10:36.092 16:19:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode31487 00:10:36.352 [2024-06-07 16:19:02.961327] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:10:36.352 16:19:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- 
# out='request: 00:10:36.352 { 00:10:36.352 "nqn": "nqn.2016-06.io.spdk:cnode31487", 00:10:36.352 "tgt_name": "foobar", 00:10:36.352 "method": "nvmf_create_subsystem", 00:10:36.352 "req_id": 1 00:10:36.352 } 00:10:36.352 Got JSON-RPC error response 00:10:36.352 response: 00:10:36.352 { 00:10:36.352 "code": -32603, 00:10:36.352 "message": "Unable to find target foobar" 00:10:36.352 }' 00:10:36.353 16:19:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:10:36.353 { 00:10:36.353 "nqn": "nqn.2016-06.io.spdk:cnode31487", 00:10:36.353 "tgt_name": "foobar", 00:10:36.353 "method": "nvmf_create_subsystem", 00:10:36.353 "req_id": 1 00:10:36.353 } 00:10:36.353 Got JSON-RPC error response 00:10:36.353 response: 00:10:36.353 { 00:10:36.353 "code": -32603, 00:10:36.353 "message": "Unable to find target foobar" 00:10:36.353 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:10:36.353 16:19:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:10:36.353 16:19:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode27615 00:10:36.353 [2024-06-07 16:19:03.137917] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode27615: invalid serial number 'SPDKISFASTANDAWESOME' 00:10:36.353 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:10:36.353 { 00:10:36.353 "nqn": "nqn.2016-06.io.spdk:cnode27615", 00:10:36.353 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:10:36.353 "method": "nvmf_create_subsystem", 00:10:36.353 "req_id": 1 00:10:36.353 } 00:10:36.353 Got JSON-RPC error response 00:10:36.353 response: 00:10:36.353 { 00:10:36.353 "code": -32602, 00:10:36.353 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:10:36.353 }' 00:10:36.353 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:10:36.353 { 00:10:36.353 "nqn": 
"nqn.2016-06.io.spdk:cnode27615", 00:10:36.353 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:10:36.353 "method": "nvmf_create_subsystem", 00:10:36.353 "req_id": 1 00:10:36.353 } 00:10:36.353 Got JSON-RPC error response 00:10:36.353 response: 00:10:36.353 { 00:10:36.353 "code": -32602, 00:10:36.353 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:10:36.353 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:10:36.353 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:10:36.353 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode8898 00:10:36.645 [2024-06-07 16:19:03.314518] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode8898: invalid model number 'SPDK_Controller' 00:10:36.645 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:10:36.645 { 00:10:36.645 "nqn": "nqn.2016-06.io.spdk:cnode8898", 00:10:36.645 "model_number": "SPDK_Controller\u001f", 00:10:36.645 "method": "nvmf_create_subsystem", 00:10:36.645 "req_id": 1 00:10:36.645 } 00:10:36.645 Got JSON-RPC error response 00:10:36.645 response: 00:10:36.645 { 00:10:36.645 "code": -32602, 00:10:36.645 "message": "Invalid MN SPDK_Controller\u001f" 00:10:36.645 }' 00:10:36.645 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:10:36.645 { 00:10:36.645 "nqn": "nqn.2016-06.io.spdk:cnode8898", 00:10:36.645 "model_number": "SPDK_Controller\u001f", 00:10:36.645 "method": "nvmf_create_subsystem", 00:10:36.645 "req_id": 1 00:10:36.645 } 00:10:36.645 Got JSON-RPC error response 00:10:36.645 response: 00:10:36.645 { 00:10:36.645 "code": -32602, 00:10:36.645 "message": "Invalid MN SPDK_Controller\u001f" 00:10:36.645 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:10:36.645 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:10:36.645 16:19:03 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@19 -- # local length=21 ll 00:10:36.645 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:10:36.645 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:10:36.645 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:10:36.645 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:10:36.645 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:36.645 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103 00:10:36.645 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67' 00:10:36.645 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=g 00:10:36.645 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:36.645 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:36.645 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103 00:10:36.645 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67' 00:10:36.645 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=g 00:10:36.645 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:36.645 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:36.645 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:10:36.645 16:19:03 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@25 -- # echo -e '\x4a' 00:10:36.645 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:10:36.645 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:36.645 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:36.645 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:10:36.645 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:10:36.645 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:10:36.645 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:36.645 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:36.645 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:10:36.645 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:10:36.645 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:10:36.645 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:36.645 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:36.645 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:10:36.645 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:10:36.645 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:10:36.645 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:36.645 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:36.645 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:10:36.645 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:10:36.645 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:10:36.645 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:36.645 16:19:03 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@24 -- # (( ll < length )) 00:10:36.645 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:10:36.645 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:10:36.645 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:10:36.645 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:36.645 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:36.645 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:10:36.645 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:10:36.645 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:10:36.645 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:36.645 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:36.645 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:10:36.645 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:10:36.645 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:10:36.645 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:36.645 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:36.646 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:10:36.646 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:10:36.646 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:10:36.646 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:36.646 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:36.646 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:10:36.646 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:10:36.646 16:19:03 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@25 -- # string+=h 00:10:36.646 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:36.646 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:36.646 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 102 00:10:36.646 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x66' 00:10:36.646 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=f 00:10:36.646 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:36.646 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:36.646 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:10:36.646 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:10:36.646 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:10:36.646 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:36.646 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:36.646 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:10:36.646 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:10:36.646 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:10:36.646 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:36.646 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:36.646 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45 00:10:36.646 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:10:36.646 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:10:36.646 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:36.646 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:36.646 16:19:03 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@25 -- # printf %x 91 00:10:36.646 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:10:36.646 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:10:36.646 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:36.646 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:36.646 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:10:36.646 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:10:36.646 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:10:36.646 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:36.646 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:36.646 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58 00:10:36.646 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a' 00:10:36.646 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=: 00:10:36.646 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:36.646 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:36.913 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:10:36.913 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:10:36.913 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:10:36.913 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:36.913 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:36.913 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:10:36.913 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:10:36.913 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:10:36.913 16:19:03 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@24 -- # (( ll++ )) 00:10:36.913 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:36.913 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ g == \- ]] 00:10:36.913 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo 'ggJ9,UV0;@zhfHX-[r:M8' 00:10:36.913 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'ggJ9,UV0;@zhfHX-[r:M8' nqn.2016-06.io.spdk:cnode15212 00:10:36.913 [2024-06-07 16:19:03.647599] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode15212: invalid serial number 'ggJ9,UV0;@zhfHX-[r:M8' 00:10:36.913 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:10:36.913 { 00:10:36.913 "nqn": "nqn.2016-06.io.spdk:cnode15212", 00:10:36.913 "serial_number": "ggJ9,UV0;@zhfHX-[r:M8", 00:10:36.913 "method": "nvmf_create_subsystem", 00:10:36.913 "req_id": 1 00:10:36.913 } 00:10:36.913 Got JSON-RPC error response 00:10:36.913 response: 00:10:36.913 { 00:10:36.913 "code": -32602, 00:10:36.913 "message": "Invalid SN ggJ9,UV0;@zhfHX-[r:M8" 00:10:36.913 }' 00:10:36.913 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:10:36.913 { 00:10:36.913 "nqn": "nqn.2016-06.io.spdk:cnode15212", 00:10:36.913 "serial_number": "ggJ9,UV0;@zhfHX-[r:M8", 00:10:36.913 "method": "nvmf_create_subsystem", 00:10:36.913 "req_id": 1 00:10:36.913 } 00:10:36.913 Got JSON-RPC error response 00:10:36.913 response: 00:10:36.913 { 00:10:36.913 "code": -32602, 00:10:36.913 "message": "Invalid SN ggJ9,UV0;@zhfHX-[r:M8" 00:10:36.913 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:10:36.913 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:10:36.913 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:10:36.913 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' 
'38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:10:36.913 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:10:36.913 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:10:36.913 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:10:36.913 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:36.913 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:10:36.913 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:10:36.913 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:10:36.913 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:36.913 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:36.913 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:10:36.913 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:10:36.913 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 
00:10:36.913 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:36.913 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:36.913 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:10:36.913 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:10:36.913 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:10:36.913 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:36.913 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:36.913 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111 00:10:36.913 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f' 00:10:36.913 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=o 00:10:36.913 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:36.913 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:36.913 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:10:36.913 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:10:36.913 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:10:36.913 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:36.913 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:36.914 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:10:36.914 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:10:36.914 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:10:36.914 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:36.914 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:36.914 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 
00:10:36.914 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:10:36.914 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 00:10:36.914 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:36.914 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:36.914 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:10:36.914 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:10:36.914 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:10:36.914 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:36.914 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:36.914 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:10:36.914 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:10:36.914 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:10:36.914 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:36.914 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:36.914 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 00:10:36.914 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 00:10:36.914 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:10:36.914 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:36.914 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:36.914 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:10:36.914 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:10:36.914 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:10:36.914 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 
00:10:36.914 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:37.175 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:10:37.176 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:10:37.176 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:10:37.176 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:37.176 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:37.176 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:10:37.176 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:10:37.176 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:10:37.176 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:37.176 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:37.176 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:10:37.176 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:10:37.176 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:10:37.176 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:37.176 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:37.176 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:10:37.176 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:10:37.176 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 
00:10:37.176 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:37.176 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:37.176 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:10:37.176 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:10:37.176 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:10:37.176 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:37.176 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:37.176 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:10:37.176 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 00:10:37.176 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:10:37.176 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:37.176 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:37.176 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 40 00:10:37.176 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x28' 00:10:37.176 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='(' 00:10:37.176 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:37.176 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:37.176 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:10:37.176 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:10:37.176 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:10:37.176 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:37.176 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:37.176 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 
00:10:37.176 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:10:37.176 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:10:37.176 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:37.176 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:37.176 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:10:37.176 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:10:37.176 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:10:37.176 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:37.176 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:37.176 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 94 00:10:37.176 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5e' 00:10:37.176 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='^' 00:10:37.176 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:37.176 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:37.176 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:10:37.176 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:10:37.176 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:10:37.176 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:37.176 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:37.176 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:10:37.176 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:10:37.176 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:10:37.176 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 
00:10:37.176 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:37.176 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:10:37.176 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:10:37.176 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:10:37.176 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:37.176 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:37.176 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:10:37.176 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:10:37.176 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:10:37.176 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:37.176 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:37.176 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:10:37.176 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:10:37.176 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:10:37.176 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:37.176 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:37.176 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:10:37.176 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:10:37.176 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:10:37.176 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:37.176 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:37.176 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103 00:10:37.176 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67' 
00:10:37.176 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=g 00:10:37.176 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:37.176 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:37.176 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:10:37.176 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:10:37.176 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:10:37.176 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:37.176 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:37.176 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:10:37.176 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:10:37.176 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:10:37.176 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:37.176 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:37.176 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:10:37.176 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:10:37.176 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:10:37.176 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:37.176 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:37.176 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:10:37.176 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:10:37.176 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:10:37.176 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:37.176 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 
00:10:37.177 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:10:37.177 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:10:37.177 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:10:37.177 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:37.177 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:37.177 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:10:37.177 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:10:37.177 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:10:37.177 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:37.177 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:37.177 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:10:37.177 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:10:37.177 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:10:37.177 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:37.177 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:37.177 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:10:37.177 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:10:37.177 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:10:37.177 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:37.177 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:37.177 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:10:37.177 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:10:37.177 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 
00:10:37.177 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:37.177 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:37.177 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:10:37.177 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:10:37.177 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:10:37.177 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:37.177 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:37.177 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 00:10:37.177 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 00:10:37.177 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:10:37.177 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:37.177 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:37.177 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:10:37.177 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:10:37.177 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:10:37.177 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:37.177 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:37.177 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ t == \- ]] 00:10:37.177 16:19:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo 't.'\''o<7?|J{ /dev/null' 00:10:39.263 16:19:05 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:41.179 16:19:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:41.179 00:10:41.179 real 0m13.254s 00:10:41.179 user 0m19.296s 00:10:41.179 sys 0m6.107s 00:10:41.179 16:19:07 nvmf_tcp.nvmf_invalid 
-- common/autotest_common.sh@1125 -- # xtrace_disable 00:10:41.179 16:19:07 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:10:41.179 ************************************ 00:10:41.179 END TEST nvmf_invalid 00:10:41.179 ************************************ 00:10:41.441 16:19:08 nvmf_tcp -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:10:41.441 16:19:08 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:10:41.441 16:19:08 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:10:41.441 16:19:08 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:41.441 ************************************ 00:10:41.441 START TEST nvmf_abort 00:10:41.441 ************************************ 00:10:41.441 16:19:08 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:10:41.441 * Looking for test storage... 
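The character-by-character loop traced at the top of this section (target/invalid.sh, markers @24/@25) assembles a random string by converting a numeric code to hex with `printf %x` and expanding it back to a character with `echo -e '\x..'`, appending one character per iteration. A minimal standalone sketch of the same pattern follows; the length and the `RANDOM`-based code selection are illustrative assumptions, not the script's actual generator:

```shell
#!/usr/bin/env bash
# Build a string of $length random printable characters one at a time,
# mirroring the printf %x / echo -e pattern in the traced loop above.
length=8
string=
for (( ll = 0; ll < length; ll++ )); do
    code=$(( RANDOM % 94 + 33 ))     # random printable ASCII code, 33-126 (assumption)
    hex=$(printf %x "$code")         # e.g. 75 for 'u'
    ch=$(echo -e "\x$hex")           # expand the hex escape back to a character
    string+=$ch                      # same string+= accumulation as the log
done
echo "$string"
```

Strings built this way can contain shell metacharacters (`|`, `?`, `(`, `"` and so on, as seen in the trace), which is the point of the invalid-name test: the result must be quoted wherever it is passed on.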
00:10:41.441 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:41.441 16:19:08 nvmf_tcp.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:41.441 16:19:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:10:41.441 16:19:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:41.441 16:19:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:41.441 16:19:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:41.441 16:19:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:41.441 16:19:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:41.441 16:19:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:41.441 16:19:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:41.441 16:19:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:41.441 16:19:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:41.441 16:19:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:41.441 16:19:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:41.441 16:19:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:41.441 16:19:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:41.441 16:19:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:41.441 16:19:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:41.441 16:19:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:41.441 16:19:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:41.441 16:19:08 nvmf_tcp.nvmf_abort -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:41.441 16:19:08 nvmf_tcp.nvmf_abort -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:41.441 16:19:08 nvmf_tcp.nvmf_abort -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:41.441 16:19:08 nvmf_tcp.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:41.441 16:19:08 nvmf_tcp.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:41.441 16:19:08 nvmf_tcp.nvmf_abort -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:41.441 16:19:08 nvmf_tcp.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:10:41.442 16:19:08 nvmf_tcp.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:41.442 16:19:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@47 -- # : 0 00:10:41.442 16:19:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:41.442 16:19:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:41.442 16:19:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:41.442 16:19:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:41.442 16:19:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:41.442 16:19:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:41.442 16:19:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:41.442 16:19:08 nvmf_tcp.nvmf_abort -- 
nvmf/common.sh@51 -- # have_pci_nics=0 00:10:41.442 16:19:08 nvmf_tcp.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:41.442 16:19:08 nvmf_tcp.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:10:41.442 16:19:08 nvmf_tcp.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:10:41.442 16:19:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:41.442 16:19:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:41.442 16:19:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:41.442 16:19:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:41.442 16:19:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:41.442 16:19:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:41.442 16:19:08 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:41.442 16:19:08 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:41.442 16:19:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:41.442 16:19:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:41.442 16:19:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@285 -- # xtrace_disable 00:10:41.442 16:19:08 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:48.033 16:19:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:48.033 16:19:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # pci_devs=() 00:10:48.033 16:19:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:48.033 16:19:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:48.033 16:19:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:48.033 16:19:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:48.033 16:19:14 
nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:48.033 16:19:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # net_devs=() 00:10:48.033 16:19:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:48.033 16:19:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # e810=() 00:10:48.033 16:19:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # local -ga e810 00:10:48.033 16:19:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # x722=() 00:10:48.033 16:19:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # local -ga x722 00:10:48.033 16:19:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # mlx=() 00:10:48.033 16:19:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # local -ga mlx 00:10:48.033 16:19:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:48.033 16:19:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:48.033 16:19:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:48.033 16:19:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:48.033 16:19:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:48.033 16:19:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:48.033 16:19:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:48.033 16:19:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:48.033 16:19:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:48.033 16:19:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:48.033 16:19:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:48.033 16:19:14 nvmf_tcp.nvmf_abort -- 
nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:48.033 16:19:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:48.033 16:19:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:48.033 16:19:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:48.033 16:19:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:48.033 16:19:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:48.033 16:19:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:48.033 16:19:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:10:48.033 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:10:48.033 16:19:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:48.033 16:19:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:48.033 16:19:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:48.033 16:19:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:48.033 16:19:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:48.033 16:19:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:48.033 16:19:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:10:48.033 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:10:48.033 16:19:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:48.033 16:19:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:48.033 16:19:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:48.033 16:19:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:48.033 16:19:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:48.033 16:19:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@366 -- 
# (( 0 > 0 )) 00:10:48.033 16:19:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:48.033 16:19:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:48.033 16:19:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:48.033 16:19:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:48.033 16:19:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:48.033 16:19:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:48.033 16:19:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:48.033 16:19:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:48.033 16:19:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:48.033 16:19:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:10:48.033 Found net devices under 0000:4b:00.0: cvl_0_0 00:10:48.033 16:19:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:48.033 16:19:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:48.033 16:19:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:48.033 16:19:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:48.034 16:19:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:48.034 16:19:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:48.034 16:19:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:48.034 16:19:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:48.034 16:19:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:10:48.034 Found net devices under 
0000:4b:00.1: cvl_0_1 00:10:48.034 16:19:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:48.034 16:19:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:48.034 16:19:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # is_hw=yes 00:10:48.034 16:19:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:48.034 16:19:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:48.034 16:19:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:48.034 16:19:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:48.034 16:19:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:48.034 16:19:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:48.034 16:19:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:48.034 16:19:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:48.034 16:19:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:48.034 16:19:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:48.034 16:19:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:48.034 16:19:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:48.034 16:19:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:48.034 16:19:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:48.034 16:19:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:48.034 16:19:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:48.295 16:19:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:48.295 16:19:14 nvmf_tcp.nvmf_abort -- 
nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:48.295 16:19:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:48.295 16:19:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:48.295 16:19:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:48.295 16:19:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:48.295 16:19:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:48.295 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:48.295 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.714 ms 00:10:48.295 00:10:48.295 --- 10.0.0.2 ping statistics --- 00:10:48.295 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:48.295 rtt min/avg/max/mdev = 0.714/0.714/0.714/0.000 ms 00:10:48.295 16:19:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:48.557 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:48.557 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.351 ms 00:10:48.557 00:10:48.557 --- 10.0.0.1 ping statistics --- 00:10:48.557 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:48.557 rtt min/avg/max/mdev = 0.351/0.351/0.351/0.000 ms 00:10:48.557 16:19:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:48.557 16:19:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@422 -- # return 0 00:10:48.557 16:19:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:48.557 16:19:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:48.557 16:19:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:48.557 16:19:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:48.557 16:19:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:48.557 16:19:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:48.557 16:19:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:48.557 16:19:15 nvmf_tcp.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:10:48.557 16:19:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:48.557 16:19:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@723 -- # xtrace_disable 00:10:48.557 16:19:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:48.557 16:19:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@481 -- # nvmfpid=2962840 00:10:48.557 16:19:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@482 -- # waitforlisten 2962840 00:10:48.557 16:19:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:10:48.557 16:19:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@830 -- # '[' -z 2962840 ']' 00:10:48.557 16:19:15 nvmf_tcp.nvmf_abort -- 
common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:48.557 16:19:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@835 -- # local max_retries=100 00:10:48.557 16:19:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:48.557 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:48.557 16:19:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@839 -- # xtrace_disable 00:10:48.557 16:19:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:48.557 [2024-06-07 16:19:15.257233] Starting SPDK v24.09-pre git sha1 5a57befde / DPDK 24.03.0 initialization... 00:10:48.557 [2024-06-07 16:19:15.257302] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:48.557 EAL: No free 2048 kB hugepages reported on node 1 00:10:48.557 [2024-06-07 16:19:15.345216] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:48.818 [2024-06-07 16:19:15.439663] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:48.818 [2024-06-07 16:19:15.439723] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:48.818 [2024-06-07 16:19:15.439732] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:48.818 [2024-06-07 16:19:15.439739] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:48.818 [2024-06-07 16:19:15.439745] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:10:48.818 [2024-06-07 16:19:15.439883] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 2 00:10:48.818 [2024-06-07 16:19:15.440053] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:10:48.818 [2024-06-07 16:19:15.440053] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 3 00:10:49.389 16:19:16 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:10:49.389 16:19:16 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@863 -- # return 0 00:10:49.389 16:19:16 nvmf_tcp.nvmf_abort -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:49.389 16:19:16 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@729 -- # xtrace_disable 00:10:49.389 16:19:16 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:49.389 16:19:16 nvmf_tcp.nvmf_abort -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:49.389 16:19:16 nvmf_tcp.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:10:49.389 16:19:16 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:49.389 16:19:16 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:49.389 [2024-06-07 16:19:16.082986] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:49.389 16:19:16 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:49.389 16:19:16 nvmf_tcp.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:10:49.389 16:19:16 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:49.389 16:19:16 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:49.389 Malloc0 00:10:49.389 16:19:16 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:49.389 16:19:16 nvmf_tcp.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 
1000000 00:10:49.389 16:19:16 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:49.389 16:19:16 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:49.389 Delay0 00:10:49.389 16:19:16 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:49.389 16:19:16 nvmf_tcp.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:10:49.389 16:19:16 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:49.389 16:19:16 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:49.389 16:19:16 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:49.389 16:19:16 nvmf_tcp.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:10:49.389 16:19:16 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:49.389 16:19:16 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:49.389 16:19:16 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:49.389 16:19:16 nvmf_tcp.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:10:49.389 16:19:16 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:49.389 16:19:16 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:49.389 [2024-06-07 16:19:16.157901] tcp.c: 982:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:49.389 16:19:16 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:49.389 16:19:16 nvmf_tcp.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:49.389 16:19:16 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:49.389 16:19:16 nvmf_tcp.nvmf_abort -- 
common/autotest_common.sh@10 -- # set +x 00:10:49.389 16:19:16 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:49.390 16:19:16 nvmf_tcp.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:10:49.390 EAL: No free 2048 kB hugepages reported on node 1 00:10:49.649 [2024-06-07 16:19:16.308606] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:10:52.195 Initializing NVMe Controllers 00:10:52.195 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:10:52.195 controller IO queue size 128 less than required 00:10:52.195 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:10:52.195 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:10:52.195 Initialization complete. Launching workers. 
00:10:52.195 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 127, failed: 34310 00:10:52.195 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 34375, failed to submit 62 00:10:52.195 success 34314, unsuccess 61, failed 0 00:10:52.195 16:19:18 nvmf_tcp.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:10:52.195 16:19:18 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:52.195 16:19:18 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:52.195 16:19:18 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:52.195 16:19:18 nvmf_tcp.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:10:52.195 16:19:18 nvmf_tcp.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:10:52.195 16:19:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:52.195 16:19:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@117 -- # sync 00:10:52.195 16:19:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:52.195 16:19:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@120 -- # set +e 00:10:52.195 16:19:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:52.195 16:19:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:52.195 rmmod nvme_tcp 00:10:52.195 rmmod nvme_fabrics 00:10:52.195 rmmod nvme_keyring 00:10:52.195 16:19:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:52.195 16:19:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@124 -- # set -e 00:10:52.195 16:19:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@125 -- # return 0 00:10:52.195 16:19:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@489 -- # '[' -n 2962840 ']' 00:10:52.195 16:19:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@490 -- # killprocess 2962840 00:10:52.195 16:19:18 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@949 -- # '[' -z 2962840 ']' 00:10:52.195 16:19:18 
nvmf_tcp.nvmf_abort -- common/autotest_common.sh@953 -- # kill -0 2962840 00:10:52.195 16:19:18 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@954 -- # uname 00:10:52.195 16:19:18 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:10:52.195 16:19:18 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2962840 00:10:52.195 16:19:18 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:10:52.195 16:19:18 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:10:52.195 16:19:18 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2962840' 00:10:52.195 killing process with pid 2962840 00:10:52.195 16:19:18 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@968 -- # kill 2962840 00:10:52.195 16:19:18 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@973 -- # wait 2962840 00:10:52.195 16:19:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:52.195 16:19:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:52.195 16:19:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:52.195 16:19:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:52.195 16:19:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:52.195 16:19:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:52.195 16:19:18 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:52.195 16:19:18 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:54.109 16:19:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:54.109 00:10:54.109 real 0m12.755s 00:10:54.109 user 0m13.925s 00:10:54.109 sys 0m5.970s 00:10:54.109 16:19:20 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1125 -- # xtrace_disable 
00:10:54.109 16:19:20 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:54.109 ************************************ 00:10:54.109 END TEST nvmf_abort 00:10:54.109 ************************************ 00:10:54.109 16:19:20 nvmf_tcp -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:10:54.109 16:19:20 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:10:54.109 16:19:20 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:10:54.109 16:19:20 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:54.109 ************************************ 00:10:54.109 START TEST nvmf_ns_hotplug_stress 00:10:54.109 ************************************ 00:10:54.109 16:19:20 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:10:54.371 * Looking for test storage... 
00:10:54.371 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:54.371 16:19:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:54.371 16:19:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:10:54.371 16:19:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:54.371 16:19:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:54.371 16:19:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:54.371 16:19:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:54.371 16:19:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:54.371 16:19:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:54.371 16:19:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:54.371 16:19:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:54.371 16:19:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:54.371 16:19:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:54.371 16:19:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:54.371 16:19:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:54.371 16:19:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:54.371 16:19:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:54.371 16:19:21 nvmf_tcp.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:54.371 16:19:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:54.371 16:19:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:54.371 16:19:21 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:54.371 16:19:21 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:54.371 16:19:21 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:54.371 16:19:21 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:54.371 16:19:21 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:54.371 16:19:21 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:54.371 16:19:21 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:10:54.371 16:19:21 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:54.371 16:19:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@47 -- # : 0 00:10:54.371 16:19:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:54.371 16:19:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:54.371 16:19:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:54.371 16:19:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:54.371 16:19:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:54.371 16:19:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:54.371 16:19:21 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:54.371 16:19:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:54.371 16:19:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:54.371 16:19:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:10:54.371 16:19:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:54.371 16:19:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:54.371 16:19:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:54.371 16:19:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:54.371 16:19:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:54.371 16:19:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:54.371 16:19:21 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:54.371 16:19:21 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:54.371 16:19:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:54.371 16:19:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:54.371 16:19:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:10:54.371 16:19:21 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:11:02.513 16:19:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:02.513 16:19:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:11:02.513 16:19:27 nvmf_tcp.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@291 -- # local -a pci_devs 00:11:02.513 16:19:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:02.513 16:19:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:02.513 16:19:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:02.513 16:19:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:02.513 16:19:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # net_devs=() 00:11:02.513 16:19:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:02.513 16:19:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # e810=() 00:11:02.513 16:19:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # local -ga e810 00:11:02.513 16:19:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # x722=() 00:11:02.513 16:19:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # local -ga x722 00:11:02.513 16:19:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # mlx=() 00:11:02.513 16:19:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:11:02.513 16:19:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:02.513 16:19:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:02.513 16:19:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:02.513 16:19:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:02.513 16:19:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:02.513 16:19:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:02.513 16:19:27 nvmf_tcp.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:02.513 16:19:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:02.513 16:19:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:02.513 16:19:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:02.514 16:19:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:02.514 16:19:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:02.514 16:19:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:02.514 16:19:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:02.514 16:19:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:02.514 16:19:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:02.514 16:19:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:02.514 16:19:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:02.514 16:19:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:11:02.514 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:11:02.514 16:19:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:02.514 16:19:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:02.514 16:19:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:02.514 16:19:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:02.514 16:19:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:02.514 
16:19:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:02.514 16:19:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:11:02.514 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:11:02.514 16:19:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:02.514 16:19:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:02.514 16:19:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:02.514 16:19:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:02.514 16:19:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:02.514 16:19:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:02.514 16:19:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:02.514 16:19:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:02.514 16:19:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:02.514 16:19:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:02.514 16:19:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:02.514 16:19:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:02.514 16:19:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:02.514 16:19:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:02.514 16:19:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:02.514 16:19:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:11:02.514 
Found net devices under 0000:4b:00.0: cvl_0_0 00:11:02.514 16:19:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:02.514 16:19:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:02.514 16:19:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:02.514 16:19:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:02.514 16:19:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:02.514 16:19:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:02.514 16:19:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:02.514 16:19:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:02.514 16:19:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:11:02.514 Found net devices under 0000:4b:00.1: cvl_0_1 00:11:02.514 16:19:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:02.514 16:19:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:02.514 16:19:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:11:02.514 16:19:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:02.514 16:19:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:02.514 16:19:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:02.514 16:19:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:02.514 16:19:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:02.514 16:19:27 nvmf_tcp.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:02.514 16:19:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:02.514 16:19:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:02.514 16:19:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:02.514 16:19:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:02.514 16:19:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:02.514 16:19:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:02.514 16:19:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:02.514 16:19:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:02.514 16:19:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:02.514 16:19:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:02.514 16:19:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:02.514 16:19:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:02.514 16:19:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:02.514 16:19:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:02.514 16:19:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:02.514 16:19:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:02.514 16:19:28 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:02.514 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:02.514 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.508 ms 00:11:02.514 00:11:02.514 --- 10.0.0.2 ping statistics --- 00:11:02.514 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:02.514 rtt min/avg/max/mdev = 0.508/0.508/0.508/0.000 ms 00:11:02.514 16:19:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:02.514 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:02.514 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.259 ms 00:11:02.514 00:11:02.514 --- 10.0.0.1 ping statistics --- 00:11:02.514 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:02.514 rtt min/avg/max/mdev = 0.259/0.259/0.259/0.000 ms 00:11:02.514 16:19:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:02.514 16:19:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # return 0 00:11:02.514 16:19:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:02.514 16:19:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:02.514 16:19:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:02.514 16:19:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:02.514 16:19:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:02.514 16:19:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:02.514 16:19:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:02.514 16:19:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:11:02.514 16:19:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- 
# timing_enter start_nvmf_tgt 00:11:02.514 16:19:28 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@723 -- # xtrace_disable 00:11:02.514 16:19:28 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:11:02.514 16:19:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # nvmfpid=2967672 00:11:02.514 16:19:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # waitforlisten 2967672 00:11:02.514 16:19:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:11:02.514 16:19:28 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@830 -- # '[' -z 2967672 ']' 00:11:02.514 16:19:28 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:02.514 16:19:28 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # local max_retries=100 00:11:02.514 16:19:28 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:02.514 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:02.514 16:19:28 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # xtrace_disable 00:11:02.514 16:19:28 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:11:02.514 [2024-06-07 16:19:28.257620] Starting SPDK v24.09-pre git sha1 5a57befde / DPDK 24.03.0 initialization... 
00:11:02.514 [2024-06-07 16:19:28.257682] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:02.514 EAL: No free 2048 kB hugepages reported on node 1 00:11:02.514 [2024-06-07 16:19:28.346498] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:02.514 [2024-06-07 16:19:28.441803] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:02.514 [2024-06-07 16:19:28.441863] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:02.514 [2024-06-07 16:19:28.441871] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:02.514 [2024-06-07 16:19:28.441879] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:02.514 [2024-06-07 16:19:28.441884] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:11:02.514 [2024-06-07 16:19:28.442031] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 2
00:11:02.514 [2024-06-07 16:19:28.442195] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1
00:11:02.514 [2024-06-07 16:19:28.442196] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 3
00:11:02.514 16:19:29 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@859 -- # (( i == 0 ))
00:11:02.514 16:19:29 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@863 -- # return 0
00:11:02.515 16:19:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:11:02.515 16:19:29 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@729 -- # xtrace_disable
00:11:02.515 16:19:29 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
00:11:02.515 16:19:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:11:02.515 16:19:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000
00:11:02.515 16:19:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
00:11:02.515 [2024-06-07 16:19:29.211777] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:11:02.515 16:19:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:11:02.775 16:19:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:11:02.775 [2024-06-07 16:19:29.549224] tcp.c: 982:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:11:02.775 16:19:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:11:03.035 16:19:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0
00:11:03.295 Malloc0
00:11:03.295 16:19:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:11:03.295 Delay0
00:11:03.295 16:19:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:11:03.556 16:19:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512
00:11:03.556 NULL1
00:11:03.816 16:19:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
00:11:03.816 16:19:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=2968093
00:11:03.817 16:19:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2968093
00:11:03.817 16:19:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000
00:11:03.817 16:19:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:11:03.817 EAL: No free 2048 kB hugepages reported on node 1
00:11:04.077 16:19:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:11:04.077 16:19:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001
00:11:04.077 16:19:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001
00:11:04.375 [2024-06-07 16:19:31.052199] bdev.c:5000:_tmp_bdev_event_cb: *NOTICE*: Unexpected event type: 1
00:11:04.375 true
00:11:04.376 16:19:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2968093
00:11:04.376 16:19:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:11:04.650 16:19:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:11:04.650 16:19:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002
00:11:04.650 16:19:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002
00:11:04.911 true
00:11:04.911 16:19:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2968093
00:11:04.911 16:19:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:11:04.911 16:19:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:11:05.171 16:19:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003
00:11:05.171 16:19:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003
00:11:05.431 true
00:11:05.431 16:19:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2968093
00:11:05.431 16:19:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:11:05.431 16:19:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:11:05.692 16:19:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004
00:11:05.692 16:19:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004
00:11:05.951 true
00:11:05.951 16:19:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2968093
00:11:05.951 16:19:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:11:05.951 16:19:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:11:06.211 16:19:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005
00:11:06.211 16:19:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005
00:11:06.472 true
00:11:06.472 16:19:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2968093
00:11:06.472 16:19:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:11:06.472 16:19:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:11:06.732 16:19:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006
00:11:06.732 16:19:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006
00:11:06.993 true
00:11:06.993 16:19:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2968093
00:11:06.993 16:19:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:11:06.993 16:19:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:11:07.254 16:19:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007
00:11:07.254 16:19:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007
00:11:07.254 true
00:11:07.516 16:19:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2968093
00:11:07.516 16:19:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:11:07.516 16:19:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:11:07.776 16:19:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008
00:11:07.777 16:19:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008
00:11:07.777 true
00:11:07.777 16:19:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2968093
00:11:07.777 16:19:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:11:08.038 16:19:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:11:08.298 16:19:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009
00:11:08.298 16:19:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009
00:11:08.298 true
00:11:08.298 16:19:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2968093
00:11:08.298 16:19:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:11:09.240 Read completed with error (sct=0, sc=11)
00:11:09.240 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:11:09.240 16:19:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:11:09.240 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:11:09.501 16:19:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010
00:11:09.501 16:19:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010
00:11:09.501 true
00:11:09.501 16:19:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2968093
00:11:09.501 16:19:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:11:09.762 16:19:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:11:10.023 16:19:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011
00:11:10.023 16:19:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011
00:11:10.023 true
00:11:10.023 16:19:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2968093
00:11:10.023 16:19:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:11:10.284 16:19:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:11:10.545 16:19:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012
00:11:10.545 16:19:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012
00:11:10.545 true
00:11:10.545 16:19:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2968093
00:11:10.545 16:19:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:11:10.805 16:19:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:11:11.067 16:19:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013
00:11:11.067 16:19:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013
00:11:11.067 true
00:11:11.067 16:19:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2968093
00:11:11.067 16:19:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:11:11.328 16:19:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:11:11.589 16:19:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014
00:11:11.589 16:19:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014
00:11:11.589 true
00:11:11.589 16:19:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2968093
00:11:11.589 16:19:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:11:11.850 16:19:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:11:12.110 16:19:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015
00:11:12.110 16:19:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015
00:11:12.110 true
00:11:12.110 16:19:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2968093
00:11:12.110 16:19:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:11:12.371 16:19:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:11:12.371 16:19:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016
00:11:12.371 16:19:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016
00:11:12.632 true
00:11:12.632 16:19:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2968093
00:11:12.632 16:19:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:11:12.892 16:19:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:11:12.892 16:19:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017
00:11:12.892 16:19:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017
00:11:13.153 true
00:11:13.153 16:19:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2968093
00:11:13.153 16:19:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:11:13.412 16:19:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:11:13.412 16:19:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018
00:11:13.412 16:19:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018
00:11:13.672 true
00:11:13.672 16:19:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2968093
00:11:13.672 16:19:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:11:13.933 16:19:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:11:13.933 16:19:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019
00:11:13.933 16:19:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019
00:11:14.194 true
00:11:14.194 16:19:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2968093
00:11:14.194 16:19:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:11:14.455 16:19:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:11:14.455 16:19:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020
00:11:14.455 16:19:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020
00:11:14.716 true
00:11:14.716 16:19:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2968093
00:11:14.716 16:19:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:11:15.659 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:11:15.659 16:19:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:11:15.659 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:11:15.659 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:11:15.659 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:11:15.659 16:19:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021
00:11:15.659 16:19:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021
00:11:15.920 true
00:11:15.920 16:19:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2968093
00:11:15.920 16:19:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:11:16.862 16:19:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:11:16.862 16:19:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022
00:11:16.862 16:19:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022
00:11:17.123 true
00:11:17.123 16:19:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2968093
00:11:17.123 16:19:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:11:17.123 16:19:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:11:17.383 16:19:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023
00:11:17.383 16:19:44
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023
00:11:17.645 true
00:11:17.645 16:19:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2968093
00:11:17.645 16:19:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:11:17.645 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:11:17.645 16:19:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:11:17.645 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:11:17.931 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:11:17.932 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:11:17.932 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:11:17.932 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:11:17.932 [2024-06-07 16:19:44.612126] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[... identical ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd *ERROR* entries repeated for timestamps 16:19:44.612186 through 16:19:44.615491 ...]
00:11:17.933 [2024-06-07 16:19:44.615520] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd:
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.933 [2024-06-07 16:19:44.615548] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.933 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:11:17.933 [2024-06-07 16:19:44.615576] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.933 [2024-06-07 16:19:44.615606] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.933 [2024-06-07 16:19:44.615634] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.933 [2024-06-07 16:19:44.615666] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.933 [2024-06-07 16:19:44.615692] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.933 [2024-06-07 16:19:44.615721] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.933 [2024-06-07 16:19:44.615752] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.933 [2024-06-07 16:19:44.615777] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.933 [2024-06-07 16:19:44.615800] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.933 [2024-06-07 16:19:44.615831] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.933 [2024-06-07 16:19:44.615859] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.933 [2024-06-07 16:19:44.615886] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.933 [2024-06-07 16:19:44.615910] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.933 [2024-06-07 16:19:44.615940] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.933 [2024-06-07 16:19:44.615969] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.933 [2024-06-07 16:19:44.615997] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.933 [2024-06-07 16:19:44.616030] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.933 [2024-06-07 16:19:44.616055] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.933 [2024-06-07 16:19:44.616084] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.933 [2024-06-07 16:19:44.616605] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.933 [2024-06-07 16:19:44.616632] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.933 [2024-06-07 16:19:44.616655] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.933 [2024-06-07 16:19:44.616678] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.933 [2024-06-07 16:19:44.616701] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.933 [2024-06-07 16:19:44.616724] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.933 [2024-06-07 16:19:44.616750] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.933 [2024-06-07 16:19:44.616777] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:11:17.933 [2024-06-07 16:19:44.616808] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.933 [2024-06-07 16:19:44.616839] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.933 [2024-06-07 16:19:44.616867] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.933 [2024-06-07 16:19:44.616889] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.933 [2024-06-07 16:19:44.616912] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.933 [2024-06-07 16:19:44.616934] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.933 [2024-06-07 16:19:44.616958] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.933 [2024-06-07 16:19:44.616979] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.933 [2024-06-07 16:19:44.617004] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.933 [2024-06-07 16:19:44.617027] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.933 [2024-06-07 16:19:44.617050] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.933 [2024-06-07 16:19:44.617073] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.933 [2024-06-07 16:19:44.617096] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.933 [2024-06-07 16:19:44.617117] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.933 [2024-06-07 16:19:44.617141] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.933 [2024-06-07 16:19:44.617163] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.933 [2024-06-07 16:19:44.617186] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.933 [2024-06-07 16:19:44.617208] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.933 [2024-06-07 16:19:44.617230] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.933 [2024-06-07 16:19:44.617254] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.933 [2024-06-07 16:19:44.617277] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.933 [2024-06-07 16:19:44.617300] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.933 [2024-06-07 16:19:44.617322] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.933 [2024-06-07 16:19:44.617346] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.933 [2024-06-07 16:19:44.617369] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.933 [2024-06-07 16:19:44.617391] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.933 [2024-06-07 16:19:44.617416] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.933 [2024-06-07 16:19:44.617441] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.933 [2024-06-07 16:19:44.617464] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:11:17.933 [2024-06-07 16:19:44.617487] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.933 [2024-06-07 16:19:44.617510] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.933 [2024-06-07 16:19:44.617533] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.933 [2024-06-07 16:19:44.617555] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.933 [2024-06-07 16:19:44.617577] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.933 [2024-06-07 16:19:44.617600] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.933 [2024-06-07 16:19:44.617622] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.933 [2024-06-07 16:19:44.617647] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.933 [2024-06-07 16:19:44.617675] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.933 [2024-06-07 16:19:44.617702] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.933 [2024-06-07 16:19:44.617728] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.933 [2024-06-07 16:19:44.617757] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.933 [2024-06-07 16:19:44.617786] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.933 [2024-06-07 16:19:44.617812] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.933 [2024-06-07 
16:19:44.617836] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.933 [2024-06-07 16:19:44.617866] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.933 [2024-06-07 16:19:44.617893] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.933 [2024-06-07 16:19:44.617922] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.933 [2024-06-07 16:19:44.617951] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.933 [2024-06-07 16:19:44.617978] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.933 [2024-06-07 16:19:44.618185] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.933 [2024-06-07 16:19:44.618233] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.933 [2024-06-07 16:19:44.618263] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.933 [2024-06-07 16:19:44.618302] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.933 [2024-06-07 16:19:44.618329] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.934 [2024-06-07 16:19:44.618356] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.934 [2024-06-07 16:19:44.618386] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.934 [2024-06-07 16:19:44.618423] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.934 [2024-06-07 16:19:44.618452] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.934 [2024-06-07 16:19:44.618486] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.934 [2024-06-07 16:19:44.618516] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.934 [2024-06-07 16:19:44.618541] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.934 [2024-06-07 16:19:44.618569] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.934 [2024-06-07 16:19:44.618621] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.934 [2024-06-07 16:19:44.618651] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.934 [2024-06-07 16:19:44.618698] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.934 [2024-06-07 16:19:44.618725] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.934 [2024-06-07 16:19:44.618779] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.934 [2024-06-07 16:19:44.618807] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.934 [2024-06-07 16:19:44.618841] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.934 [2024-06-07 16:19:44.618869] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.934 [2024-06-07 16:19:44.618896] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.934 [2024-06-07 16:19:44.618923] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.934 
[2024-06-07 16:19:44.618954] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.934 [2024-06-07 16:19:44.618981] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.934 [2024-06-07 16:19:44.619009] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.934 [2024-06-07 16:19:44.619037] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.934 [2024-06-07 16:19:44.619066] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.934 [2024-06-07 16:19:44.619098] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.934 [2024-06-07 16:19:44.619125] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.934 [2024-06-07 16:19:44.619151] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.934 [2024-06-07 16:19:44.619174] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.934 [2024-06-07 16:19:44.619201] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.934 [2024-06-07 16:19:44.619226] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.934 [2024-06-07 16:19:44.619254] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.934 [2024-06-07 16:19:44.619282] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.934 [2024-06-07 16:19:44.619309] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.934 [2024-06-07 16:19:44.619336] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.934 [2024-06-07 16:19:44.619369] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.934 [2024-06-07 16:19:44.619397] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.934 [2024-06-07 16:19:44.619436] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.934 [2024-06-07 16:19:44.619463] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.934 [2024-06-07 16:19:44.619492] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.934 [2024-06-07 16:19:44.619518] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.934 [2024-06-07 16:19:44.619549] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.934 [2024-06-07 16:19:44.619574] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.934 [2024-06-07 16:19:44.619600] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.934 [2024-06-07 16:19:44.619906] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.934 [2024-06-07 16:19:44.619938] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.934 [2024-06-07 16:19:44.619968] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.934 [2024-06-07 16:19:44.620014] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.934 [2024-06-07 16:19:44.620045] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:11:17.934 [2024-06-07 16:19:44.620095] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.934 [2024-06-07 16:19:44.620122] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.934 [2024-06-07 16:19:44.620147] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.934 [2024-06-07 16:19:44.620173] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.934 [2024-06-07 16:19:44.620200] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.934 [2024-06-07 16:19:44.620229] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.934 [2024-06-07 16:19:44.620257] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.934 [2024-06-07 16:19:44.620284] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.934 [2024-06-07 16:19:44.620315] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.934 [2024-06-07 16:19:44.620363] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.934 [2024-06-07 16:19:44.620394] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.934 [2024-06-07 16:19:44.620448] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.934 [2024-06-07 16:19:44.620475] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.934 [2024-06-07 16:19:44.620504] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.934 [2024-06-07 16:19:44.620531] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.934 [2024-06-07 16:19:44.620566] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.934 [2024-06-07 16:19:44.620593] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.934 [2024-06-07 16:19:44.620644] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.934 [2024-06-07 16:19:44.620672] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.934 [2024-06-07 16:19:44.620698] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.934 [2024-06-07 16:19:44.620728] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.934 [2024-06-07 16:19:44.620753] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.934 [2024-06-07 16:19:44.620780] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.934 [2024-06-07 16:19:44.620805] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.934 [2024-06-07 16:19:44.620832] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.934 [2024-06-07 16:19:44.620862] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.934 [2024-06-07 16:19:44.620891] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.934 [2024-06-07 16:19:44.620919] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.934 [2024-06-07 16:19:44.620948] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:11:17.934 [2024-06-07 16:19:44.620979] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.934 [2024-06-07 16:19:44.621011] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.934 [2024-06-07 16:19:44.621042] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.934 [2024-06-07 16:19:44.621068] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.934 [2024-06-07 16:19:44.621097] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.934 [2024-06-07 16:19:44.621128] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.934 [2024-06-07 16:19:44.621154] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.934 [2024-06-07 16:19:44.621182] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.934 [2024-06-07 16:19:44.621210] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.934 [2024-06-07 16:19:44.621237] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.934 [2024-06-07 16:19:44.621266] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.934 [2024-06-07 16:19:44.621292] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.934 [2024-06-07 16:19:44.621320] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.934 [2024-06-07 16:19:44.621348] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.934 [2024-06-07 
16:19:44.621382] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.934 [2024-06-07 16:19:44.621417] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.934 [2024-06-07 16:19:44.621451] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.934 [2024-06-07 16:19:44.621480] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.934 [2024-06-07 16:19:44.621504] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.934 [2024-06-07 16:19:44.621532] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.934 [2024-06-07 16:19:44.621560] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.934 [2024-06-07 16:19:44.621589] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.934 [2024-06-07 16:19:44.621616] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.934 [2024-06-07 16:19:44.621644] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.934 [2024-06-07 16:19:44.621672] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.934 [2024-06-07 16:19:44.621700] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.934 [2024-06-07 16:19:44.621728] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.934 [2024-06-07 16:19:44.621754] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.934 [2024-06-07 16:19:44.621784] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.934 [2024-06-07 16:19:44.621810] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.934
[2024-06-07 16:19:44.621944 .. 16:19:44.632470] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 (message repeated; duplicates omitted)
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.936 [2024-06-07 16:19:44.632502] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.936 [2024-06-07 16:19:44.632528] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.936 [2024-06-07 16:19:44.632554] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.936 [2024-06-07 16:19:44.632584] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.936 [2024-06-07 16:19:44.632612] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.936 [2024-06-07 16:19:44.632636] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.936 [2024-06-07 16:19:44.632662] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.936 [2024-06-07 16:19:44.632688] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.936 [2024-06-07 16:19:44.632716] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.936 [2024-06-07 16:19:44.632743] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.936 [2024-06-07 16:19:44.632771] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.936 [2024-06-07 16:19:44.632797] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.936 [2024-06-07 16:19:44.632825] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.936 [2024-06-07 16:19:44.632850] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.936 
[2024-06-07 16:19:44.632881] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.936 [2024-06-07 16:19:44.632909] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.936 [2024-06-07 16:19:44.632936] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.936 [2024-06-07 16:19:44.632961] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.936 [2024-06-07 16:19:44.632984] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.936 [2024-06-07 16:19:44.633014] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.936 [2024-06-07 16:19:44.633040] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.936 [2024-06-07 16:19:44.633083] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.936 [2024-06-07 16:19:44.633109] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.936 [2024-06-07 16:19:44.633140] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.936 [2024-06-07 16:19:44.633168] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.936 [2024-06-07 16:19:44.633195] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.936 [2024-06-07 16:19:44.633224] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.936 [2024-06-07 16:19:44.633252] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.936 [2024-06-07 16:19:44.633280] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.936 [2024-06-07 16:19:44.633310] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.936 [2024-06-07 16:19:44.633344] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.936 [2024-06-07 16:19:44.633374] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.936 [2024-06-07 16:19:44.633532] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.936 [2024-06-07 16:19:44.633563] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.936 [2024-06-07 16:19:44.633587] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.936 [2024-06-07 16:19:44.633615] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.936 [2024-06-07 16:19:44.633641] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.936 [2024-06-07 16:19:44.633669] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.936 [2024-06-07 16:19:44.633698] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.936 [2024-06-07 16:19:44.633725] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.936 [2024-06-07 16:19:44.633756] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.936 [2024-06-07 16:19:44.633784] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.936 [2024-06-07 16:19:44.633810] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:11:17.936 [2024-06-07 16:19:44.633837] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.936 [2024-06-07 16:19:44.633862] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.936 [2024-06-07 16:19:44.633894] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.936 [2024-06-07 16:19:44.633921] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.936 [2024-06-07 16:19:44.633951] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.936 [2024-06-07 16:19:44.633978] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.936 [2024-06-07 16:19:44.634007] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.936 [2024-06-07 16:19:44.634036] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.936 [2024-06-07 16:19:44.634065] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.936 [2024-06-07 16:19:44.634096] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.936 [2024-06-07 16:19:44.634123] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.936 [2024-06-07 16:19:44.634157] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.936 [2024-06-07 16:19:44.634186] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.936 [2024-06-07 16:19:44.634217] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.936 [2024-06-07 16:19:44.634245] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.936 [2024-06-07 16:19:44.634298] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.936 [2024-06-07 16:19:44.634325] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.936 [2024-06-07 16:19:44.634355] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.936 [2024-06-07 16:19:44.634386] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.936 [2024-06-07 16:19:44.634415] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.936 [2024-06-07 16:19:44.634441] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.936 [2024-06-07 16:19:44.634467] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.936 [2024-06-07 16:19:44.634493] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.936 [2024-06-07 16:19:44.634518] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.936 [2024-06-07 16:19:44.634545] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.936 [2024-06-07 16:19:44.634573] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.936 [2024-06-07 16:19:44.634603] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.936 [2024-06-07 16:19:44.634637] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.936 [2024-06-07 16:19:44.634660] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:11:17.936 [2024-06-07 16:19:44.634694] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.936 [2024-06-07 16:19:44.634724] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.936 [2024-06-07 16:19:44.634752] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.936 [2024-06-07 16:19:44.634779] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.936 [2024-06-07 16:19:44.634813] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.936 [2024-06-07 16:19:44.634844] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.936 [2024-06-07 16:19:44.634879] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.936 [2024-06-07 16:19:44.634906] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.936 [2024-06-07 16:19:44.634931] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.936 [2024-06-07 16:19:44.634957] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.936 [2024-06-07 16:19:44.634984] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.936 [2024-06-07 16:19:44.635010] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.936 [2024-06-07 16:19:44.635034] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.936 [2024-06-07 16:19:44.635061] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.936 [2024-06-07 
16:19:44.635089] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.936 [2024-06-07 16:19:44.635116] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.936 [2024-06-07 16:19:44.635142] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.936 [2024-06-07 16:19:44.635175] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.936 [2024-06-07 16:19:44.635205] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.936 [2024-06-07 16:19:44.635236] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.936 [2024-06-07 16:19:44.635263] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.936 [2024-06-07 16:19:44.635286] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.936 [2024-06-07 16:19:44.635319] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.936 [2024-06-07 16:19:44.635352] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.936 [2024-06-07 16:19:44.635717] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.936 [2024-06-07 16:19:44.635745] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.936 [2024-06-07 16:19:44.635768] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.936 [2024-06-07 16:19:44.635791] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.936 [2024-06-07 16:19:44.635813] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.936 [2024-06-07 16:19:44.635836] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.936 [2024-06-07 16:19:44.635858] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.936 [2024-06-07 16:19:44.635881] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.936 [2024-06-07 16:19:44.635903] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.936 [2024-06-07 16:19:44.635927] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.936 [2024-06-07 16:19:44.635949] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.936 [2024-06-07 16:19:44.635971] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.936 [2024-06-07 16:19:44.635999] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.936 [2024-06-07 16:19:44.636025] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.936 [2024-06-07 16:19:44.636053] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.936 [2024-06-07 16:19:44.636085] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.936 [2024-06-07 16:19:44.636113] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.936 [2024-06-07 16:19:44.636139] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.936 [2024-06-07 16:19:44.636167] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.936 
[2024-06-07 16:19:44.636189] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.937 [2024-06-07 16:19:44.636218] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.937 [2024-06-07 16:19:44.636247] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.937 [2024-06-07 16:19:44.636279] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.937 [2024-06-07 16:19:44.636309] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.937 [2024-06-07 16:19:44.636342] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.937 [2024-06-07 16:19:44.636373] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.937 [2024-06-07 16:19:44.636408] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.937 [2024-06-07 16:19:44.636434] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.937 [2024-06-07 16:19:44.636456] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.937 [2024-06-07 16:19:44.636479] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.937 [2024-06-07 16:19:44.636503] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.937 [2024-06-07 16:19:44.636525] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.937 [2024-06-07 16:19:44.636548] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.937 [2024-06-07 16:19:44.636570] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.937 [2024-06-07 16:19:44.636593] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.937 [2024-06-07 16:19:44.636616] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.937 [2024-06-07 16:19:44.636639] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.937 [2024-06-07 16:19:44.636661] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.937 [2024-06-07 16:19:44.636684] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.937 [2024-06-07 16:19:44.636707] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.937 [2024-06-07 16:19:44.636729] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.937 [2024-06-07 16:19:44.636751] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.937 [2024-06-07 16:19:44.636773] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.937 [2024-06-07 16:19:44.636796] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.937 [2024-06-07 16:19:44.636819] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.937 [2024-06-07 16:19:44.636842] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.937 [2024-06-07 16:19:44.636866] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.937 [2024-06-07 16:19:44.636888] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:11:17.937 [2024-06-07 16:19:44.636910] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.937 [2024-06-07 16:19:44.636933] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.937 [2024-06-07 16:19:44.636956] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.937 [2024-06-07 16:19:44.636980] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.937 [2024-06-07 16:19:44.637004] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.937 [2024-06-07 16:19:44.637027] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.937 [2024-06-07 16:19:44.637050] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.937 [2024-06-07 16:19:44.637074] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.937 [2024-06-07 16:19:44.637103] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.937 [2024-06-07 16:19:44.637128] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.937 [2024-06-07 16:19:44.637154] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.937 [2024-06-07 16:19:44.637183] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.937 [2024-06-07 16:19:44.637213] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.937 [2024-06-07 16:19:44.637243] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.937 [2024-06-07 16:19:44.637274] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.937 [2024-06-07 16:19:44.637637] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.937 [2024-06-07 16:19:44.637669] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.937 [2024-06-07 16:19:44.637696] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.937 [2024-06-07 16:19:44.637726] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.937 [2024-06-07 16:19:44.637756] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.937 [2024-06-07 16:19:44.637785] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.937 [2024-06-07 16:19:44.637819] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.937 [2024-06-07 16:19:44.637848] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.937 [2024-06-07 16:19:44.637881] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.937 [2024-06-07 16:19:44.637908] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.937 [2024-06-07 16:19:44.637937] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.937 [2024-06-07 16:19:44.637965] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.937 [2024-06-07 16:19:44.637991] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.937 [2024-06-07 16:19:44.638018] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:11:17.937 [2024-06-07 16:19:44.638050] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.937 [2024-06-07 16:19:44.638084] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.937 [2024-06-07 16:19:44.638115] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.937 [2024-06-07 16:19:44.638417] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.937 [2024-06-07 16:19:44.638446] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.937 [2024-06-07 16:19:44.638472] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.937 [2024-06-07 16:19:44.638501] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.937 [2024-06-07 16:19:44.638527] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.937 [2024-06-07 16:19:44.638555] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.937 [2024-06-07 16:19:44.638582] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.937 [2024-06-07 16:19:44.638611] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.937 [2024-06-07 16:19:44.638642] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.937 [2024-06-07 16:19:44.638672] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.937 [2024-06-07 16:19:44.638698] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.937 [2024-06-07 
16:19:44.638727] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.937 [2024-06-07 16:19:44.638756] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.937
[... identical *ERROR* entry repeated for every in-flight read, timestamps 16:19:44.638788 through 16:19:44.643525 ...]
16:19:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:11:17.938
[... identical *ERROR* entry repeated, timestamps 16:19:44.643554 through 16:19:44.643888 ...]
16:19:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:11:17.938
[... identical *ERROR* entry repeated, timestamps 16:19:44.643910 through 16:19:44.648966 ...]
Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:11:17.939 [2024-06-07 16:19:44.648993] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.939 [2024-06-07 16:19:44.649026] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd:
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.939 [2024-06-07 16:19:44.649056] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.939 [2024-06-07 16:19:44.649082] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.939 [2024-06-07 16:19:44.649113] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.939 [2024-06-07 16:19:44.649139] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.939 [2024-06-07 16:19:44.649165] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.939 [2024-06-07 16:19:44.649189] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.939 [2024-06-07 16:19:44.649212] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.939 [2024-06-07 16:19:44.649236] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.939 [2024-06-07 16:19:44.649258] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.939 [2024-06-07 16:19:44.649288] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.939 [2024-06-07 16:19:44.649318] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.939 [2024-06-07 16:19:44.649348] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.939 [2024-06-07 16:19:44.649379] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.939 [2024-06-07 16:19:44.649412] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.939 
[2024-06-07 16:19:44.649444] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.939 [2024-06-07 16:19:44.649470] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.939 [2024-06-07 16:19:44.649499] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.939 [2024-06-07 16:19:44.649526] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.939 [2024-06-07 16:19:44.649562] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.939 [2024-06-07 16:19:44.649593] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.939 [2024-06-07 16:19:44.649624] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.939 [2024-06-07 16:19:44.649653] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.939 [2024-06-07 16:19:44.649683] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.939 [2024-06-07 16:19:44.649709] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.939 [2024-06-07 16:19:44.649740] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.939 [2024-06-07 16:19:44.649771] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.939 [2024-06-07 16:19:44.649802] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.939 [2024-06-07 16:19:44.649833] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.939 [2024-06-07 16:19:44.649862] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.939 [2024-06-07 16:19:44.649893] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.939 [2024-06-07 16:19:44.649921] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.939 [2024-06-07 16:19:44.649949] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.939 [2024-06-07 16:19:44.649978] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.939 [2024-06-07 16:19:44.650002] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.939 [2024-06-07 16:19:44.650025] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.939 [2024-06-07 16:19:44.650048] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.939 [2024-06-07 16:19:44.650071] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.939 [2024-06-07 16:19:44.650095] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.939 [2024-06-07 16:19:44.650118] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.939 [2024-06-07 16:19:44.650141] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.939 [2024-06-07 16:19:44.650166] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.939 [2024-06-07 16:19:44.650188] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.939 [2024-06-07 16:19:44.650211] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:11:17.939 [2024-06-07 16:19:44.650234] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.939 [2024-06-07 16:19:44.650257] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.939 [2024-06-07 16:19:44.650280] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.939 [2024-06-07 16:19:44.650302] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.939 [2024-06-07 16:19:44.650325] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.939 [2024-06-07 16:19:44.650348] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.939 [2024-06-07 16:19:44.650384] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.939 [2024-06-07 16:19:44.650420] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.939 [2024-06-07 16:19:44.650452] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.939 [2024-06-07 16:19:44.650481] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.939 [2024-06-07 16:19:44.650849] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.939 [2024-06-07 16:19:44.650880] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.939 [2024-06-07 16:19:44.650908] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.939 [2024-06-07 16:19:44.650942] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.939 [2024-06-07 16:19:44.650970] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.939 [2024-06-07 16:19:44.650997] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.939 [2024-06-07 16:19:44.651027] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.939 [2024-06-07 16:19:44.651063] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.939 [2024-06-07 16:19:44.651090] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.939 [2024-06-07 16:19:44.651122] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.939 [2024-06-07 16:19:44.651153] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.939 [2024-06-07 16:19:44.651179] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.939 [2024-06-07 16:19:44.651206] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.939 [2024-06-07 16:19:44.651241] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.939 [2024-06-07 16:19:44.651270] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.939 [2024-06-07 16:19:44.651303] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.939 [2024-06-07 16:19:44.651333] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.939 [2024-06-07 16:19:44.651359] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.939 [2024-06-07 16:19:44.651388] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:11:17.939 [2024-06-07 16:19:44.651425] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.939 [2024-06-07 16:19:44.651453] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.939 [2024-06-07 16:19:44.651483] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.939 [2024-06-07 16:19:44.651514] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.939 [2024-06-07 16:19:44.651550] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.939 [2024-06-07 16:19:44.651577] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.939 [2024-06-07 16:19:44.651614] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.939 [2024-06-07 16:19:44.651641] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.939 [2024-06-07 16:19:44.651685] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.939 [2024-06-07 16:19:44.651715] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.939 [2024-06-07 16:19:44.651770] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.939 [2024-06-07 16:19:44.651796] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.939 [2024-06-07 16:19:44.651853] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.939 [2024-06-07 16:19:44.651880] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.939 [2024-06-07 
16:19:44.651914] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.939 [2024-06-07 16:19:44.652223] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.939 [2024-06-07 16:19:44.652253] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.939 [2024-06-07 16:19:44.652280] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.939 [2024-06-07 16:19:44.652307] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.939 [2024-06-07 16:19:44.652335] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.939 [2024-06-07 16:19:44.652367] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.939 [2024-06-07 16:19:44.652404] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.939 [2024-06-07 16:19:44.652442] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.939 [2024-06-07 16:19:44.652476] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.939 [2024-06-07 16:19:44.652509] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.939 [2024-06-07 16:19:44.652543] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.939 [2024-06-07 16:19:44.652576] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.939 [2024-06-07 16:19:44.652603] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.939 [2024-06-07 16:19:44.652632] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.939 [2024-06-07 16:19:44.652662] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.939 [2024-06-07 16:19:44.652689] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.939 [2024-06-07 16:19:44.652718] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.939 [2024-06-07 16:19:44.652747] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.939 [2024-06-07 16:19:44.652777] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.939 [2024-06-07 16:19:44.652805] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.939 [2024-06-07 16:19:44.652834] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.939 [2024-06-07 16:19:44.652862] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.939 [2024-06-07 16:19:44.652890] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.939 [2024-06-07 16:19:44.652918] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.939 [2024-06-07 16:19:44.652947] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.939 [2024-06-07 16:19:44.652981] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.939 [2024-06-07 16:19:44.653009] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.939 [2024-06-07 16:19:44.653038] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.939 
[2024-06-07 16:19:44.653067] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.939 [2024-06-07 16:19:44.653284] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.939 [2024-06-07 16:19:44.653315] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.939 [2024-06-07 16:19:44.653346] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.939 [2024-06-07 16:19:44.653378] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.939 [2024-06-07 16:19:44.653409] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.939 [2024-06-07 16:19:44.653438] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.939 [2024-06-07 16:19:44.653467] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.939 [2024-06-07 16:19:44.653499] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.939 [2024-06-07 16:19:44.653527] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.939 [2024-06-07 16:19:44.653555] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.939 [2024-06-07 16:19:44.653583] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.939 [2024-06-07 16:19:44.653614] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.939 [2024-06-07 16:19:44.653642] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.939 [2024-06-07 16:19:44.653675] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.939 [2024-06-07 16:19:44.653702] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.939 [2024-06-07 16:19:44.653730] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.939 [2024-06-07 16:19:44.653757] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.939 [2024-06-07 16:19:44.653785] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.939 [2024-06-07 16:19:44.653811] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.939 [2024-06-07 16:19:44.653840] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.939 [2024-06-07 16:19:44.653869] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.939 [2024-06-07 16:19:44.653899] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.939 [2024-06-07 16:19:44.653927] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.939 [2024-06-07 16:19:44.653954] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.939 [2024-06-07 16:19:44.653983] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.939 [2024-06-07 16:19:44.654011] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.939 [2024-06-07 16:19:44.654037] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.939 [2024-06-07 16:19:44.654064] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:11:17.939 [2024-06-07 16:19:44.654093] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.939 [2024-06-07 16:19:44.654120] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.939 [2024-06-07 16:19:44.654154] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.939 [2024-06-07 16:19:44.654187] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.939 [2024-06-07 16:19:44.654220] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.939 [2024-06-07 16:19:44.654249] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.939 [2024-06-07 16:19:44.654274] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.939 [2024-06-07 16:19:44.654303] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.939 [2024-06-07 16:19:44.654330] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.939 [2024-06-07 16:19:44.654357] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.939 [2024-06-07 16:19:44.654383] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.939 [2024-06-07 16:19:44.654414] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.939 [2024-06-07 16:19:44.654444] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.939 [2024-06-07 16:19:44.654474] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.939 [2024-06-07 16:19:44.654504] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.939 [2024-06-07 16:19:44.654531] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.939 [2024-06-07 16:19:44.654556] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.939 [2024-06-07 16:19:44.654584] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.939 [2024-06-07 16:19:44.654608] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.939 [2024-06-07 16:19:44.654640] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.939 [2024-06-07 16:19:44.654668] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.940 [2024-06-07 16:19:44.654696] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.940 [2024-06-07 16:19:44.654727] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.940 [2024-06-07 16:19:44.654752] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.940 [2024-06-07 16:19:44.654782] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.940 [2024-06-07 16:19:44.654809] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.940 [2024-06-07 16:19:44.654842] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.940 [2024-06-07 16:19:44.654866] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.940 [2024-06-07 16:19:44.654896] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:11:17.940 [2024-06-07 16:19:44.654927] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.940 [2024-06-07 16:19:44.654959] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.940 [2024-06-07 16:19:44.654992] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.940 [2024-06-07 16:19:44.655019] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.940 [2024-06-07 16:19:44.655048] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.940 [2024-06-07 16:19:44.655072] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.940 [2024-06-07 16:19:44.655103] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.940 [2024-06-07 16:19:44.655381] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.940 [2024-06-07 16:19:44.655414] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.940 [2024-06-07 16:19:44.655444] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.940 [2024-06-07 16:19:44.655473] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.940 [2024-06-07 16:19:44.655502] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.940 [2024-06-07 16:19:44.655533] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.940 [2024-06-07 16:19:44.655561] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.940 [2024-06-07 
16:19:44.655590] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.940
16:19:44.666532] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.941 [2024-06-07 16:19:44.666564] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.941 [2024-06-07 16:19:44.666597] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.941 [2024-06-07 16:19:44.666627] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.941 [2024-06-07 16:19:44.666657] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.941 [2024-06-07 16:19:44.666683] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.941 [2024-06-07 16:19:44.666712] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.941 [2024-06-07 16:19:44.666740] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.941 [2024-06-07 16:19:44.666769] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.941 [2024-06-07 16:19:44.666796] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.941 [2024-06-07 16:19:44.666826] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.941 [2024-06-07 16:19:44.666851] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.941 [2024-06-07 16:19:44.666884] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.941 [2024-06-07 16:19:44.666911] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.941 [2024-06-07 16:19:44.666941] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.941 [2024-06-07 16:19:44.666973] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.941 [2024-06-07 16:19:44.666999] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.941 [2024-06-07 16:19:44.667028] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.941 [2024-06-07 16:19:44.667058] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.941 [2024-06-07 16:19:44.667089] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.941 [2024-06-07 16:19:44.667119] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.941 [2024-06-07 16:19:44.667146] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.941 [2024-06-07 16:19:44.667183] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.941 [2024-06-07 16:19:44.667213] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.941 [2024-06-07 16:19:44.667242] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.942 [2024-06-07 16:19:44.667272] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.942 [2024-06-07 16:19:44.667300] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.942 [2024-06-07 16:19:44.667323] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.942 [2024-06-07 16:19:44.667353] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.942 
[2024-06-07 16:19:44.667382] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.942 [2024-06-07 16:19:44.667415] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.942 [2024-06-07 16:19:44.667450] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.942 [2024-06-07 16:19:44.667487] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.942 [2024-06-07 16:19:44.667519] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.942 [2024-06-07 16:19:44.667553] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.942 [2024-06-07 16:19:44.667576] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.942 [2024-06-07 16:19:44.667608] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.942 [2024-06-07 16:19:44.667635] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.942 [2024-06-07 16:19:44.667669] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.942 [2024-06-07 16:19:44.667696] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.942 [2024-06-07 16:19:44.667747] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.942 [2024-06-07 16:19:44.667777] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.942 [2024-06-07 16:19:44.667808] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.942 [2024-06-07 16:19:44.667838] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.942 [2024-06-07 16:19:44.667871] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.942 [2024-06-07 16:19:44.667900] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.942 [2024-06-07 16:19:44.667933] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.942 [2024-06-07 16:19:44.667961] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.942 [2024-06-07 16:19:44.667993] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.942 [2024-06-07 16:19:44.668019] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.942 [2024-06-07 16:19:44.668046] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.942 [2024-06-07 16:19:44.668073] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.942 [2024-06-07 16:19:44.668103] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.942 [2024-06-07 16:19:44.668130] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.942 [2024-06-07 16:19:44.668156] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.942 [2024-06-07 16:19:44.668186] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.942 [2024-06-07 16:19:44.668216] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.942 [2024-06-07 16:19:44.668247] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:11:17.942 [2024-06-07 16:19:44.668656] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.942 [2024-06-07 16:19:44.668680] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.942 [2024-06-07 16:19:44.668704] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.942 [2024-06-07 16:19:44.668727] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.942 [2024-06-07 16:19:44.668751] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.942 [2024-06-07 16:19:44.668774] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.942 [2024-06-07 16:19:44.668797] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.942 [2024-06-07 16:19:44.668821] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.942 [2024-06-07 16:19:44.668850] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.942 [2024-06-07 16:19:44.668879] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.942 [2024-06-07 16:19:44.668904] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.942 [2024-06-07 16:19:44.668936] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.942 [2024-06-07 16:19:44.668964] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.942 [2024-06-07 16:19:44.668996] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.942 [2024-06-07 16:19:44.669029] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.942 [2024-06-07 16:19:44.669081] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.942 [2024-06-07 16:19:44.669111] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.942 [2024-06-07 16:19:44.669159] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.942 [2024-06-07 16:19:44.669186] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.942 [2024-06-07 16:19:44.669215] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.942 [2024-06-07 16:19:44.669246] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.942 [2024-06-07 16:19:44.669275] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.942 [2024-06-07 16:19:44.669302] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.942 [2024-06-07 16:19:44.669333] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.942 [2024-06-07 16:19:44.669362] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.942 [2024-06-07 16:19:44.669392] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.942 [2024-06-07 16:19:44.669424] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.942 [2024-06-07 16:19:44.669450] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.942 [2024-06-07 16:19:44.669479] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:11:17.942 [2024-06-07 16:19:44.669512] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.942 [2024-06-07 16:19:44.669542] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.942 [2024-06-07 16:19:44.669571] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.942 [2024-06-07 16:19:44.669601] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.942 [2024-06-07 16:19:44.669629] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.942 [2024-06-07 16:19:44.669657] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.942 [2024-06-07 16:19:44.669685] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.942 [2024-06-07 16:19:44.669714] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.942 [2024-06-07 16:19:44.669741] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.942 [2024-06-07 16:19:44.669770] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.942 [2024-06-07 16:19:44.669798] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.942 [2024-06-07 16:19:44.669825] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.942 [2024-06-07 16:19:44.669852] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.942 [2024-06-07 16:19:44.669878] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.942 [2024-06-07 
16:19:44.669904] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.942 [2024-06-07 16:19:44.669934] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.942 [2024-06-07 16:19:44.669966] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.942 [2024-06-07 16:19:44.669995] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.942 [2024-06-07 16:19:44.670024] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.942 [2024-06-07 16:19:44.670052] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.942 [2024-06-07 16:19:44.670079] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.942 [2024-06-07 16:19:44.670110] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.942 [2024-06-07 16:19:44.670137] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.942 [2024-06-07 16:19:44.670166] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.942 [2024-06-07 16:19:44.670200] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.942 [2024-06-07 16:19:44.670229] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.942 [2024-06-07 16:19:44.670256] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.942 [2024-06-07 16:19:44.670285] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.942 [2024-06-07 16:19:44.670314] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.942 [2024-06-07 16:19:44.670343] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.942 [2024-06-07 16:19:44.670373] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.942 [2024-06-07 16:19:44.670400] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.942 [2024-06-07 16:19:44.670431] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.942 [2024-06-07 16:19:44.670469] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.942 [2024-06-07 16:19:44.670505] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.942 [2024-06-07 16:19:44.670877] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.942 [2024-06-07 16:19:44.670909] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.942 [2024-06-07 16:19:44.670943] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.942 [2024-06-07 16:19:44.670966] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.942 [2024-06-07 16:19:44.670997] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.942 [2024-06-07 16:19:44.671023] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.942 [2024-06-07 16:19:44.671050] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.942 [2024-06-07 16:19:44.671079] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.942 
[2024-06-07 16:19:44.671112] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.942 [2024-06-07 16:19:44.671139] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.942 [2024-06-07 16:19:44.671166] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.942 [2024-06-07 16:19:44.671197] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.942 [2024-06-07 16:19:44.671224] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.942 [2024-06-07 16:19:44.671253] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.942 [2024-06-07 16:19:44.671280] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.942 [2024-06-07 16:19:44.671310] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.942 [2024-06-07 16:19:44.671343] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.942 [2024-06-07 16:19:44.671372] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.942 [2024-06-07 16:19:44.671400] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.942 [2024-06-07 16:19:44.671445] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.942 [2024-06-07 16:19:44.671477] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.942 [2024-06-07 16:19:44.671529] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.942 [2024-06-07 16:19:44.671558] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.942 [2024-06-07 16:19:44.671608] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.942 [2024-06-07 16:19:44.671637] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.942 [2024-06-07 16:19:44.671675] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.942 [2024-06-07 16:19:44.671705] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.942 [2024-06-07 16:19:44.671742] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.942 [2024-06-07 16:19:44.671771] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.942 [2024-06-07 16:19:44.671809] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.942 [2024-06-07 16:19:44.671836] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.942 [2024-06-07 16:19:44.671866] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.942 [2024-06-07 16:19:44.671895] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.942 [2024-06-07 16:19:44.671930] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.942 [2024-06-07 16:19:44.671956] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.942 [2024-06-07 16:19:44.671984] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.942 [2024-06-07 16:19:44.672013] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:11:17.942 [2024-06-07 16:19:44.672042] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.942 [2024-06-07 16:19:44.672071] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.942 [2024-06-07 16:19:44.672099] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.942 [2024-06-07 16:19:44.672127] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.942 [2024-06-07 16:19:44.672156] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.942 [2024-06-07 16:19:44.672183] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.942 [2024-06-07 16:19:44.672215] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.942 [2024-06-07 16:19:44.672244] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.942 [2024-06-07 16:19:44.672274] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.942 [2024-06-07 16:19:44.672304] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.942 [2024-06-07 16:19:44.672339] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.942 [2024-06-07 16:19:44.672368] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.942 [2024-06-07 16:19:44.672412] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.942 [2024-06-07 16:19:44.672440] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.942 [2024-06-07 16:19:44.672469] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.942 [2024-06-07 16:19:44.672496] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.942 [2024-06-07 16:19:44.672523] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.942 [2024-06-07 16:19:44.672550] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.942 [2024-06-07 16:19:44.672576] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.942 [2024-06-07 16:19:44.672602] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.942 [2024-06-07 16:19:44.672627] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.942 [2024-06-07 16:19:44.672654] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.942 [2024-06-07 16:19:44.672681] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.942 [2024-06-07 16:19:44.672712] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.942 [2024-06-07 16:19:44.672740] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.942 [2024-06-07 16:19:44.672766] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.942 [2024-06-07 16:19:44.673108] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.942 [2024-06-07 16:19:44.673144] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.942 [2024-06-07 16:19:44.673172] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:11:17.942 [2024-06-07 16:19:44.673200] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 [identical error repeated for each request; entries from 16:19:44.673227 through 16:19:44.683228 elided] 00:11:17.944 [2024-06-07 16:19:44.683256] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:11:17.944 [2024-06-07 16:19:44.683286] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.944 [2024-06-07 16:19:44.683314] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.944 [2024-06-07 16:19:44.683343] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.944 [2024-06-07 16:19:44.683370] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.944 [2024-06-07 16:19:44.683397] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.944 [2024-06-07 16:19:44.683432] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.944 [2024-06-07 16:19:44.683461] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.944 [2024-06-07 16:19:44.683489] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.944 [2024-06-07 16:19:44.683516] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.944 [2024-06-07 16:19:44.683542] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.944 [2024-06-07 16:19:44.683569] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.944 [2024-06-07 16:19:44.683597] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.944 [2024-06-07 16:19:44.683912] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.944 [2024-06-07 16:19:44.683940] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.944 Message suppressed 999 
times: Read completed with error (sct=0, sc=15) 00:11:17.944 [2024-06-07 16:19:44.683969] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.944 [2024-06-07 16:19:44.683998] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.944 [2024-06-07 16:19:44.684032] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.944 [2024-06-07 16:19:44.684071] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.944 [2024-06-07 16:19:44.684102] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.944 [2024-06-07 16:19:44.684138] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.944 [2024-06-07 16:19:44.684164] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.944 [2024-06-07 16:19:44.684190] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.944 [2024-06-07 16:19:44.684225] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.944 [2024-06-07 16:19:44.684252] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.944 [2024-06-07 16:19:44.684286] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.944 [2024-06-07 16:19:44.684314] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.944 [2024-06-07 16:19:44.684370] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.944 [2024-06-07 16:19:44.684399] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.944 
[2024-06-07 16:19:44.684433] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.944 [2024-06-07 16:19:44.684461] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.944 [2024-06-07 16:19:44.684492] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.944 [2024-06-07 16:19:44.684521] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.944 [2024-06-07 16:19:44.684551] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.944 [2024-06-07 16:19:44.684580] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.944 [2024-06-07 16:19:44.684609] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.944 [2024-06-07 16:19:44.684640] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.944 [2024-06-07 16:19:44.684669] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.944 [2024-06-07 16:19:44.684697] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.944 [2024-06-07 16:19:44.684725] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.944 [2024-06-07 16:19:44.684752] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.944 [2024-06-07 16:19:44.684783] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.944 [2024-06-07 16:19:44.684812] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.944 [2024-06-07 16:19:44.684858] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.944 [2024-06-07 16:19:44.684887] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.944 [2024-06-07 16:19:44.684922] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.944 [2024-06-07 16:19:44.684954] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.944 [2024-06-07 16:19:44.684984] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.944 [2024-06-07 16:19:44.685013] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.944 [2024-06-07 16:19:44.685052] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.944 [2024-06-07 16:19:44.685081] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.944 [2024-06-07 16:19:44.685139] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.944 [2024-06-07 16:19:44.685166] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.944 [2024-06-07 16:19:44.685197] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.944 [2024-06-07 16:19:44.685228] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.944 [2024-06-07 16:19:44.685255] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.944 [2024-06-07 16:19:44.685288] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.944 [2024-06-07 16:19:44.685319] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:11:17.944 [2024-06-07 16:19:44.685367] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.944 [2024-06-07 16:19:44.685393] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.944 [2024-06-07 16:19:44.685428] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.944 [2024-06-07 16:19:44.685458] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.944 [2024-06-07 16:19:44.685497] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.944 [2024-06-07 16:19:44.685526] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.944 [2024-06-07 16:19:44.685570] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.944 [2024-06-07 16:19:44.685599] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.944 [2024-06-07 16:19:44.685627] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.944 [2024-06-07 16:19:44.685657] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.944 [2024-06-07 16:19:44.685687] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.944 [2024-06-07 16:19:44.685715] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.944 [2024-06-07 16:19:44.685743] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.944 [2024-06-07 16:19:44.685771] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.944 [2024-06-07 16:19:44.685797] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.945 [2024-06-07 16:19:44.685826] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.945 [2024-06-07 16:19:44.685853] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.945 [2024-06-07 16:19:44.685884] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.945 [2024-06-07 16:19:44.685915] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.945 [2024-06-07 16:19:44.686067] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.945 [2024-06-07 16:19:44.686106] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.945 [2024-06-07 16:19:44.686144] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.945 [2024-06-07 16:19:44.686181] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.945 [2024-06-07 16:19:44.686214] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.945 [2024-06-07 16:19:44.686244] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.945 [2024-06-07 16:19:44.686270] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.945 [2024-06-07 16:19:44.686296] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.945 [2024-06-07 16:19:44.686323] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.945 [2024-06-07 16:19:44.686349] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:11:17.945 [2024-06-07 16:19:44.686381] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.945 [2024-06-07 16:19:44.686415] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.945 [2024-06-07 16:19:44.686447] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.945 [2024-06-07 16:19:44.686475] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.945 [2024-06-07 16:19:44.686503] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.945 [2024-06-07 16:19:44.686533] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.945 [2024-06-07 16:19:44.687041] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.945 [2024-06-07 16:19:44.687073] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.945 [2024-06-07 16:19:44.687096] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.945 [2024-06-07 16:19:44.687120] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.945 [2024-06-07 16:19:44.687144] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.945 [2024-06-07 16:19:44.687167] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.945 [2024-06-07 16:19:44.687190] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.945 [2024-06-07 16:19:44.687213] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.945 [2024-06-07 
16:19:44.687236] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.945 [2024-06-07 16:19:44.687259] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.945 [2024-06-07 16:19:44.687282] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.945 [2024-06-07 16:19:44.687305] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.945 [2024-06-07 16:19:44.687328] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.945 [2024-06-07 16:19:44.687351] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.945 [2024-06-07 16:19:44.687380] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.945 [2024-06-07 16:19:44.687406] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.945 [2024-06-07 16:19:44.687429] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.945 [2024-06-07 16:19:44.687452] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.945 [2024-06-07 16:19:44.687475] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.945 [2024-06-07 16:19:44.687499] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.945 [2024-06-07 16:19:44.687522] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.945 [2024-06-07 16:19:44.687545] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.945 [2024-06-07 16:19:44.687568] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.945 [2024-06-07 16:19:44.687592] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.945 [2024-06-07 16:19:44.687616] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.945 [2024-06-07 16:19:44.687643] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.945 [2024-06-07 16:19:44.687672] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.945 [2024-06-07 16:19:44.687701] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.945 [2024-06-07 16:19:44.687735] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.945 [2024-06-07 16:19:44.687765] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.945 [2024-06-07 16:19:44.687799] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.945 [2024-06-07 16:19:44.687828] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.945 [2024-06-07 16:19:44.687857] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.945 [2024-06-07 16:19:44.687888] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.945 [2024-06-07 16:19:44.687917] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.945 [2024-06-07 16:19:44.687944] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.945 [2024-06-07 16:19:44.687971] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.945 
[2024-06-07 16:19:44.687994] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.945 [2024-06-07 16:19:44.688017] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.945 [2024-06-07 16:19:44.688041] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.945 [2024-06-07 16:19:44.688064] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.945 [2024-06-07 16:19:44.688086] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.945 [2024-06-07 16:19:44.688110] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.945 [2024-06-07 16:19:44.688133] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.945 [2024-06-07 16:19:44.688156] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.945 [2024-06-07 16:19:44.688179] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.945 [2024-06-07 16:19:44.688203] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.945 [2024-06-07 16:19:44.688226] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.945 [2024-06-07 16:19:44.688250] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.945 [2024-06-07 16:19:44.688273] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.945 [2024-06-07 16:19:44.688297] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.945 [2024-06-07 16:19:44.688319] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.945 [2024-06-07 16:19:44.688342] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.945 [2024-06-07 16:19:44.688365] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.945 [2024-06-07 16:19:44.688389] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.945 [2024-06-07 16:19:44.688416] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.945 [2024-06-07 16:19:44.688439] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.945 [2024-06-07 16:19:44.688463] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.945 [2024-06-07 16:19:44.688486] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.945 [2024-06-07 16:19:44.688513] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.945 [2024-06-07 16:19:44.688543] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.945 [2024-06-07 16:19:44.688574] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.945 [2024-06-07 16:19:44.688605] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.945 [2024-06-07 16:19:44.688635] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.945 [2024-06-07 16:19:44.688833] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.945 [2024-06-07 16:19:44.688864] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:11:17.945 [2024-06-07 16:19:44.688893] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.945 [2024-06-07 16:19:44.688921] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.945 [2024-06-07 16:19:44.688947] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.945 [2024-06-07 16:19:44.688975] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.945 [2024-06-07 16:19:44.689000] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.945 [2024-06-07 16:19:44.689028] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.945 [2024-06-07 16:19:44.689056] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.945 [2024-06-07 16:19:44.689084] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.945 [2024-06-07 16:19:44.689114] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.945 [2024-06-07 16:19:44.689143] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.945 [2024-06-07 16:19:44.689174] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.945 [2024-06-07 16:19:44.689200] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.945 [2024-06-07 16:19:44.689226] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.945 [2024-06-07 16:19:44.689254] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.945 [2024-06-07 16:19:44.689286] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.945 [2024-06-07 16:19:44.689313] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.945 [2024-06-07 16:19:44.689343] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.945 [2024-06-07 16:19:44.689371] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.945 [2024-06-07 16:19:44.689398] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.945 [2024-06-07 16:19:44.689430] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.945 [2024-06-07 16:19:44.689463] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.945 [2024-06-07 16:19:44.689490] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.945 [2024-06-07 16:19:44.689523] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.945 [2024-06-07 16:19:44.689552] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.945 [2024-06-07 16:19:44.689579] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.945 [2024-06-07 16:19:44.689607] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.945 [2024-06-07 16:19:44.689634] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.945 [2024-06-07 16:19:44.689659] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.945 [2024-06-07 16:19:44.689688] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:11:17.945 [2024-06-07 16:19:44.689717] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.947 [2024-06-07 16:19:44.700146] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:11:17.947 [2024-06-07 16:19:44.700178] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.947 [2024-06-07 16:19:44.700204] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.947 [2024-06-07 16:19:44.700235] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.947 [2024-06-07 16:19:44.700267] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.947 [2024-06-07 16:19:44.700298] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.947 [2024-06-07 16:19:44.700329] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.947 [2024-06-07 16:19:44.700358] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.947 [2024-06-07 16:19:44.700387] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.947 [2024-06-07 16:19:44.700413] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.947 [2024-06-07 16:19:44.700437] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.947 [2024-06-07 16:19:44.700461] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.947 [2024-06-07 16:19:44.700484] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.947 [2024-06-07 16:19:44.700507] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.947 [2024-06-07 16:19:44.700531] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.947 [2024-06-07 
16:19:44.700554] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.947 [2024-06-07 16:19:44.700578] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.947 [2024-06-07 16:19:44.700600] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.947 [2024-06-07 16:19:44.700624] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.947 [2024-06-07 16:19:44.700647] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.947 [2024-06-07 16:19:44.700670] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.947 [2024-06-07 16:19:44.700693] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.947 [2024-06-07 16:19:44.700716] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.947 [2024-06-07 16:19:44.700739] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.947 [2024-06-07 16:19:44.700763] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.947 [2024-06-07 16:19:44.700787] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.947 [2024-06-07 16:19:44.700811] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.947 [2024-06-07 16:19:44.700834] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.947 [2024-06-07 16:19:44.700857] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.947 [2024-06-07 16:19:44.700880] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.947 [2024-06-07 16:19:44.700903] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.947 [2024-06-07 16:19:44.700927] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.947 [2024-06-07 16:19:44.700951] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.947 [2024-06-07 16:19:44.700974] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.947 [2024-06-07 16:19:44.700997] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.947 [2024-06-07 16:19:44.701021] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.947 [2024-06-07 16:19:44.701043] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.947 [2024-06-07 16:19:44.701067] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.947 [2024-06-07 16:19:44.701090] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.947 [2024-06-07 16:19:44.701113] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.947 [2024-06-07 16:19:44.701136] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.947 [2024-06-07 16:19:44.701160] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.947 [2024-06-07 16:19:44.701184] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.947 [2024-06-07 16:19:44.701206] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.947 
[2024-06-07 16:19:44.701229] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.947 [2024-06-07 16:19:44.701253] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.947 [2024-06-07 16:19:44.701275] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.947 [2024-06-07 16:19:44.701300] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.947 [2024-06-07 16:19:44.701324] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.947 [2024-06-07 16:19:44.701807] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.947 [2024-06-07 16:19:44.701838] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.947 [2024-06-07 16:19:44.701867] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.947 [2024-06-07 16:19:44.701895] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.947 [2024-06-07 16:19:44.701926] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.947 [2024-06-07 16:19:44.701954] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.947 [2024-06-07 16:19:44.701981] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.947 [2024-06-07 16:19:44.702009] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.947 [2024-06-07 16:19:44.702040] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.947 [2024-06-07 16:19:44.702069] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.947 [2024-06-07 16:19:44.702097] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.947 [2024-06-07 16:19:44.702125] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.947 [2024-06-07 16:19:44.702156] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.947 [2024-06-07 16:19:44.702187] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.947 [2024-06-07 16:19:44.702218] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.947 [2024-06-07 16:19:44.702265] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.947 [2024-06-07 16:19:44.702292] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.947 [2024-06-07 16:19:44.702328] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.947 [2024-06-07 16:19:44.702360] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.947 [2024-06-07 16:19:44.702392] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.947 [2024-06-07 16:19:44.702425] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.947 [2024-06-07 16:19:44.702458] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.947 [2024-06-07 16:19:44.702487] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.947 [2024-06-07 16:19:44.702538] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:11:17.947 [2024-06-07 16:19:44.702566] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.947 [2024-06-07 16:19:44.702591] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.947 [2024-06-07 16:19:44.702619] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.947 [2024-06-07 16:19:44.702650] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.947 [2024-06-07 16:19:44.702680] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.947 [2024-06-07 16:19:44.702712] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.947 [2024-06-07 16:19:44.702739] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.947 [2024-06-07 16:19:44.702767] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.947 [2024-06-07 16:19:44.702797] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.947 [2024-06-07 16:19:44.702828] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.947 [2024-06-07 16:19:44.702855] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.947 [2024-06-07 16:19:44.702881] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.947 [2024-06-07 16:19:44.702908] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.947 [2024-06-07 16:19:44.702938] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.947 [2024-06-07 16:19:44.702962] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.947 [2024-06-07 16:19:44.702990] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.948 [2024-06-07 16:19:44.703018] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.948 [2024-06-07 16:19:44.703047] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.948 [2024-06-07 16:19:44.703073] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.948 [2024-06-07 16:19:44.703096] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.948 [2024-06-07 16:19:44.703127] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.948 [2024-06-07 16:19:44.703153] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.948 [2024-06-07 16:19:44.703188] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.948 [2024-06-07 16:19:44.703501] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.948 [2024-06-07 16:19:44.703529] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.948 [2024-06-07 16:19:44.703565] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.948 [2024-06-07 16:19:44.703593] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.948 [2024-06-07 16:19:44.703622] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.948 [2024-06-07 16:19:44.703649] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:11:17.948 [2024-06-07 16:19:44.703678] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.948 [2024-06-07 16:19:44.703705] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.948 [2024-06-07 16:19:44.703735] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.948 [2024-06-07 16:19:44.703765] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.948 [2024-06-07 16:19:44.703796] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.948 [2024-06-07 16:19:44.703824] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.948 [2024-06-07 16:19:44.703854] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.948 [2024-06-07 16:19:44.703883] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.948 [2024-06-07 16:19:44.703914] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.948 [2024-06-07 16:19:44.703943] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.948 [2024-06-07 16:19:44.703972] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.948 [2024-06-07 16:19:44.704001] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.948 [2024-06-07 16:19:44.704044] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.948 [2024-06-07 16:19:44.704072] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.948 [2024-06-07 
16:19:44.704098] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.948 [2024-06-07 16:19:44.704128] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.948 [2024-06-07 16:19:44.704175] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.948 [2024-06-07 16:19:44.704204] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.948 [2024-06-07 16:19:44.704235] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.948 [2024-06-07 16:19:44.704265] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.948 [2024-06-07 16:19:44.704297] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.948 [2024-06-07 16:19:44.704330] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.948 [2024-06-07 16:19:44.704359] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.948 [2024-06-07 16:19:44.704389] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.948 [2024-06-07 16:19:44.704423] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.948 [2024-06-07 16:19:44.704450] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.948 [2024-06-07 16:19:44.704485] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.948 [2024-06-07 16:19:44.704512] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.948 [2024-06-07 16:19:44.704541] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.948 [2024-06-07 16:19:44.704570] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.948 [2024-06-07 16:19:44.704599] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.948 [2024-06-07 16:19:44.704627] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.948 [2024-06-07 16:19:44.704661] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.948 [2024-06-07 16:19:44.704689] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.948 [2024-06-07 16:19:44.704720] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.948 [2024-06-07 16:19:44.704748] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.948 [2024-06-07 16:19:44.704777] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.948 [2024-06-07 16:19:44.704804] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.948 [2024-06-07 16:19:44.704832] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.948 [2024-06-07 16:19:44.704859] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.948 [2024-06-07 16:19:44.704888] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.948 [2024-06-07 16:19:44.704925] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.948 [2024-06-07 16:19:44.704954] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.948 
[2024-06-07 16:19:44.704989] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.948 [2024-06-07 16:19:44.705021] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.948 [2024-06-07 16:19:44.705059] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.948 [2024-06-07 16:19:44.705097] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.948 [2024-06-07 16:19:44.705127] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.948 [2024-06-07 16:19:44.705151] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.948 [2024-06-07 16:19:44.705179] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.948 [2024-06-07 16:19:44.705208] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.948 [2024-06-07 16:19:44.705239] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.948 [2024-06-07 16:19:44.705271] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.948 [2024-06-07 16:19:44.705304] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.948 [2024-06-07 16:19:44.705334] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.948 [2024-06-07 16:19:44.705362] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.948 [2024-06-07 16:19:44.705389] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.948 [2024-06-07 16:19:44.705422] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.948 [2024-06-07 16:19:44.705540] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.948 [2024-06-07 16:19:44.705570] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.948 [2024-06-07 16:19:44.705599] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.948 [2024-06-07 16:19:44.705628] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.948 [2024-06-07 16:19:44.705656] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.948 [2024-06-07 16:19:44.705684] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.948 [2024-06-07 16:19:44.705714] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.948 [2024-06-07 16:19:44.705740] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.948 [2024-06-07 16:19:44.705769] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.948 [2024-06-07 16:19:44.705792] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.948 [2024-06-07 16:19:44.705821] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.948 [2024-06-07 16:19:44.705852] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.948 [2024-06-07 16:19:44.705881] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.948 [2024-06-07 16:19:44.705908] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:11:17.948 [2024-06-07 16:19:44.705937] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.948 [2024-06-07 16:19:44.705961] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.948 [2024-06-07 16:19:44.706429] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.948 [2024-06-07 16:19:44.706459] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.948 [2024-06-07 16:19:44.706492] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.948 [2024-06-07 16:19:44.706521] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.948 [2024-06-07 16:19:44.706545] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.948 [2024-06-07 16:19:44.706568] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.948 [2024-06-07 16:19:44.706591] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.948 [2024-06-07 16:19:44.706615] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.948 [2024-06-07 16:19:44.706638] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.948 [2024-06-07 16:19:44.706661] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.948 [2024-06-07 16:19:44.706685] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.948 [2024-06-07 16:19:44.706709] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.948 [2024-06-07 16:19:44.706734] 
00:11:17.948 [2024-06-07 16:19:44.706758] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
> SGL length 1 00:11:17.950 [2024-06-07 16:19:44.716462] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.950 [2024-06-07 16:19:44.716800] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.950 [2024-06-07 16:19:44.716830] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.950 [2024-06-07 16:19:44.716858] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.950 [2024-06-07 16:19:44.716887] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.950 [2024-06-07 16:19:44.716921] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.950 [2024-06-07 16:19:44.716951] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.950 [2024-06-07 16:19:44.716983] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.950 [2024-06-07 16:19:44.717013] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.950 [2024-06-07 16:19:44.717044] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.950 [2024-06-07 16:19:44.717074] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.950 [2024-06-07 16:19:44.717102] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.950 [2024-06-07 16:19:44.717130] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.950 [2024-06-07 16:19:44.717160] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.950 [2024-06-07 16:19:44.717188] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.950 [2024-06-07 16:19:44.717238] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.950 [2024-06-07 16:19:44.717268] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.950 [2024-06-07 16:19:44.717315] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.950 [2024-06-07 16:19:44.717344] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.950 [2024-06-07 16:19:44.717380] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.950 [2024-06-07 16:19:44.717410] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.950 [2024-06-07 16:19:44.717439] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.950 [2024-06-07 16:19:44.717466] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.950 [2024-06-07 16:19:44.717494] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.950 [2024-06-07 16:19:44.717523] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.950 [2024-06-07 16:19:44.717549] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.950 [2024-06-07 16:19:44.717579] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.950 [2024-06-07 16:19:44.717607] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.950 [2024-06-07 16:19:44.717635] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:11:17.950 [2024-06-07 16:19:44.717663] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.950 [2024-06-07 16:19:44.717691] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.950 [2024-06-07 16:19:44.717718] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.950 [2024-06-07 16:19:44.717747] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.950 [2024-06-07 16:19:44.717776] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.950 [2024-06-07 16:19:44.717804] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.950 [2024-06-07 16:19:44.717835] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.950 [2024-06-07 16:19:44.717862] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.950 [2024-06-07 16:19:44.717897] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.950 [2024-06-07 16:19:44.717926] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.950 [2024-06-07 16:19:44.717953] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.950 [2024-06-07 16:19:44.717979] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.950 [2024-06-07 16:19:44.718005] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.950 [2024-06-07 16:19:44.718032] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.950 [2024-06-07 
16:19:44.718058] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.950 [2024-06-07 16:19:44.718086] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.950 [2024-06-07 16:19:44.718114] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.950 [2024-06-07 16:19:44.718146] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.950 [2024-06-07 16:19:44.718173] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.950 [2024-06-07 16:19:44.718201] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.950 [2024-06-07 16:19:44.718224] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.950 [2024-06-07 16:19:44.718251] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.950 [2024-06-07 16:19:44.718279] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.950 [2024-06-07 16:19:44.718306] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.950 [2024-06-07 16:19:44.718334] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.950 [2024-06-07 16:19:44.718358] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.950 [2024-06-07 16:19:44.718385] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.950 [2024-06-07 16:19:44.718416] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.950 [2024-06-07 16:19:44.718446] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.950 [2024-06-07 16:19:44.718473] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.950 [2024-06-07 16:19:44.718500] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.950 [2024-06-07 16:19:44.718527] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.950 [2024-06-07 16:19:44.718558] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.950 [2024-06-07 16:19:44.718582] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.950 [2024-06-07 16:19:44.718611] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.950 [2024-06-07 16:19:44.718639] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.950 [2024-06-07 16:19:44.718836] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.950 [2024-06-07 16:19:44.718864] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.950 [2024-06-07 16:19:44.718893] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.950 [2024-06-07 16:19:44.718922] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.950 [2024-06-07 16:19:44.718951] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.950 [2024-06-07 16:19:44.718980] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.950 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:11:17.950 [2024-06-07 16:19:44.719009] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.950 [2024-06-07 16:19:44.719039] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.950 [2024-06-07 16:19:44.719065] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.950 [2024-06-07 16:19:44.719095] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.950 [2024-06-07 16:19:44.719124] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.950 [2024-06-07 16:19:44.719149] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.950 [2024-06-07 16:19:44.719182] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.950 [2024-06-07 16:19:44.719213] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.950 [2024-06-07 16:19:44.719242] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.950 [2024-06-07 16:19:44.719270] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.950 [2024-06-07 16:19:44.719660] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.950 [2024-06-07 16:19:44.719685] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.950 [2024-06-07 16:19:44.719708] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.950 [2024-06-07 16:19:44.719731] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.950 [2024-06-07 16:19:44.719754] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:11:17.950 [2024-06-07 16:19:44.719784] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.950 [2024-06-07 16:19:44.719814] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.950 [2024-06-07 16:19:44.719843] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.950 [2024-06-07 16:19:44.719873] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.950 [2024-06-07 16:19:44.719903] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.950 [2024-06-07 16:19:44.719935] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.950 [2024-06-07 16:19:44.719966] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.950 [2024-06-07 16:19:44.719995] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.950 [2024-06-07 16:19:44.720024] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.950 [2024-06-07 16:19:44.720054] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.950 [2024-06-07 16:19:44.720085] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.950 [2024-06-07 16:19:44.720113] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.950 [2024-06-07 16:19:44.720144] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.950 [2024-06-07 16:19:44.720172] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.950 [2024-06-07 16:19:44.720195] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.950 [2024-06-07 16:19:44.720218] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.950 [2024-06-07 16:19:44.720244] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.950 [2024-06-07 16:19:44.720267] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.950 [2024-06-07 16:19:44.720292] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.950 [2024-06-07 16:19:44.720316] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.950 [2024-06-07 16:19:44.720340] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.950 [2024-06-07 16:19:44.720364] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.950 [2024-06-07 16:19:44.720388] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.950 [2024-06-07 16:19:44.720415] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.950 [2024-06-07 16:19:44.720439] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.950 [2024-06-07 16:19:44.720461] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.950 [2024-06-07 16:19:44.720485] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.950 [2024-06-07 16:19:44.720509] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.950 [2024-06-07 16:19:44.720532] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:11:17.950 [2024-06-07 16:19:44.720555] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.950 [2024-06-07 16:19:44.720578] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.950 [2024-06-07 16:19:44.720601] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.950 [2024-06-07 16:19:44.720624] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.950 [2024-06-07 16:19:44.720648] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.950 [2024-06-07 16:19:44.720671] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.950 [2024-06-07 16:19:44.720695] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.950 [2024-06-07 16:19:44.720718] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.950 [2024-06-07 16:19:44.720742] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.950 [2024-06-07 16:19:44.720765] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.950 [2024-06-07 16:19:44.720788] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.950 [2024-06-07 16:19:44.720811] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.950 [2024-06-07 16:19:44.720835] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.950 [2024-06-07 16:19:44.720859] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.950 [2024-06-07 
16:19:44.720881] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.950 [2024-06-07 16:19:44.720905] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.950 [2024-06-07 16:19:44.720928] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.950 [2024-06-07 16:19:44.720951] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.950 [2024-06-07 16:19:44.720975] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.950 [2024-06-07 16:19:44.720999] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.950 [2024-06-07 16:19:44.721022] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.950 [2024-06-07 16:19:44.721046] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.950 [2024-06-07 16:19:44.721071] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.950 [2024-06-07 16:19:44.721094] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.950 [2024-06-07 16:19:44.721117] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.950 [2024-06-07 16:19:44.721140] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.950 [2024-06-07 16:19:44.721163] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.950 [2024-06-07 16:19:44.721186] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.950 [2024-06-07 16:19:44.721208] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.950 [2024-06-07 16:19:44.721232] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.950 [2024-06-07 16:19:44.721652] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.950 [2024-06-07 16:19:44.721705] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.950 [2024-06-07 16:19:44.721731] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.950 [2024-06-07 16:19:44.721760] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.950 [2024-06-07 16:19:44.721790] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.950 [2024-06-07 16:19:44.721838] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.950 [2024-06-07 16:19:44.721865] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.950 [2024-06-07 16:19:44.721912] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.950 [2024-06-07 16:19:44.721940] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.950 [2024-06-07 16:19:44.721975] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.950 [2024-06-07 16:19:44.722003] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.950 [2024-06-07 16:19:44.722036] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.950 [2024-06-07 16:19:44.722069] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.951 
[2024-06-07 16:19:44.722103] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.951 [2024-06-07 16:19:44.722133] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.951 [2024-06-07 16:19:44.722165] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.951 [2024-06-07 16:19:44.722193] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.951 [2024-06-07 16:19:44.722223] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.951 [2024-06-07 16:19:44.722254] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.951 [2024-06-07 16:19:44.722280] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.951 [2024-06-07 16:19:44.722307] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.951 [2024-06-07 16:19:44.722335] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.951 [2024-06-07 16:19:44.722386] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.951 [2024-06-07 16:19:44.722418] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.951 [2024-06-07 16:19:44.722464] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.951 [2024-06-07 16:19:44.722496] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.951 [2024-06-07 16:19:44.722522] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.951 [2024-06-07 16:19:44.722551] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.951 [2024-06-07 16:19:44.722579] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.951 [2024-06-07 16:19:44.722614] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.951 [2024-06-07 16:19:44.722644] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.951 [2024-06-07 16:19:44.722679] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.951 [2024-06-07 16:19:44.722709] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.951 [2024-06-07 16:19:44.722742] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.951 [2024-06-07 16:19:44.722775] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.951 [2024-06-07 16:19:44.722802] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.951 [2024-06-07 16:19:44.722830] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.951 [2024-06-07 16:19:44.722861] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.951 [2024-06-07 16:19:44.722891] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.951 [2024-06-07 16:19:44.722915] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.951 [2024-06-07 16:19:44.722946] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.951 [2024-06-07 16:19:44.722973] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:11:17.951 [2024-06-07 16:19:44.723002] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.951 [2024-06-07 16:19:44.723032] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.951 [2024-06-07 16:19:44.723058] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.951 [2024-06-07 16:19:44.723085] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.951 [2024-06-07 16:19:44.723122] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.951 [2024-06-07 16:19:44.723438] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.951 [2024-06-07 16:19:44.723470] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.951 [2024-06-07 16:19:44.723500] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.951 [2024-06-07 16:19:44.723536] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.951 [2024-06-07 16:19:44.723566] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.951 [2024-06-07 16:19:44.723602] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.951 [2024-06-07 16:19:44.723632] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.951 [2024-06-07 16:19:44.723658] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.951 [2024-06-07 16:19:44.723689] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.951 [2024-06-07 16:19:44.723720] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.951 [2024-06-07 16:19:44.723752] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
> SGL length 1 00:11:17.952 [2024-06-07 16:19:44.733751] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.952 [2024-06-07 16:19:44.733774] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.952 [2024-06-07 16:19:44.733797] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.952 [2024-06-07 16:19:44.733820] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.952 [2024-06-07 16:19:44.733843] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.952 [2024-06-07 16:19:44.733867] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.952 [2024-06-07 16:19:44.733890] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.952 [2024-06-07 16:19:44.733916] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.952 [2024-06-07 16:19:44.733939] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.952 [2024-06-07 16:19:44.733963] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.952 [2024-06-07 16:19:44.733986] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.952 [2024-06-07 16:19:44.734010] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.952 [2024-06-07 16:19:44.734034] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.952 [2024-06-07 16:19:44.734059] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.952 [2024-06-07 16:19:44.734083] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.952 [2024-06-07 16:19:44.734106] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.952 [2024-06-07 16:19:44.734129] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.952 [2024-06-07 16:19:44.734154] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.952 [2024-06-07 16:19:44.734176] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.952 [2024-06-07 16:19:44.734199] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.952 [2024-06-07 16:19:44.734222] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.952 [2024-06-07 16:19:44.734245] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.952 [2024-06-07 16:19:44.734268] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.953 [2024-06-07 16:19:44.734292] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.953 [2024-06-07 16:19:44.734316] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.953 [2024-06-07 16:19:44.734339] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.953 [2024-06-07 16:19:44.734362] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.953 [2024-06-07 16:19:44.734387] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.953 [2024-06-07 16:19:44.734415] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:11:17.953 [2024-06-07 16:19:44.734439] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.953 [2024-06-07 16:19:44.734463] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.953 [2024-06-07 16:19:44.734486] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.953 [2024-06-07 16:19:44.734509] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.953 [2024-06-07 16:19:44.734533] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.953 [2024-06-07 16:19:44.734557] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.953 [2024-06-07 16:19:44.734580] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.953 [2024-06-07 16:19:44.734607] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.953 [2024-06-07 16:19:44.734966] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.953 [2024-06-07 16:19:44.734998] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.953 [2024-06-07 16:19:44.735027] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.953 [2024-06-07 16:19:44.735060] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.953 [2024-06-07 16:19:44.735089] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.953 [2024-06-07 16:19:44.735119] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.953 [2024-06-07 
16:19:44.735151] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.953 [2024-06-07 16:19:44.735177] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.953 [2024-06-07 16:19:44.735207] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.953 [2024-06-07 16:19:44.735236] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.953 [2024-06-07 16:19:44.735263] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.953 [2024-06-07 16:19:44.735294] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.953 [2024-06-07 16:19:44.735323] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.953 [2024-06-07 16:19:44.735357] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.953 [2024-06-07 16:19:44.735387] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.953 [2024-06-07 16:19:44.735427] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.953 [2024-06-07 16:19:44.735458] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.953 [2024-06-07 16:19:44.735493] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.953 [2024-06-07 16:19:44.735524] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.953 [2024-06-07 16:19:44.735552] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.953 [2024-06-07 16:19:44.735582] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.953 [2024-06-07 16:19:44.735612] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.953 [2024-06-07 16:19:44.735643] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.953 [2024-06-07 16:19:44.735672] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.953 [2024-06-07 16:19:44.735705] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.953 [2024-06-07 16:19:44.735730] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.953 [2024-06-07 16:19:44.735759] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.953 [2024-06-07 16:19:44.735788] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.953 [2024-06-07 16:19:44.735818] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.953 [2024-06-07 16:19:44.735845] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.953 [2024-06-07 16:19:44.735874] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.953 [2024-06-07 16:19:44.735903] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.953 [2024-06-07 16:19:44.735931] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.953 [2024-06-07 16:19:44.735966] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.953 [2024-06-07 16:19:44.735991] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.953 
[2024-06-07 16:19:44.736019] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.953 [2024-06-07 16:19:44.736051] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.953 [2024-06-07 16:19:44.736086] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.953 [2024-06-07 16:19:44.736113] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.953 [2024-06-07 16:19:44.736138] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.953 [2024-06-07 16:19:44.736162] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.953 [2024-06-07 16:19:44.736191] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.953 [2024-06-07 16:19:44.736221] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.953 [2024-06-07 16:19:44.736249] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.953 [2024-06-07 16:19:44.736277] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.953 [2024-06-07 16:19:44.736305] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.953 [2024-06-07 16:19:44.736332] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.953 [2024-06-07 16:19:44.736661] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.953 [2024-06-07 16:19:44.736692] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.953 [2024-06-07 16:19:44.736721] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.953 [2024-06-07 16:19:44.736752] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.953 [2024-06-07 16:19:44.736783] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.953 [2024-06-07 16:19:44.736814] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.953 [2024-06-07 16:19:44.736844] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.953 [2024-06-07 16:19:44.736873] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.953 [2024-06-07 16:19:44.736903] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.953 [2024-06-07 16:19:44.736934] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.953 [2024-06-07 16:19:44.736981] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.953 [2024-06-07 16:19:44.737010] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.953 [2024-06-07 16:19:44.737044] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.953 [2024-06-07 16:19:44.737074] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.953 [2024-06-07 16:19:44.737104] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.953 [2024-06-07 16:19:44.737132] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.953 [2024-06-07 16:19:44.737162] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:11:17.953 [2024-06-07 16:19:44.737191] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.953 [2024-06-07 16:19:44.737221] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.953 [2024-06-07 16:19:44.737250] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.953 [2024-06-07 16:19:44.737277] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.953 [2024-06-07 16:19:44.737309] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.953 [2024-06-07 16:19:44.737342] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.953 [2024-06-07 16:19:44.737373] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.953 [2024-06-07 16:19:44.737416] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.953 [2024-06-07 16:19:44.737445] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.953 [2024-06-07 16:19:44.737488] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.953 [2024-06-07 16:19:44.737520] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.953 [2024-06-07 16:19:44.737552] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.953 [2024-06-07 16:19:44.737581] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.953 [2024-06-07 16:19:44.737613] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.953 [2024-06-07 16:19:44.737639] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.953 [2024-06-07 16:19:44.737668] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.953 [2024-06-07 16:19:44.737703] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.953 [2024-06-07 16:19:44.737730] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.953 [2024-06-07 16:19:44.737759] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.953 [2024-06-07 16:19:44.737788] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.953 [2024-06-07 16:19:44.737815] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.953 [2024-06-07 16:19:44.737841] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.953 [2024-06-07 16:19:44.737869] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.953 [2024-06-07 16:19:44.737904] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.953 [2024-06-07 16:19:44.737940] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.953 [2024-06-07 16:19:44.737973] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.953 [2024-06-07 16:19:44.738013] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.953 [2024-06-07 16:19:44.738047] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.953 [2024-06-07 16:19:44.738076] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:11:17.953 [2024-06-07 16:19:44.738104] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.953 [2024-06-07 16:19:44.738133] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.953 [2024-06-07 16:19:44.738163] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.953 [2024-06-07 16:19:44.738187] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.953 [2024-06-07 16:19:44.738218] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.953 [2024-06-07 16:19:44.738245] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.953 [2024-06-07 16:19:44.738274] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.953 [2024-06-07 16:19:44.738303] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.953 [2024-06-07 16:19:44.738332] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.953 [2024-06-07 16:19:44.738364] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.953 [2024-06-07 16:19:44.738399] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.953 [2024-06-07 16:19:44.738430] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.953 [2024-06-07 16:19:44.738461] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.953 [2024-06-07 16:19:44.738489] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.953 [2024-06-07 
16:19:44.738520] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.953 [2024-06-07 16:19:44.738545] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.953 [2024-06-07 16:19:44.738568] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.953 [2024-06-07 16:19:44.738604] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.953 [2024-06-07 16:19:44.738758] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.953 [2024-06-07 16:19:44.738786] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.953 [2024-06-07 16:19:44.738817] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.953 [2024-06-07 16:19:44.738848] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.953 [2024-06-07 16:19:44.738877] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.953 [2024-06-07 16:19:44.738905] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.953 [2024-06-07 16:19:44.738932] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.953 [2024-06-07 16:19:44.738955] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.953 [2024-06-07 16:19:44.738983] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.953 [2024-06-07 16:19:44.739013] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.953 [2024-06-07 16:19:44.739037] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.953 [2024-06-07 16:19:44.739066] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.953 [2024-06-07 16:19:44.739090] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.953 [2024-06-07 16:19:44.739116] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.953 [2024-06-07 16:19:44.739144] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.953 [2024-06-07 16:19:44.739176] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.953 [2024-06-07 16:19:44.739533] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.953 [2024-06-07 16:19:44.739562] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.953 [2024-06-07 16:19:44.739591] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.953 [2024-06-07 16:19:44.739620] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.953 [2024-06-07 16:19:44.739650] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.953 [2024-06-07 16:19:44.739680] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.953 [2024-06-07 16:19:44.739712] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.953 [2024-06-07 16:19:44.739746] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.953 [2024-06-07 16:19:44.739776] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.953 
[2024-06-07 16:19:44.739800] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.953 [2024-06-07 16:19:44.739823] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.953 [2024-06-07 16:19:44.739850] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.953 [2024-06-07 16:19:44.739880] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.953 [2024-06-07 16:19:44.739910] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.953 [2024-06-07 16:19:44.739938] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.953 [2024-06-07 16:19:44.739967] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.953 [2024-06-07 16:19:44.739996] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.953 [2024-06-07 16:19:44.740024] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.953 [2024-06-07 16:19:44.740048] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.953 [2024-06-07 16:19:44.740072] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.953 [2024-06-07 16:19:44.740096] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.953 [2024-06-07 16:19:44.740119] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.953 [2024-06-07 16:19:44.740144] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.953 [2024-06-07 16:19:44.740167] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.953 [2024-06-07 16:19:44.740191] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.953 [2024-06-07 16:19:44.740215] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.953 [2024-06-07 16:19:44.740239] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.953 [2024-06-07 16:19:44.740263] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.953 [2024-06-07 16:19:44.740286] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.953 [2024-06-07 16:19:44.740310] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.953 [2024-06-07 16:19:44.740334] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.953 [2024-06-07 16:19:44.740357] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.953 [2024-06-07 16:19:44.740380] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.953 [2024-06-07 16:19:44.740408] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.953 [2024-06-07 16:19:44.740432] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.953 [2024-06-07 16:19:44.740455] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.953 [2024-06-07 16:19:44.740479] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.953 [2024-06-07 16:19:44.740502] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:11:17.953 [2024-06-07 16:19:44.740526] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 [last message repeated for timestamps 16:19:44.740550 through 16:19:44.751279] 00:11:17.955 [2024-06-07 16:19:44.751307] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512
> SGL length 1 00:11:17.955 [2024-06-07 16:19:44.751363] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.955 [2024-06-07 16:19:44.751390] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.955 [2024-06-07 16:19:44.751420] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.955 [2024-06-07 16:19:44.751449] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.955 [2024-06-07 16:19:44.751476] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.955 [2024-06-07 16:19:44.751507] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.955 [2024-06-07 16:19:44.751537] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.955 [2024-06-07 16:19:44.751566] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.955 [2024-06-07 16:19:44.751595] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.955 [2024-06-07 16:19:44.751623] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.955 [2024-06-07 16:19:44.751651] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.955 [2024-06-07 16:19:44.751682] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.955 [2024-06-07 16:19:44.751706] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.955 [2024-06-07 16:19:44.751737] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.955 [2024-06-07 16:19:44.751767] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.955 [2024-06-07 16:19:44.751794] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.955 [2024-06-07 16:19:44.751821] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.955 [2024-06-07 16:19:44.751851] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.955 [2024-06-07 16:19:44.751878] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.955 [2024-06-07 16:19:44.751906] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.955 [2024-06-07 16:19:44.751933] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.955 [2024-06-07 16:19:44.751960] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.955 [2024-06-07 16:19:44.751988] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.955 [2024-06-07 16:19:44.752014] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.955 [2024-06-07 16:19:44.752039] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.955 [2024-06-07 16:19:44.752067] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.955 [2024-06-07 16:19:44.752097] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.955 [2024-06-07 16:19:44.752130] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.955 [2024-06-07 16:19:44.752158] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:11:17.955 [2024-06-07 16:19:44.752188] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.955 [2024-06-07 16:19:44.752217] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.955 [2024-06-07 16:19:44.752244] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.955 [2024-06-07 16:19:44.752272] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.955 [2024-06-07 16:19:44.752296] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.955 [2024-06-07 16:19:44.752321] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.955 [2024-06-07 16:19:44.752351] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.955 [2024-06-07 16:19:44.752384] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.955 [2024-06-07 16:19:44.752417] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.955 [2024-06-07 16:19:44.752447] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.955 [2024-06-07 16:19:44.752477] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.955 [2024-06-07 16:19:44.752510] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.955 [2024-06-07 16:19:44.752885] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.955 [2024-06-07 16:19:44.752912] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.955 [2024-06-07 
16:19:44.752936] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.955 [2024-06-07 16:19:44.752960] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.955 [2024-06-07 16:19:44.752983] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.955 [2024-06-07 16:19:44.753007] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.955 [2024-06-07 16:19:44.753031] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.955 [2024-06-07 16:19:44.753054] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.955 [2024-06-07 16:19:44.753078] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.955 [2024-06-07 16:19:44.753102] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.955 [2024-06-07 16:19:44.753132] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.955 [2024-06-07 16:19:44.753162] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.955 [2024-06-07 16:19:44.753198] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.955 [2024-06-07 16:19:44.753225] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.955 [2024-06-07 16:19:44.753252] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.955 [2024-06-07 16:19:44.753280] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.955 [2024-06-07 16:19:44.753327] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.955 [2024-06-07 16:19:44.753358] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.955 [2024-06-07 16:19:44.753388] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.955 [2024-06-07 16:19:44.753423] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.955 [2024-06-07 16:19:44.753453] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.956 [2024-06-07 16:19:44.753484] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.956 [2024-06-07 16:19:44.753515] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.956 [2024-06-07 16:19:44.753545] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.956 [2024-06-07 16:19:44.753575] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.956 [2024-06-07 16:19:44.753603] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.956 [2024-06-07 16:19:44.753629] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.956 [2024-06-07 16:19:44.753657] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.956 [2024-06-07 16:19:44.753687] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.956 [2024-06-07 16:19:44.753715] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.956 [2024-06-07 16:19:44.753743] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.956 
[2024-06-07 16:19:44.753772] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.956 [2024-06-07 16:19:44.753805] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.956 [2024-06-07 16:19:44.753842] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.956 [2024-06-07 16:19:44.753878] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.956 [2024-06-07 16:19:44.753913] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.956 [2024-06-07 16:19:44.753948] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.956 [2024-06-07 16:19:44.753979] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.956 [2024-06-07 16:19:44.754007] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.956 [2024-06-07 16:19:44.754036] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.956 [2024-06-07 16:19:44.754065] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.956 [2024-06-07 16:19:44.754097] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.956 [2024-06-07 16:19:44.754127] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.956 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:11:17.956 [2024-06-07 16:19:44.754154] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.956 [2024-06-07 16:19:44.754189] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL 
length 1 00:11:17.956 [2024-06-07 16:19:44.754217] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.956 [2024-06-07 16:19:44.754246] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.956 [2024-06-07 16:19:44.754277] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.956 [2024-06-07 16:19:44.754304] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.956 [2024-06-07 16:19:44.754336] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.956 [2024-06-07 16:19:44.754365] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.956 [2024-06-07 16:19:44.754392] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.956 [2024-06-07 16:19:44.754430] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.956 [2024-06-07 16:19:44.754458] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.956 [2024-06-07 16:19:44.754488] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.956 [2024-06-07 16:19:44.754518] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.956 [2024-06-07 16:19:44.754548] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.956 [2024-06-07 16:19:44.754585] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.956 [2024-06-07 16:19:44.754613] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.956 [2024-06-07 16:19:44.754644] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.956 [2024-06-07 16:19:44.754671] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.956 [2024-06-07 16:19:44.754698] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.956 [2024-06-07 16:19:44.754728] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.956 [2024-06-07 16:19:44.755086] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.956 [2024-06-07 16:19:44.755117] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.956 [2024-06-07 16:19:44.755146] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.956 [2024-06-07 16:19:44.755174] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.956 [2024-06-07 16:19:44.755202] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.956 [2024-06-07 16:19:44.755231] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.956 [2024-06-07 16:19:44.755261] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.956 [2024-06-07 16:19:44.755289] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.956 [2024-06-07 16:19:44.755316] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.956 [2024-06-07 16:19:44.755341] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.956 [2024-06-07 16:19:44.755370] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:11:17.956 [2024-06-07 16:19:44.755400] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.956 [2024-06-07 16:19:44.755435] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.956 [2024-06-07 16:19:44.755464] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.956 [2024-06-07 16:19:44.755516] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.956 [2024-06-07 16:19:44.755544] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.956 [2024-06-07 16:19:44.755570] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.956 [2024-06-07 16:19:44.755636] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.956 [2024-06-07 16:19:44.755664] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.956 [2024-06-07 16:19:44.755690] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.956 [2024-06-07 16:19:44.755723] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.956 [2024-06-07 16:19:44.755759] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.956 [2024-06-07 16:19:44.755791] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.956 [2024-06-07 16:19:44.755828] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.956 [2024-06-07 16:19:44.755862] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.956 [2024-06-07 16:19:44.755890] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.956 [2024-06-07 16:19:44.755915] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.956 [2024-06-07 16:19:44.755945] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.956 [2024-06-07 16:19:44.755971] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.956 [2024-06-07 16:19:44.755997] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.956 [2024-06-07 16:19:44.756025] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.956 [2024-06-07 16:19:44.756053] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.956 [2024-06-07 16:19:44.756082] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.956 [2024-06-07 16:19:44.756109] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.956 [2024-06-07 16:19:44.756135] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.956 [2024-06-07 16:19:44.756162] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.956 [2024-06-07 16:19:44.756189] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.956 [2024-06-07 16:19:44.756228] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.956 [2024-06-07 16:19:44.756258] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.956 [2024-06-07 16:19:44.756288] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:11:17.956 [2024-06-07 16:19:44.756313] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.956 [2024-06-07 16:19:44.756343] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.956 [2024-06-07 16:19:44.756376] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.956 [2024-06-07 16:19:44.756405] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.956 [2024-06-07 16:19:44.756435] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.956 [2024-06-07 16:19:44.756461] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.956 [2024-06-07 16:19:44.756489] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.956 [2024-06-07 16:19:44.756517] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.956 [2024-06-07 16:19:44.756550] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.956 [2024-06-07 16:19:44.756578] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.956 [2024-06-07 16:19:44.756608] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.956 [2024-06-07 16:19:44.756634] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.956 [2024-06-07 16:19:44.756673] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.956 [2024-06-07 16:19:44.756707] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.956 [2024-06-07 
16:19:44.756736] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.956 [2024-06-07 16:19:44.756766] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.956 [2024-06-07 16:19:44.756794] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.956 [2024-06-07 16:19:44.756825] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.956 [2024-06-07 16:19:44.756854] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.956 [2024-06-07 16:19:44.756884] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.956 [2024-06-07 16:19:44.756910] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.956 [2024-06-07 16:19:44.756948] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.956 [2024-06-07 16:19:44.756979] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.956 [2024-06-07 16:19:44.757009] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.956 [2024-06-07 16:19:44.757395] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.956 [2024-06-07 16:19:44.757427] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.956 [2024-06-07 16:19:44.757458] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.956 [2024-06-07 16:19:44.757488] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.956 [2024-06-07 16:19:44.757516] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.956 [2024-06-07 16:19:44.757543] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.956 [2024-06-07 16:19:44.757573] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.956 [2024-06-07 16:19:44.757601] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.956 [2024-06-07 16:19:44.757629] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.956 [2024-06-07 16:19:44.757658] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.956 [2024-06-07 16:19:44.757682] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.956 [2024-06-07 16:19:44.757710] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.956 [2024-06-07 16:19:44.757737] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.956 [2024-06-07 16:19:44.757768] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.956 [2024-06-07 16:19:44.757800] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.956 [2024-06-07 16:19:44.757829] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.956 [2024-06-07 16:19:44.757856] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.956 [2024-06-07 16:19:44.757889] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.956 [2024-06-07 16:19:44.757917] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.956 
[2024-06-07 16:19:44.757942] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.957 [2024-06-07 16:19:44.757972] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.957 [2024-06-07 16:19:44.758004] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.957 [2024-06-07 16:19:44.758030] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.957 [2024-06-07 16:19:44.758054] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.957 [2024-06-07 16:19:44.758078] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.957 [2024-06-07 16:19:44.758100] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.957 [2024-06-07 16:19:44.758125] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.957 [2024-06-07 16:19:44.758158] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.957 [2024-06-07 16:19:44.758188] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.957 [2024-06-07 16:19:44.758216] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.957 [2024-06-07 16:19:44.758246] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.957 [2024-06-07 16:19:44.758278] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.957 [2024-06-07 16:19:44.758308] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.957 [2024-06-07 16:19:44.758337] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:17.957 [2024-06-07 16:19:44.758362] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.271
[2024-06-07 16:19:44.768546] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.271 [2024-06-07 16:19:44.768576] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.271 [2024-06-07 16:19:44.768605] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.271 [2024-06-07 16:19:44.768636] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.271 [2024-06-07 16:19:44.768667] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.271 [2024-06-07 16:19:44.768698] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.271 [2024-06-07 16:19:44.768731] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.271 [2024-06-07 16:19:44.768761] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.271 [2024-06-07 16:19:44.768789] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.271 [2024-06-07 16:19:44.768815] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.271 [2024-06-07 16:19:44.768848] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.271 [2024-06-07 16:19:44.768879] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.271 [2024-06-07 16:19:44.769090] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.271 [2024-06-07 16:19:44.769121] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.271 [2024-06-07 16:19:44.769153] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.271 [2024-06-07 16:19:44.769182] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.271 [2024-06-07 16:19:44.769213] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.271 [2024-06-07 16:19:44.769243] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.271 [2024-06-07 16:19:44.769271] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.271 [2024-06-07 16:19:44.769300] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.271 [2024-06-07 16:19:44.769328] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.271 [2024-06-07 16:19:44.769359] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.271 [2024-06-07 16:19:44.769387] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.271 [2024-06-07 16:19:44.769418] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.271 [2024-06-07 16:19:44.769450] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.271 [2024-06-07 16:19:44.769481] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.271 [2024-06-07 16:19:44.769510] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.271 [2024-06-07 16:19:44.769536] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.271 [2024-06-07 16:19:44.769566] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:11:18.271 [2024-06-07 16:19:44.769592] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.271 [2024-06-07 16:19:44.769622] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.271 [2024-06-07 16:19:44.769651] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.271 [2024-06-07 16:19:44.769678] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.271 [2024-06-07 16:19:44.769708] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.271 [2024-06-07 16:19:44.769737] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.271 [2024-06-07 16:19:44.769763] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.271 [2024-06-07 16:19:44.769790] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.271 [2024-06-07 16:19:44.769817] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.271 [2024-06-07 16:19:44.769843] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.271 [2024-06-07 16:19:44.769872] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.271 [2024-06-07 16:19:44.769899] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.271 [2024-06-07 16:19:44.769929] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.271 [2024-06-07 16:19:44.769958] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.271 [2024-06-07 16:19:44.769985] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.271 [2024-06-07 16:19:44.770012] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.271 [2024-06-07 16:19:44.770036] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.271 [2024-06-07 16:19:44.770065] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.271 [2024-06-07 16:19:44.770098] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.271 [2024-06-07 16:19:44.770136] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.271 [2024-06-07 16:19:44.770165] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.271 [2024-06-07 16:19:44.770199] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.271 [2024-06-07 16:19:44.770228] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.271 [2024-06-07 16:19:44.770259] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.271 [2024-06-07 16:19:44.770288] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.271 [2024-06-07 16:19:44.770319] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.271 [2024-06-07 16:19:44.770348] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.271 [2024-06-07 16:19:44.770378] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.271 [2024-06-07 16:19:44.770417] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:11:18.271 [2024-06-07 16:19:44.770448] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.272 [2024-06-07 16:19:44.770477] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.272 [2024-06-07 16:19:44.770510] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.272 [2024-06-07 16:19:44.770542] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.272 [2024-06-07 16:19:44.770568] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.272 [2024-06-07 16:19:44.770597] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.272 [2024-06-07 16:19:44.770628] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.272 [2024-06-07 16:19:44.770661] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.272 [2024-06-07 16:19:44.770690] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.272 [2024-06-07 16:19:44.770716] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.272 [2024-06-07 16:19:44.770741] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.272 [2024-06-07 16:19:44.770773] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.272 [2024-06-07 16:19:44.770799] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.272 [2024-06-07 16:19:44.770822] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.272 [2024-06-07 
16:19:44.770846] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.272 [2024-06-07 16:19:44.770870] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.272 [2024-06-07 16:19:44.770893] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.272 [2024-06-07 16:19:44.770917] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.272 [2024-06-07 16:19:44.771223] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.272 [2024-06-07 16:19:44.771257] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.272 [2024-06-07 16:19:44.771288] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.272 [2024-06-07 16:19:44.771317] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.272 [2024-06-07 16:19:44.771345] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.272 [2024-06-07 16:19:44.771374] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.272 [2024-06-07 16:19:44.771416] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.272 [2024-06-07 16:19:44.771442] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.272 [2024-06-07 16:19:44.771472] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.272 [2024-06-07 16:19:44.771502] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.272 [2024-06-07 16:19:44.771529] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.272 [2024-06-07 16:19:44.771561] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.272 [2024-06-07 16:19:44.771590] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.272 [2024-06-07 16:19:44.771622] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.272 [2024-06-07 16:19:44.771649] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.272 [2024-06-07 16:19:44.771686] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.272 [2024-06-07 16:19:44.771716] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.272 [2024-06-07 16:19:44.771749] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.272 [2024-06-07 16:19:44.771777] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.272 [2024-06-07 16:19:44.771807] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.272 [2024-06-07 16:19:44.771848] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.272 [2024-06-07 16:19:44.771878] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.272 [2024-06-07 16:19:44.771907] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.272 [2024-06-07 16:19:44.771936] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.272 [2024-06-07 16:19:44.771964] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.272 
[2024-06-07 16:19:44.771994] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.272 [2024-06-07 16:19:44.772021] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.272 [2024-06-07 16:19:44.772053] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.272 [2024-06-07 16:19:44.772082] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.272 [2024-06-07 16:19:44.772112] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.272 [2024-06-07 16:19:44.772142] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.272 [2024-06-07 16:19:44.772165] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.272 [2024-06-07 16:19:44.772189] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.272 [2024-06-07 16:19:44.772213] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.272 [2024-06-07 16:19:44.772237] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.272 [2024-06-07 16:19:44.772260] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.272 [2024-06-07 16:19:44.772284] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.272 [2024-06-07 16:19:44.772308] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.272 [2024-06-07 16:19:44.772332] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.272 [2024-06-07 16:19:44.772361] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.272 [2024-06-07 16:19:44.772390] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.272 [2024-06-07 16:19:44.772420] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.272 [2024-06-07 16:19:44.772445] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.272 [2024-06-07 16:19:44.772468] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.272 [2024-06-07 16:19:44.772492] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.272 [2024-06-07 16:19:44.772515] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.272 [2024-06-07 16:19:44.772540] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.272 [2024-06-07 16:19:44.772564] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.272 [2024-06-07 16:19:44.772587] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.272 [2024-06-07 16:19:44.772616] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.272 [2024-06-07 16:19:44.772652] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.272 [2024-06-07 16:19:44.772683] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.272 [2024-06-07 16:19:44.772712] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.272 [2024-06-07 16:19:44.772738] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:11:18.272 [2024-06-07 16:19:44.772765] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.272 [2024-06-07 16:19:44.772793] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.272 [2024-06-07 16:19:44.772821] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.272 [2024-06-07 16:19:44.772852] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.272 [2024-06-07 16:19:44.772880] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.272 [2024-06-07 16:19:44.772906] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.272 [2024-06-07 16:19:44.772931] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.272 [2024-06-07 16:19:44.772960] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.272 [2024-06-07 16:19:44.773005] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.272 [2024-06-07 16:19:44.773374] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.272 [2024-06-07 16:19:44.773409] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.272 [2024-06-07 16:19:44.773439] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.272 [2024-06-07 16:19:44.773469] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.272 [2024-06-07 16:19:44.773493] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.272 [2024-06-07 16:19:44.773519] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.272 [2024-06-07 16:19:44.773549] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.272 [2024-06-07 16:19:44.773576] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.272 [2024-06-07 16:19:44.773606] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.272 [2024-06-07 16:19:44.773634] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.272 [2024-06-07 16:19:44.773666] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.272 [2024-06-07 16:19:44.773699] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.272 [2024-06-07 16:19:44.773723] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.272 [2024-06-07 16:19:44.773755] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.272 [2024-06-07 16:19:44.773783] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.273 [2024-06-07 16:19:44.773810] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.273 [2024-06-07 16:19:44.773837] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.273 [2024-06-07 16:19:44.773864] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.273 [2024-06-07 16:19:44.773894] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.273 [2024-06-07 16:19:44.773921] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:11:18.273 [2024-06-07 16:19:44.773954] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.273 [2024-06-07 16:19:44.773983] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.273 [2024-06-07 16:19:44.774010] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.273 [2024-06-07 16:19:44.774040] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.273 [2024-06-07 16:19:44.774068] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.273 [2024-06-07 16:19:44.774095] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.273 [2024-06-07 16:19:44.774126] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.273 [2024-06-07 16:19:44.774154] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.273 [2024-06-07 16:19:44.774187] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.273 [2024-06-07 16:19:44.774216] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.273 [2024-06-07 16:19:44.774254] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.273 [2024-06-07 16:19:44.774282] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.273 [2024-06-07 16:19:44.774315] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.273 [2024-06-07 16:19:44.774343] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.273 [2024-06-07 
16:19:44.774373] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.273 [2024-06-07 16:19:44.774404] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.273 [2024-06-07 16:19:44.774435] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.273 [2024-06-07 16:19:44.774464] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.273 [2024-06-07 16:19:44.774495] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.273 [2024-06-07 16:19:44.774526] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.273 [2024-06-07 16:19:44.774564] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.273 [2024-06-07 16:19:44.774592] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.273 [2024-06-07 16:19:44.774617] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.273 [2024-06-07 16:19:44.774646] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.273 [2024-06-07 16:19:44.774672] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.273 [2024-06-07 16:19:44.774700] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.273 [2024-06-07 16:19:44.774726] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.273 [2024-06-07 16:19:44.774760] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.273 [2024-06-07 16:19:44.774789] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.273
[2024-06-07 16:19:44.785823] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd:
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.276 [2024-06-07 16:19:44.785851] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.276 [2024-06-07 16:19:44.785879] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.276 [2024-06-07 16:19:44.785909] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.276 [2024-06-07 16:19:44.785943] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.276 [2024-06-07 16:19:44.785970] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.276 [2024-06-07 16:19:44.786000] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.276 [2024-06-07 16:19:44.786028] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.276 [2024-06-07 16:19:44.786057] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.276 [2024-06-07 16:19:44.786089] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.276 [2024-06-07 16:19:44.786121] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.276 [2024-06-07 16:19:44.786147] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.276 [2024-06-07 16:19:44.786191] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.276 [2024-06-07 16:19:44.786221] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.276 [2024-06-07 16:19:44.786279] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.277 
[2024-06-07 16:19:44.786309] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.277 [2024-06-07 16:19:44.786342] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.277 [2024-06-07 16:19:44.786374] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.277 [2024-06-07 16:19:44.786409] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.277 [2024-06-07 16:19:44.786436] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.277 [2024-06-07 16:19:44.786469] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.277 [2024-06-07 16:19:44.786498] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.277 [2024-06-07 16:19:44.786526] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.277 [2024-06-07 16:19:44.786559] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.277 [2024-06-07 16:19:44.786590] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.277 [2024-06-07 16:19:44.786622] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.277 [2024-06-07 16:19:44.786654] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.277 [2024-06-07 16:19:44.786684] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.277 [2024-06-07 16:19:44.786715] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.277 [2024-06-07 16:19:44.786747] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.277 [2024-06-07 16:19:44.787152] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.277 [2024-06-07 16:19:44.787186] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.277 [2024-06-07 16:19:44.787225] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.277 [2024-06-07 16:19:44.787255] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.277 [2024-06-07 16:19:44.787286] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.277 [2024-06-07 16:19:44.787313] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.277 [2024-06-07 16:19:44.787340] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.277 [2024-06-07 16:19:44.787370] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.277 [2024-06-07 16:19:44.787398] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.277 [2024-06-07 16:19:44.787432] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.277 [2024-06-07 16:19:44.787466] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.277 [2024-06-07 16:19:44.787495] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.277 [2024-06-07 16:19:44.787524] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.277 [2024-06-07 16:19:44.787555] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:11:18.277 [2024-06-07 16:19:44.787586] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.277 [2024-06-07 16:19:44.787614] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.277 [2024-06-07 16:19:44.787649] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.277 [2024-06-07 16:19:44.787677] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.277 [2024-06-07 16:19:44.787706] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.277 [2024-06-07 16:19:44.787734] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.277 [2024-06-07 16:19:44.787767] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.277 [2024-06-07 16:19:44.787799] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.277 [2024-06-07 16:19:44.787830] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.277 [2024-06-07 16:19:44.787859] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.277 [2024-06-07 16:19:44.787894] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.277 [2024-06-07 16:19:44.787922] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.277 [2024-06-07 16:19:44.787953] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.277 [2024-06-07 16:19:44.787983] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.277 [2024-06-07 16:19:44.788015] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.277 [2024-06-07 16:19:44.788045] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.277 [2024-06-07 16:19:44.788073] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.277 [2024-06-07 16:19:44.788101] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.277 [2024-06-07 16:19:44.788136] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.277 [2024-06-07 16:19:44.788170] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.277 [2024-06-07 16:19:44.788201] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.277 [2024-06-07 16:19:44.788233] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.277 [2024-06-07 16:19:44.788262] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.277 [2024-06-07 16:19:44.788287] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.277 [2024-06-07 16:19:44.788313] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.277 [2024-06-07 16:19:44.788343] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.277 [2024-06-07 16:19:44.788370] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.277 [2024-06-07 16:19:44.788396] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.277 [2024-06-07 16:19:44.788424] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:11:18.277 [2024-06-07 16:19:44.788453] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.277 [2024-06-07 16:19:44.788478] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.277 [2024-06-07 16:19:44.788504] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.277 true 00:11:18.277 [2024-06-07 16:19:44.788532] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.277 [2024-06-07 16:19:44.788561] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.277 [2024-06-07 16:19:44.788592] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.277 [2024-06-07 16:19:44.788620] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.277 [2024-06-07 16:19:44.788651] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.277 [2024-06-07 16:19:44.788679] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.277 [2024-06-07 16:19:44.788708] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.277 [2024-06-07 16:19:44.788731] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.277 [2024-06-07 16:19:44.788760] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.277 [2024-06-07 16:19:44.788784] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.277 [2024-06-07 16:19:44.788813] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.277 
[2024-06-07 16:19:44.788843] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.277 [2024-06-07 16:19:44.788873] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.277 [2024-06-07 16:19:44.788902] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.277 [2024-06-07 16:19:44.788932] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.277 [2024-06-07 16:19:44.788962] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.277 [2024-06-07 16:19:44.788993] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.277 [2024-06-07 16:19:44.789022] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.277 [2024-06-07 16:19:44.789381] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.277 [2024-06-07 16:19:44.789415] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.277 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:11:18.277 [2024-06-07 16:19:44.789439] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.277 [2024-06-07 16:19:44.789466] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.277 [2024-06-07 16:19:44.789493] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.277 [2024-06-07 16:19:44.789516] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.277 [2024-06-07 16:19:44.789546] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL 
length 1 00:11:18.277 [2024-06-07 16:19:44.789577] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.277 [2024-06-07 16:19:44.789606] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.277 [2024-06-07 16:19:44.789637] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.277 [2024-06-07 16:19:44.789665] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.277 [2024-06-07 16:19:44.789700] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.277 [2024-06-07 16:19:44.789729] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.277 [2024-06-07 16:19:44.789758] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.277 [2024-06-07 16:19:44.789785] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.277 [2024-06-07 16:19:44.789814] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.277 [2024-06-07 16:19:44.789846] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.277 [2024-06-07 16:19:44.789875] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.278 [2024-06-07 16:19:44.789922] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.278 [2024-06-07 16:19:44.789950] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.278 [2024-06-07 16:19:44.789985] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.278 [2024-06-07 16:19:44.790013] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.278 [2024-06-07 16:19:44.790062] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.278 [2024-06-07 16:19:44.790089] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.278 [2024-06-07 16:19:44.790116] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.278 [2024-06-07 16:19:44.790143] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.278 [2024-06-07 16:19:44.790173] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.278 [2024-06-07 16:19:44.790202] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.278 [2024-06-07 16:19:44.790231] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.278 [2024-06-07 16:19:44.790258] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.278 [2024-06-07 16:19:44.790295] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.278 [2024-06-07 16:19:44.790321] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.278 [2024-06-07 16:19:44.790359] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.278 [2024-06-07 16:19:44.790390] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.278 [2024-06-07 16:19:44.790425] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.278 [2024-06-07 16:19:44.790453] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:11:18.278 [2024-06-07 16:19:44.790480] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.278 [2024-06-07 16:19:44.790508] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.278 [2024-06-07 16:19:44.790538] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.278 [2024-06-07 16:19:44.790567] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.278 [2024-06-07 16:19:44.790594] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.278 [2024-06-07 16:19:44.790629] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.278 [2024-06-07 16:19:44.790658] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.278 [2024-06-07 16:19:44.790684] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.278 [2024-06-07 16:19:44.790711] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.278 [2024-06-07 16:19:44.790740] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.278 [2024-06-07 16:19:44.790771] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.278 [2024-06-07 16:19:44.790797] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.278 [2024-06-07 16:19:44.790823] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.278 [2024-06-07 16:19:44.790851] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.278 [2024-06-07 16:19:44.790882] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.278 [2024-06-07 16:19:44.790911] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.278 [2024-06-07 16:19:44.790943] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.278 [2024-06-07 16:19:44.790967] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.278 [2024-06-07 16:19:44.790995] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.278 [2024-06-07 16:19:44.791025] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.278 [2024-06-07 16:19:44.791055] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.278 [2024-06-07 16:19:44.791084] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.278 [2024-06-07 16:19:44.791112] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.278 [2024-06-07 16:19:44.791140] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.278 [2024-06-07 16:19:44.791168] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.278 [2024-06-07 16:19:44.791195] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.278 [2024-06-07 16:19:44.791222] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.278 [2024-06-07 16:19:44.791626] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.278 [2024-06-07 16:19:44.791658] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:11:18.278 [2024-06-07 16:19:44.791691] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.278 [2024-06-07 16:19:44.791739] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.278 [2024-06-07 16:19:44.791767] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.278 [2024-06-07 16:19:44.791811] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.278 [2024-06-07 16:19:44.791840] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.278 [2024-06-07 16:19:44.791888] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.278 [2024-06-07 16:19:44.791915] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.278 [2024-06-07 16:19:44.791977] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.278 [2024-06-07 16:19:44.792005] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.278 [2024-06-07 16:19:44.792030] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.278 [2024-06-07 16:19:44.792059] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.278 [2024-06-07 16:19:44.792088] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.278 [2024-06-07 16:19:44.792117] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.278 [2024-06-07 16:19:44.792144] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.278 [2024-06-07 
16:19:44.792179] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.278 [2024-06-07 16:19:44.792207] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.278 [2024-06-07 16:19:44.792233] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.278 [2024-06-07 16:19:44.792259] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.278 [2024-06-07 16:19:44.792286] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.278 [2024-06-07 16:19:44.792314] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.278 [2024-06-07 16:19:44.792341] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.278 [2024-06-07 16:19:44.792366] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.278 [2024-06-07 16:19:44.792399] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.278 [2024-06-07 16:19:44.792437] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.278 [2024-06-07 16:19:44.792472] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.278 [2024-06-07 16:19:44.792506] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.278 [2024-06-07 16:19:44.792543] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.278 [2024-06-07 16:19:44.792570] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.278 [2024-06-07 16:19:44.792602] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.278 [2024-06-07 16:19:44.792634] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.278
[identical ctrlr_bdev.c:309 nvmf_bdev_ctrlr_read_cmd error repeated, timestamps 16:19:44.792663 through 16:19:44.803563]
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.282 [2024-06-07 16:19:44.803591] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.282 [2024-06-07 16:19:44.803619] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.282 [2024-06-07 16:19:44.803647] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.282 [2024-06-07 16:19:44.803675] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.282 [2024-06-07 16:19:44.803704] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.282 [2024-06-07 16:19:44.803734] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.282 [2024-06-07 16:19:44.803765] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.282 [2024-06-07 16:19:44.803795] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.282 [2024-06-07 16:19:44.803823] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.282 [2024-06-07 16:19:44.803851] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.282 [2024-06-07 16:19:44.803882] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.282 [2024-06-07 16:19:44.803910] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.282 [2024-06-07 16:19:44.803936] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.282 [2024-06-07 16:19:44.803961] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.282 
[2024-06-07 16:19:44.803991] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.282 [2024-06-07 16:19:44.804018] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.282 [2024-06-07 16:19:44.804044] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.282 [2024-06-07 16:19:44.804072] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.282 [2024-06-07 16:19:44.804318] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.282 [2024-06-07 16:19:44.804346] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.282 [2024-06-07 16:19:44.804376] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.282 [2024-06-07 16:19:44.804407] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.282 [2024-06-07 16:19:44.804434] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.282 [2024-06-07 16:19:44.804465] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.282 [2024-06-07 16:19:44.804496] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.282 [2024-06-07 16:19:44.804526] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.282 [2024-06-07 16:19:44.804552] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.282 [2024-06-07 16:19:44.804580] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.282 [2024-06-07 16:19:44.804607] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.282 [2024-06-07 16:19:44.804641] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.282 [2024-06-07 16:19:44.804676] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.282 [2024-06-07 16:19:44.804703] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.282 [2024-06-07 16:19:44.804733] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.282 [2024-06-07 16:19:44.804763] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.282 [2024-06-07 16:19:44.804792] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.282 [2024-06-07 16:19:44.804818] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.282 [2024-06-07 16:19:44.804844] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.282 [2024-06-07 16:19:44.804873] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.282 [2024-06-07 16:19:44.804904] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.282 [2024-06-07 16:19:44.804933] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.282 [2024-06-07 16:19:44.804964] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.282 [2024-06-07 16:19:44.804994] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.282 [2024-06-07 16:19:44.805022] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:11:18.282 [2024-06-07 16:19:44.805049] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.282 [2024-06-07 16:19:44.805078] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.282 [2024-06-07 16:19:44.805108] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.282 [2024-06-07 16:19:44.805137] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.282 [2024-06-07 16:19:44.805166] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.282 [2024-06-07 16:19:44.805197] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.282 [2024-06-07 16:19:44.805225] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.282 [2024-06-07 16:19:44.805255] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.282 [2024-06-07 16:19:44.805283] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.282 [2024-06-07 16:19:44.805311] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.282 [2024-06-07 16:19:44.805340] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.282 [2024-06-07 16:19:44.805371] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.282 [2024-06-07 16:19:44.805399] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.282 [2024-06-07 16:19:44.805428] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.282 [2024-06-07 16:19:44.805465] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.282 [2024-06-07 16:19:44.805497] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.282 [2024-06-07 16:19:44.805524] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.282 [2024-06-07 16:19:44.805553] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.282 [2024-06-07 16:19:44.805579] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.282 [2024-06-07 16:19:44.805612] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.282 [2024-06-07 16:19:44.805642] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.282 [2024-06-07 16:19:44.805982] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.282 [2024-06-07 16:19:44.806012] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.282 [2024-06-07 16:19:44.806040] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.282 [2024-06-07 16:19:44.806069] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.282 [2024-06-07 16:19:44.806098] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.282 [2024-06-07 16:19:44.806124] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.282 [2024-06-07 16:19:44.806150] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.282 [2024-06-07 16:19:44.806178] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:11:18.282 [2024-06-07 16:19:44.806207] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.282 [2024-06-07 16:19:44.806240] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.282 [2024-06-07 16:19:44.806271] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.282 [2024-06-07 16:19:44.806300] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.282 [2024-06-07 16:19:44.806329] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.282 [2024-06-07 16:19:44.806358] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.282 [2024-06-07 16:19:44.806388] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.282 [2024-06-07 16:19:44.806420] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.282 [2024-06-07 16:19:44.806460] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.283 [2024-06-07 16:19:44.806490] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.283 [2024-06-07 16:19:44.806535] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.283 [2024-06-07 16:19:44.806564] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.283 [2024-06-07 16:19:44.806607] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.283 [2024-06-07 16:19:44.806635] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.283 [2024-06-07 
16:19:44.806662] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.283 [2024-06-07 16:19:44.806685] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.283 [2024-06-07 16:19:44.806715] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.283 [2024-06-07 16:19:44.806743] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.283 [2024-06-07 16:19:44.806771] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.283 [2024-06-07 16:19:44.806799] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.283 [2024-06-07 16:19:44.806828] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.283 [2024-06-07 16:19:44.806855] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.283 [2024-06-07 16:19:44.806884] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.283 [2024-06-07 16:19:44.806907] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.283 [2024-06-07 16:19:44.806939] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.283 [2024-06-07 16:19:44.806967] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.283 [2024-06-07 16:19:44.806995] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.283 [2024-06-07 16:19:44.807025] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.283 [2024-06-07 16:19:44.807065] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.283 [2024-06-07 16:19:44.807096] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.283 [2024-06-07 16:19:44.807127] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.283 [2024-06-07 16:19:44.807154] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.283 [2024-06-07 16:19:44.807184] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.283 [2024-06-07 16:19:44.807210] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.283 [2024-06-07 16:19:44.807238] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.283 [2024-06-07 16:19:44.807266] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.283 [2024-06-07 16:19:44.807301] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.283 [2024-06-07 16:19:44.807330] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.283 [2024-06-07 16:19:44.807358] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.283 [2024-06-07 16:19:44.807384] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.283 [2024-06-07 16:19:44.807415] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.283 [2024-06-07 16:19:44.807447] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.283 [2024-06-07 16:19:44.807473] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.283 
[2024-06-07 16:19:44.807497] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.283 [2024-06-07 16:19:44.807528] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.283 [2024-06-07 16:19:44.807555] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.283 [2024-06-07 16:19:44.807585] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.283 [2024-06-07 16:19:44.807614] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.283 [2024-06-07 16:19:44.807645] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.283 [2024-06-07 16:19:44.807673] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.283 [2024-06-07 16:19:44.807701] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.283 [2024-06-07 16:19:44.807732] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.283 [2024-06-07 16:19:44.807762] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.283 [2024-06-07 16:19:44.807794] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.283 [2024-06-07 16:19:44.807825] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.283 [2024-06-07 16:19:44.807854] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.283 [2024-06-07 16:19:44.807988] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.283 [2024-06-07 16:19:44.808022] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.283 [2024-06-07 16:19:44.808050] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.283 [2024-06-07 16:19:44.808077] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.283 [2024-06-07 16:19:44.808105] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.283 [2024-06-07 16:19:44.808133] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.283 [2024-06-07 16:19:44.808161] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.283 [2024-06-07 16:19:44.808192] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.283 [2024-06-07 16:19:44.808221] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.283 [2024-06-07 16:19:44.808254] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.283 [2024-06-07 16:19:44.808282] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.283 [2024-06-07 16:19:44.808312] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.283 [2024-06-07 16:19:44.808338] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.283 [2024-06-07 16:19:44.808365] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.283 [2024-06-07 16:19:44.808391] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.283 [2024-06-07 16:19:44.808422] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:11:18.283 [2024-06-07 16:19:44.808451] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.283 [2024-06-07 16:19:44.808894] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.283 [2024-06-07 16:19:44.808921] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.283 [2024-06-07 16:19:44.808951] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.283 [2024-06-07 16:19:44.808980] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.283 [2024-06-07 16:19:44.809009] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.283 [2024-06-07 16:19:44.809037] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.283 [2024-06-07 16:19:44.809066] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.283 [2024-06-07 16:19:44.809094] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.283 [2024-06-07 16:19:44.809124] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.283 [2024-06-07 16:19:44.809151] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.283 [2024-06-07 16:19:44.809183] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.283 [2024-06-07 16:19:44.809210] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.283 [2024-06-07 16:19:44.809237] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.283 [2024-06-07 16:19:44.809265] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.283 [2024-06-07 16:19:44.809302] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.283 [2024-06-07 16:19:44.809335] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.283 [2024-06-07 16:19:44.809368] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.283 [2024-06-07 16:19:44.809394] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.283 [2024-06-07 16:19:44.809428] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.283 [2024-06-07 16:19:44.809456] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.283 [2024-06-07 16:19:44.809506] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.283 [2024-06-07 16:19:44.809534] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.283 [2024-06-07 16:19:44.809588] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.283 [2024-06-07 16:19:44.809619] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.283 [2024-06-07 16:19:44.809647] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.283 [2024-06-07 16:19:44.809678] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.283 [2024-06-07 16:19:44.809706] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.283 [2024-06-07 16:19:44.809733] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:11:18.283 [2024-06-07 16:19:44.809762] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.283 [2024-06-07 16:19:44.809790] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.283 [2024-06-07 16:19:44.809850] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.283 [2024-06-07 16:19:44.809879] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.283 [2024-06-07 16:19:44.809910] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.283 [2024-06-07 16:19:44.809935] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.283 [2024-06-07 16:19:44.809966] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.284 [2024-06-07 16:19:44.809994] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.284 [2024-06-07 16:19:44.810020] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.284 [2024-06-07 16:19:44.810049] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.284 [2024-06-07 16:19:44.810077] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.284 [2024-06-07 16:19:44.810108] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.284 [2024-06-07 16:19:44.810142] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.284 [2024-06-07 16:19:44.810172] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.284 [2024-06-07 
16:19:44.810201] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.284 (identical *ERROR* lines repeated; duplicates omitted) [2024-06-07 16:19:44.814049] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd:
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.285 (identical *ERROR* lines repeated; duplicates omitted) 00:11:18.285 16:19:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2968093 00:11:18.285 (identical *ERROR* lines repeated; duplicates omitted) 00:11:18.285 16:19:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:18.285 (identical *ERROR* lines repeated; duplicates omitted) [2024-06-07
16:19:44.815381] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.285 (identical *ERROR* lines repeated; duplicates omitted) [2024-06-07 16:19:44.820803] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512
> SGL length 1 00:11:18.287 [2024-06-07 16:19:44.820830] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.287 [2024-06-07 16:19:44.820869] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.287 [2024-06-07 16:19:44.820900] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.287 [2024-06-07 16:19:44.820956] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.287 [2024-06-07 16:19:44.820986] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.287 [2024-06-07 16:19:44.821037] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.287 [2024-06-07 16:19:44.821066] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.287 [2024-06-07 16:19:44.821099] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.287 [2024-06-07 16:19:44.821127] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.287 [2024-06-07 16:19:44.821162] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.287 [2024-06-07 16:19:44.821189] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.287 [2024-06-07 16:19:44.821220] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.287 [2024-06-07 16:19:44.821249] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.287 [2024-06-07 16:19:44.821305] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.287 [2024-06-07 16:19:44.821334] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.287 [2024-06-07 16:19:44.821367] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.287 [2024-06-07 16:19:44.821397] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.287 [2024-06-07 16:19:44.821427] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.287 [2024-06-07 16:19:44.821457] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.287 [2024-06-07 16:19:44.821484] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.287 [2024-06-07 16:19:44.821515] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.287 [2024-06-07 16:19:44.821648] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.287 [2024-06-07 16:19:44.821681] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.287 [2024-06-07 16:19:44.821716] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.287 [2024-06-07 16:19:44.821750] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.287 [2024-06-07 16:19:44.821776] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.287 [2024-06-07 16:19:44.821808] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.287 [2024-06-07 16:19:44.821840] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.287 [2024-06-07 16:19:44.821867] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:11:18.287 [2024-06-07 16:19:44.821897] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.287 [2024-06-07 16:19:44.821926] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.287 [2024-06-07 16:19:44.821953] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.287 [2024-06-07 16:19:44.821982] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.287 [2024-06-07 16:19:44.822009] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.287 [2024-06-07 16:19:44.822042] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.287 [2024-06-07 16:19:44.822070] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.287 [2024-06-07 16:19:44.822096] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.287 [2024-06-07 16:19:44.822124] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.287 [2024-06-07 16:19:44.822153] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.287 [2024-06-07 16:19:44.822181] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.288 [2024-06-07 16:19:44.822211] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.288 [2024-06-07 16:19:44.822245] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.288 [2024-06-07 16:19:44.822275] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.288 [2024-06-07 
16:19:44.822311] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.288 [2024-06-07 16:19:44.822342] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.288 [2024-06-07 16:19:44.822374] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.288 [2024-06-07 16:19:44.822404] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.288 [2024-06-07 16:19:44.822432] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.288 [2024-06-07 16:19:44.822460] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.288 [2024-06-07 16:19:44.822487] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.288 [2024-06-07 16:19:44.822515] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.288 [2024-06-07 16:19:44.822548] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.288 [2024-06-07 16:19:44.822575] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.288 [2024-06-07 16:19:44.822605] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.288 [2024-06-07 16:19:44.822636] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.288 [2024-06-07 16:19:44.823043] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.288 [2024-06-07 16:19:44.823074] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.288 [2024-06-07 16:19:44.823103] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.288 [2024-06-07 16:19:44.823131] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.288 [2024-06-07 16:19:44.823161] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.288 [2024-06-07 16:19:44.823215] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.288 [2024-06-07 16:19:44.823244] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.288 [2024-06-07 16:19:44.823270] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.288 [2024-06-07 16:19:44.823299] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.288 [2024-06-07 16:19:44.823332] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.288 [2024-06-07 16:19:44.823363] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.288 [2024-06-07 16:19:44.823394] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.288 [2024-06-07 16:19:44.823426] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.288 [2024-06-07 16:19:44.823458] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.288 [2024-06-07 16:19:44.823488] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.288 [2024-06-07 16:19:44.823519] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.288 [2024-06-07 16:19:44.823550] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.288 
[2024-06-07 16:19:44.823579] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.288 [2024-06-07 16:19:44.823606] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.288 [2024-06-07 16:19:44.823633] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.288 [2024-06-07 16:19:44.823662] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.288 [2024-06-07 16:19:44.823691] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.288 [2024-06-07 16:19:44.823720] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.288 [2024-06-07 16:19:44.823748] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.288 [2024-06-07 16:19:44.823777] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.288 [2024-06-07 16:19:44.823805] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.288 [2024-06-07 16:19:44.823833] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.288 [2024-06-07 16:19:44.823868] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.288 [2024-06-07 16:19:44.823898] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.288 [2024-06-07 16:19:44.823928] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.288 [2024-06-07 16:19:44.823954] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.288 [2024-06-07 16:19:44.823981] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.288 [2024-06-07 16:19:44.824008] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.288 [2024-06-07 16:19:44.824037] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.288 [2024-06-07 16:19:44.824064] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.288 [2024-06-07 16:19:44.824092] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.288 [2024-06-07 16:19:44.824121] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.288 [2024-06-07 16:19:44.824147] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.288 [2024-06-07 16:19:44.824177] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.288 [2024-06-07 16:19:44.824205] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.288 [2024-06-07 16:19:44.824228] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.288 [2024-06-07 16:19:44.824258] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.288 [2024-06-07 16:19:44.824287] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.288 [2024-06-07 16:19:44.824318] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.288 [2024-06-07 16:19:44.824347] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.288 [2024-06-07 16:19:44.824376] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:11:18.288 [2024-06-07 16:19:44.824407] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.288 [2024-06-07 16:19:44.824434] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.288 [2024-06-07 16:19:44.824461] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.288 [2024-06-07 16:19:44.824486] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.288 [2024-06-07 16:19:44.824514] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.288 [2024-06-07 16:19:44.824541] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.288 [2024-06-07 16:19:44.824570] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.288 [2024-06-07 16:19:44.824596] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.288 [2024-06-07 16:19:44.824629] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.288 [2024-06-07 16:19:44.824658] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.288 [2024-06-07 16:19:44.824688] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.288 [2024-06-07 16:19:44.824716] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.288 [2024-06-07 16:19:44.824744] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.288 [2024-06-07 16:19:44.824771] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.288 [2024-06-07 16:19:44.824807] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.288 [2024-06-07 16:19:44.824843] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.288 [2024-06-07 16:19:44.824874] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.288 [2024-06-07 16:19:44.825172] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.288 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:11:18.288 [2024-06-07 16:19:44.825205] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.288 [2024-06-07 16:19:44.825235] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.289 [2024-06-07 16:19:44.825267] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.289 [2024-06-07 16:19:44.825295] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.289 [2024-06-07 16:19:44.825324] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.289 [2024-06-07 16:19:44.825358] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.289 [2024-06-07 16:19:44.825386] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.289 [2024-06-07 16:19:44.825411] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.289 [2024-06-07 16:19:44.825443] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.289 [2024-06-07 16:19:44.825479] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.289 [2024-06-07 
16:19:44.825503] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.289 [2024-06-07 16:19:44.825533] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.289 [2024-06-07 16:19:44.825567] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.289 [2024-06-07 16:19:44.825590] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.289 [2024-06-07 16:19:44.825613] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.289 [2024-06-07 16:19:44.825636] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.289 [2024-06-07 16:19:44.825659] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.289 [2024-06-07 16:19:44.825683] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.289 [2024-06-07 16:19:44.825706] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.289 [2024-06-07 16:19:44.825729] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.289 [2024-06-07 16:19:44.825752] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.289 [2024-06-07 16:19:44.825774] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.289 [2024-06-07 16:19:44.825797] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.289 [2024-06-07 16:19:44.825820] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.289 [2024-06-07 16:19:44.825843] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.289 [2024-06-07 16:19:44.825865] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.289 [2024-06-07 16:19:44.825889] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.289 [2024-06-07 16:19:44.825912] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.289 [2024-06-07 16:19:44.825934] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.289 [2024-06-07 16:19:44.825958] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.289 [2024-06-07 16:19:44.825981] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.289 [2024-06-07 16:19:44.826009] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.289 [2024-06-07 16:19:44.826042] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.289 [2024-06-07 16:19:44.826073] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.289 [2024-06-07 16:19:44.826100] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.289 [2024-06-07 16:19:44.826133] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.289 [2024-06-07 16:19:44.826164] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.289 [2024-06-07 16:19:44.826193] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.289 [2024-06-07 16:19:44.826221] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.289 
[2024-06-07 16:19:44.826246] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.289 [2024-06-07 16:19:44.826274] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.289 [2024-06-07 16:19:44.826304] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.289 [2024-06-07 16:19:44.826337] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.289 [2024-06-07 16:19:44.826366] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.289 [2024-06-07 16:19:44.826397] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.289 [2024-06-07 16:19:44.826423] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.289 [2024-06-07 16:19:44.826446] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.289 [2024-06-07 16:19:44.826471] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.289 [2024-06-07 16:19:44.826494] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.289 [2024-06-07 16:19:44.826518] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.289 [2024-06-07 16:19:44.826541] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.289 [2024-06-07 16:19:44.826565] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.289 [2024-06-07 16:19:44.826588] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.289 [2024-06-07 16:19:44.826612] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.289 [2024-06-07 16:19:44.826635] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.289 [2024-06-07 16:19:44.826658] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.289 [2024-06-07 16:19:44.826682] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.289 [2024-06-07 16:19:44.826705] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.289 [2024-06-07 16:19:44.826728] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.289 [2024-06-07 16:19:44.826751] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.289 [2024-06-07 16:19:44.826774] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.289 [2024-06-07 16:19:44.826797] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.289 [2024-06-07 16:19:44.826819] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.289 [2024-06-07 16:19:44.827089] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.289 [2024-06-07 16:19:44.827113] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.289 [2024-06-07 16:19:44.827136] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.289 [2024-06-07 16:19:44.827160] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.289 [2024-06-07 16:19:44.827184] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:11:18.289 [2024-06-07 16:19:44.827207] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.293 [2024-06-07 16:19:44.837599] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512
> SGL length 1 00:11:18.293 [2024-06-07 16:19:44.837623] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.293 [2024-06-07 16:19:44.837652] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.293 [2024-06-07 16:19:44.837679] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.293 [2024-06-07 16:19:44.837713] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.293 [2024-06-07 16:19:44.837742] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.293 [2024-06-07 16:19:44.837778] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.293 [2024-06-07 16:19:44.837815] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.293 [2024-06-07 16:19:44.837848] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.293 [2024-06-07 16:19:44.837886] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.293 [2024-06-07 16:19:44.837920] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.293 [2024-06-07 16:19:44.838061] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.293 [2024-06-07 16:19:44.838091] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.293 [2024-06-07 16:19:44.838120] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.293 [2024-06-07 16:19:44.838151] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.293 [2024-06-07 16:19:44.838179] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.293 [2024-06-07 16:19:44.838206] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.293 [2024-06-07 16:19:44.838235] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.293 [2024-06-07 16:19:44.838264] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.293 [2024-06-07 16:19:44.838289] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.293 [2024-06-07 16:19:44.838322] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.293 [2024-06-07 16:19:44.838353] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.293 [2024-06-07 16:19:44.838382] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.293 [2024-06-07 16:19:44.838416] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.293 [2024-06-07 16:19:44.838453] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.293 [2024-06-07 16:19:44.838480] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.293 [2024-06-07 16:19:44.838508] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.293 [2024-06-07 16:19:44.838982] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.293 [2024-06-07 16:19:44.839011] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.293 [2024-06-07 16:19:44.839042] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:11:18.293 [2024-06-07 16:19:44.839072] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.293 [2024-06-07 16:19:44.839103] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.293 [2024-06-07 16:19:44.839129] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.293 [2024-06-07 16:19:44.839154] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.293 [2024-06-07 16:19:44.839177] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.293 [2024-06-07 16:19:44.839199] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.293 [2024-06-07 16:19:44.839221] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.293 [2024-06-07 16:19:44.839243] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.293 [2024-06-07 16:19:44.839266] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.293 [2024-06-07 16:19:44.839289] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.293 [2024-06-07 16:19:44.839313] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.293 [2024-06-07 16:19:44.839343] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.293 [2024-06-07 16:19:44.839373] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.293 [2024-06-07 16:19:44.839406] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.293 [2024-06-07 
16:19:44.839435] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.293 [2024-06-07 16:19:44.839466] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.293 [2024-06-07 16:19:44.839492] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.293 [2024-06-07 16:19:44.839523] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.293 [2024-06-07 16:19:44.839545] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.293 [2024-06-07 16:19:44.839568] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.293 [2024-06-07 16:19:44.839591] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.293 [2024-06-07 16:19:44.839614] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.293 [2024-06-07 16:19:44.839637] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.293 [2024-06-07 16:19:44.839660] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.293 [2024-06-07 16:19:44.839683] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.293 [2024-06-07 16:19:44.839707] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.293 [2024-06-07 16:19:44.839730] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.293 [2024-06-07 16:19:44.839754] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.293 [2024-06-07 16:19:44.839778] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.293 [2024-06-07 16:19:44.839801] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.293 [2024-06-07 16:19:44.839824] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.293 [2024-06-07 16:19:44.839847] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.293 [2024-06-07 16:19:44.839870] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.293 [2024-06-07 16:19:44.839893] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.293 [2024-06-07 16:19:44.839917] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.294 [2024-06-07 16:19:44.839942] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.294 [2024-06-07 16:19:44.839965] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.294 [2024-06-07 16:19:44.839988] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.294 [2024-06-07 16:19:44.840012] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.294 [2024-06-07 16:19:44.840035] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.294 [2024-06-07 16:19:44.840058] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.294 [2024-06-07 16:19:44.840081] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.294 [2024-06-07 16:19:44.840104] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.294 
[2024-06-07 16:19:44.840128] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.294 [2024-06-07 16:19:44.840150] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.294 [2024-06-07 16:19:44.840173] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.294 [2024-06-07 16:19:44.840196] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.294 [2024-06-07 16:19:44.840219] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.294 [2024-06-07 16:19:44.840243] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.294 [2024-06-07 16:19:44.840266] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.294 [2024-06-07 16:19:44.840290] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.294 [2024-06-07 16:19:44.840314] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.294 [2024-06-07 16:19:44.840337] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.294 [2024-06-07 16:19:44.840360] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.294 [2024-06-07 16:19:44.840384] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.294 [2024-06-07 16:19:44.840411] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.294 [2024-06-07 16:19:44.840435] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.294 [2024-06-07 16:19:44.840458] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.294 [2024-06-07 16:19:44.840481] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.294 [2024-06-07 16:19:44.840506] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.294 [2024-06-07 16:19:44.840529] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.294 [2024-06-07 16:19:44.840983] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.294 [2024-06-07 16:19:44.841021] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.294 [2024-06-07 16:19:44.841053] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.294 [2024-06-07 16:19:44.841084] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.294 [2024-06-07 16:19:44.841112] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.294 [2024-06-07 16:19:44.841144] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.294 [2024-06-07 16:19:44.841171] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.294 [2024-06-07 16:19:44.841201] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.294 [2024-06-07 16:19:44.841229] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.294 [2024-06-07 16:19:44.841258] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.294 [2024-06-07 16:19:44.841287] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:11:18.294 [2024-06-07 16:19:44.841315] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.294 [2024-06-07 16:19:44.841345] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.294 [2024-06-07 16:19:44.841377] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.294 [2024-06-07 16:19:44.841408] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.294 [2024-06-07 16:19:44.841437] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.294 [2024-06-07 16:19:44.841464] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.294 [2024-06-07 16:19:44.841491] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.294 [2024-06-07 16:19:44.841521] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.294 [2024-06-07 16:19:44.841552] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.294 [2024-06-07 16:19:44.841579] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.294 [2024-06-07 16:19:44.841607] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.294 [2024-06-07 16:19:44.841634] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.294 [2024-06-07 16:19:44.841661] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.294 [2024-06-07 16:19:44.841687] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.294 [2024-06-07 16:19:44.841716] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.294 [2024-06-07 16:19:44.841745] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.294 [2024-06-07 16:19:44.841772] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.294 [2024-06-07 16:19:44.841798] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.294 [2024-06-07 16:19:44.841827] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.294 [2024-06-07 16:19:44.841860] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.294 [2024-06-07 16:19:44.841895] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.294 [2024-06-07 16:19:44.841928] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.294 [2024-06-07 16:19:44.841963] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.294 [2024-06-07 16:19:44.841998] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.294 [2024-06-07 16:19:44.842032] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.294 [2024-06-07 16:19:44.842071] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.294 [2024-06-07 16:19:44.842097] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.294 [2024-06-07 16:19:44.842123] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.294 [2024-06-07 16:19:44.842152] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:11:18.294 [2024-06-07 16:19:44.842181] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.294 [2024-06-07 16:19:44.842207] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.294 [2024-06-07 16:19:44.842234] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.294 [2024-06-07 16:19:44.842261] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.294 [2024-06-07 16:19:44.842288] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.294 [2024-06-07 16:19:44.842314] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.294 [2024-06-07 16:19:44.842340] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.294 [2024-06-07 16:19:44.842670] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.294 [2024-06-07 16:19:44.842700] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.294 [2024-06-07 16:19:44.842730] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.294 [2024-06-07 16:19:44.842765] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.294 [2024-06-07 16:19:44.842794] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.294 [2024-06-07 16:19:44.842822] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.294 [2024-06-07 16:19:44.842848] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.294 [2024-06-07 
16:19:44.842879] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.294 [2024-06-07 16:19:44.842904] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.294 [2024-06-07 16:19:44.842935] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.294 [2024-06-07 16:19:44.842964] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.294 [2024-06-07 16:19:44.842991] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.294 [2024-06-07 16:19:44.843021] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.294 [2024-06-07 16:19:44.843050] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.294 [2024-06-07 16:19:44.843080] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.294 [2024-06-07 16:19:44.843107] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.294 [2024-06-07 16:19:44.843138] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.294 [2024-06-07 16:19:44.843169] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.294 [2024-06-07 16:19:44.843199] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.294 [2024-06-07 16:19:44.843239] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.294 [2024-06-07 16:19:44.843268] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.294 [2024-06-07 16:19:44.843308] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.294 [2024-06-07 16:19:44.843337] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.295 [2024-06-07 16:19:44.843369] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.295 [2024-06-07 16:19:44.843397] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.295 [2024-06-07 16:19:44.843431] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.295 [2024-06-07 16:19:44.843462] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.295 [2024-06-07 16:19:44.843488] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.295 [2024-06-07 16:19:44.843522] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.295 [2024-06-07 16:19:44.843550] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.295 [2024-06-07 16:19:44.843581] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.295 [2024-06-07 16:19:44.843608] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.295 [2024-06-07 16:19:44.843635] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.295 [2024-06-07 16:19:44.843662] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.295 [2024-06-07 16:19:44.843691] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.295 [2024-06-07 16:19:44.843720] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.295 
[2024-06-07 16:19:44.843751] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.295 [2024-06-07 16:19:44.843780] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.295 [2024-06-07 16:19:44.843806] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.295 [2024-06-07 16:19:44.843833] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.295 [2024-06-07 16:19:44.843862] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.295 [2024-06-07 16:19:44.843891] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.295 [2024-06-07 16:19:44.843920] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.295 [2024-06-07 16:19:44.843947] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.295 [2024-06-07 16:19:44.843984] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.295 [2024-06-07 16:19:44.844016] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.295 [2024-06-07 16:19:44.844048] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.295 [2024-06-07 16:19:44.844076] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.295 [2024-06-07 16:19:44.844100] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.295 [2024-06-07 16:19:44.844132] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.295 [2024-06-07 16:19:44.844161] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.295 [2024-06-07 16:19:44.844189] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[... the same ctrlr_bdev.c:309 "Read NLB 1 * block size 512 > SGL length 1" error repeats several hundred times, timestamps 16:19:44.844214 through 16:19:44.854611, console time 00:11:18.295-00:11:18.298 ...]
00:11:18.298 [2024-06-07 16:19:44.854668] ctrlr_bdev.c:
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.298 [2024-06-07 16:19:44.854698] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.298 [2024-06-07 16:19:44.854730] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.298 [2024-06-07 16:19:44.854762] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.298 [2024-06-07 16:19:44.854795] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.298 [2024-06-07 16:19:44.854824] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.298 [2024-06-07 16:19:44.854851] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.298 [2024-06-07 16:19:44.854880] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.298 [2024-06-07 16:19:44.854906] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.298 [2024-06-07 16:19:44.854931] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.298 [2024-06-07 16:19:44.854958] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.298 [2024-06-07 16:19:44.854985] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.298 [2024-06-07 16:19:44.855011] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.298 [2024-06-07 16:19:44.855043] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.298 [2024-06-07 16:19:44.855069] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:11:18.298 [2024-06-07 16:19:44.855097] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.298 [2024-06-07 16:19:44.855123] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.298 [2024-06-07 16:19:44.855151] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.298 [2024-06-07 16:19:44.855177] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.298 [2024-06-07 16:19:44.855201] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.298 [2024-06-07 16:19:44.855231] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.298 [2024-06-07 16:19:44.855258] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.298 [2024-06-07 16:19:44.855284] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.298 [2024-06-07 16:19:44.855310] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.298 [2024-06-07 16:19:44.855337] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.298 [2024-06-07 16:19:44.855365] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.298 [2024-06-07 16:19:44.855393] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.298 [2024-06-07 16:19:44.855424] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.298 [2024-06-07 16:19:44.855454] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.298 [2024-06-07 16:19:44.855790] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.298 [2024-06-07 16:19:44.855822] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.298 [2024-06-07 16:19:44.855853] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.298 [2024-06-07 16:19:44.855880] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.298 [2024-06-07 16:19:44.855912] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.298 [2024-06-07 16:19:44.855940] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.298 [2024-06-07 16:19:44.855969] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.298 [2024-06-07 16:19:44.855999] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.298 [2024-06-07 16:19:44.856024] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.298 [2024-06-07 16:19:44.856053] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.298 [2024-06-07 16:19:44.856082] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.298 [2024-06-07 16:19:44.856111] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.298 [2024-06-07 16:19:44.856145] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.298 [2024-06-07 16:19:44.856172] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.298 [2024-06-07 16:19:44.856203] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:11:18.298 [2024-06-07 16:19:44.856231] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.298 [2024-06-07 16:19:44.856264] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.298 [2024-06-07 16:19:44.856293] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.299 [2024-06-07 16:19:44.856328] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.299 [2024-06-07 16:19:44.856357] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.299 [2024-06-07 16:19:44.856390] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.299 [2024-06-07 16:19:44.856421] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.299 [2024-06-07 16:19:44.856451] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.299 [2024-06-07 16:19:44.856478] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.299 [2024-06-07 16:19:44.856510] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.299 [2024-06-07 16:19:44.856538] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.299 [2024-06-07 16:19:44.856565] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.299 [2024-06-07 16:19:44.856594] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.299 [2024-06-07 16:19:44.856622] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.299 [2024-06-07 
16:19:44.856650] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.299 [2024-06-07 16:19:44.856678] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.299 [2024-06-07 16:19:44.856707] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.299 [2024-06-07 16:19:44.856735] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.299 [2024-06-07 16:19:44.856764] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.299 [2024-06-07 16:19:44.856795] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.299 [2024-06-07 16:19:44.856825] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.299 [2024-06-07 16:19:44.856855] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.299 [2024-06-07 16:19:44.856884] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.299 [2024-06-07 16:19:44.856912] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.299 [2024-06-07 16:19:44.856944] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.299 [2024-06-07 16:19:44.856979] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.299 [2024-06-07 16:19:44.857009] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.299 [2024-06-07 16:19:44.857041] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.299 [2024-06-07 16:19:44.857078] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.299 [2024-06-07 16:19:44.857108] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.299 [2024-06-07 16:19:44.857135] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.299 [2024-06-07 16:19:44.857160] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.299 [2024-06-07 16:19:44.857187] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.299 [2024-06-07 16:19:44.857210] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.299 [2024-06-07 16:19:44.857237] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.299 [2024-06-07 16:19:44.857273] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.299 [2024-06-07 16:19:44.857314] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.299 [2024-06-07 16:19:44.857352] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.299 [2024-06-07 16:19:44.857387] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.299 [2024-06-07 16:19:44.857418] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.299 [2024-06-07 16:19:44.857451] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.299 [2024-06-07 16:19:44.857478] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.299 [2024-06-07 16:19:44.857505] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.299 
[2024-06-07 16:19:44.857535] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.299 [2024-06-07 16:19:44.857568] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.299 [2024-06-07 16:19:44.857594] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.299 [2024-06-07 16:19:44.857624] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.299 [2024-06-07 16:19:44.857657] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.299 [2024-06-07 16:19:44.857682] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.299 [2024-06-07 16:19:44.857825] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.299 [2024-06-07 16:19:44.857854] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.299 [2024-06-07 16:19:44.857882] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.299 [2024-06-07 16:19:44.857908] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.299 [2024-06-07 16:19:44.857937] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.299 [2024-06-07 16:19:44.857970] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.299 [2024-06-07 16:19:44.858000] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.299 [2024-06-07 16:19:44.858032] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.299 [2024-06-07 16:19:44.858061] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.299 [2024-06-07 16:19:44.858094] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.299 [2024-06-07 16:19:44.858123] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.299 [2024-06-07 16:19:44.858149] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.299 [2024-06-07 16:19:44.858172] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.299 [2024-06-07 16:19:44.858200] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.299 [2024-06-07 16:19:44.858231] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.299 [2024-06-07 16:19:44.858255] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.299 [2024-06-07 16:19:44.858606] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.299 [2024-06-07 16:19:44.858632] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.299 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:11:18.299 [2024-06-07 16:19:44.858657] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.299 [2024-06-07 16:19:44.858686] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.299 [2024-06-07 16:19:44.858711] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.299 [2024-06-07 16:19:44.858737] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.299 [2024-06-07 
16:19:44.858761] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.299 [2024-06-07 16:19:44.858785] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.299 [2024-06-07 16:19:44.858809] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.299 [2024-06-07 16:19:44.858832] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.299 [2024-06-07 16:19:44.858855] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.299 [2024-06-07 16:19:44.858878] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.299 [2024-06-07 16:19:44.858901] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.299 [2024-06-07 16:19:44.858924] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.299 [2024-06-07 16:19:44.858947] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.299 [2024-06-07 16:19:44.858974] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.299 [2024-06-07 16:19:44.859003] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.299 [2024-06-07 16:19:44.859033] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.299 [2024-06-07 16:19:44.859065] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.299 [2024-06-07 16:19:44.859094] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.299 [2024-06-07 16:19:44.859125] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.299 [2024-06-07 16:19:44.859156] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.299 [2024-06-07 16:19:44.859184] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.299 [2024-06-07 16:19:44.859211] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.299 [2024-06-07 16:19:44.859238] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.299 [2024-06-07 16:19:44.859265] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.299 [2024-06-07 16:19:44.859289] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.299 [2024-06-07 16:19:44.859312] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.299 [2024-06-07 16:19:44.859334] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.299 [2024-06-07 16:19:44.859357] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.299 [2024-06-07 16:19:44.859380] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.299 [2024-06-07 16:19:44.859405] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.299 [2024-06-07 16:19:44.859429] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.299 [2024-06-07 16:19:44.859452] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.299 [2024-06-07 16:19:44.859476] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.299 
[2024-06-07 16:19:44.859499] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.300 [2024-06-07 16:19:44.859522] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.300 [2024-06-07 16:19:44.859546] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.300 [2024-06-07 16:19:44.859569] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.300 [2024-06-07 16:19:44.859592] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.300 [2024-06-07 16:19:44.859615] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.300 [2024-06-07 16:19:44.859638] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.300 [2024-06-07 16:19:44.859661] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.300 [2024-06-07 16:19:44.859684] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.300 [2024-06-07 16:19:44.859706] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.300 [2024-06-07 16:19:44.859729] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.300 [2024-06-07 16:19:44.859753] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.300 [2024-06-07 16:19:44.859776] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.300 [2024-06-07 16:19:44.859799] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.300 [2024-06-07 16:19:44.859822] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.300 [2024-06-07 16:19:44.859845] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.300 [2024-06-07 16:19:44.859869] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.300 [2024-06-07 16:19:44.859893] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.300 [2024-06-07 16:19:44.859916] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.300 [2024-06-07 16:19:44.859942] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.300 [2024-06-07 16:19:44.859965] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.300 [2024-06-07 16:19:44.859988] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.300 [2024-06-07 16:19:44.860011] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.300 [2024-06-07 16:19:44.860034] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.300 [2024-06-07 16:19:44.860058] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.300 [2024-06-07 16:19:44.860083] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.300 [2024-06-07 16:19:44.860106] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.300 [2024-06-07 16:19:44.860130] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.300 [2024-06-07 16:19:44.860153] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:11:18.300 [2024-06-07 16:19:44.860561] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.300 [2024-06-07 16:19:44.860591] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.300 [2024-06-07 16:19:44.860617] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.300 [2024-06-07 16:19:44.860645] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.300 [2024-06-07 16:19:44.860672] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.300 [2024-06-07 16:19:44.860701] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.300 [2024-06-07 16:19:44.860733] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.300 [2024-06-07 16:19:44.860760] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.300 [2024-06-07 16:19:44.860785] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.300 [2024-06-07 16:19:44.860814] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.300 [2024-06-07 16:19:44.860842] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.300 [2024-06-07 16:19:44.860870] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.300 [2024-06-07 16:19:44.860901] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.300 [2024-06-07 16:19:44.860927] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.300 [2024-06-07 16:19:44.860954] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.300 [2024-06-07 16:19:44.860982] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:11:18.300-00:11:18.303 [2024-06-07 16:19:44.861010 - 16:19:44.870742] (previous message repeated)
> SGL length 1 00:11:18.303 [2024-06-07 16:19:44.870770] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.303 [2024-06-07 16:19:44.870800] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.303 [2024-06-07 16:19:44.870825] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.303 [2024-06-07 16:19:44.870853] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.303 [2024-06-07 16:19:44.870887] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.303 [2024-06-07 16:19:44.870916] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.303 [2024-06-07 16:19:44.870945] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.303 [2024-06-07 16:19:44.870971] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.303 [2024-06-07 16:19:44.870996] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.303 [2024-06-07 16:19:44.871027] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.303 [2024-06-07 16:19:44.871058] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.303 [2024-06-07 16:19:44.871085] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.303 [2024-06-07 16:19:44.871115] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.303 [2024-06-07 16:19:44.871141] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.303 [2024-06-07 16:19:44.871164] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.303 [2024-06-07 16:19:44.871584] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.303 [2024-06-07 16:19:44.871615] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.303 [2024-06-07 16:19:44.871641] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.303 [2024-06-07 16:19:44.871669] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.303 [2024-06-07 16:19:44.871696] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.303 [2024-06-07 16:19:44.871719] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.303 [2024-06-07 16:19:44.871743] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.303 [2024-06-07 16:19:44.871767] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.303 [2024-06-07 16:19:44.871790] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.303 [2024-06-07 16:19:44.871813] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.303 [2024-06-07 16:19:44.871836] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.303 [2024-06-07 16:19:44.871861] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.303 [2024-06-07 16:19:44.871885] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.303 [2024-06-07 16:19:44.871909] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:11:18.303 [2024-06-07 16:19:44.871932] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.303 [2024-06-07 16:19:44.871956] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.303 [2024-06-07 16:19:44.871979] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.303 [2024-06-07 16:19:44.872003] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.303 [2024-06-07 16:19:44.872025] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.303 [2024-06-07 16:19:44.872049] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.303 [2024-06-07 16:19:44.872072] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.303 [2024-06-07 16:19:44.872100] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.303 [2024-06-07 16:19:44.872132] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.303 [2024-06-07 16:19:44.872160] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.303 [2024-06-07 16:19:44.872189] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.303 [2024-06-07 16:19:44.872219] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.303 [2024-06-07 16:19:44.872248] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.303 [2024-06-07 16:19:44.872278] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.303 [2024-06-07 
16:19:44.872310] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.303 [2024-06-07 16:19:44.872338] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.303 [2024-06-07 16:19:44.872361] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.303 [2024-06-07 16:19:44.872384] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.303 [2024-06-07 16:19:44.872412] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.303 [2024-06-07 16:19:44.872437] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.304 [2024-06-07 16:19:44.872461] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.304 [2024-06-07 16:19:44.872485] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.304 [2024-06-07 16:19:44.872509] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.304 [2024-06-07 16:19:44.872532] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.304 [2024-06-07 16:19:44.872555] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.304 [2024-06-07 16:19:44.872579] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.304 [2024-06-07 16:19:44.872603] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.304 [2024-06-07 16:19:44.872626] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.304 [2024-06-07 16:19:44.872650] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.304 [2024-06-07 16:19:44.872672] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.304 [2024-06-07 16:19:44.872695] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.304 [2024-06-07 16:19:44.872718] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.304 [2024-06-07 16:19:44.872741] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.304 [2024-06-07 16:19:44.872764] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.304 [2024-06-07 16:19:44.872788] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.304 [2024-06-07 16:19:44.872811] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.304 [2024-06-07 16:19:44.872834] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.304 [2024-06-07 16:19:44.872857] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.304 [2024-06-07 16:19:44.872880] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.304 [2024-06-07 16:19:44.872903] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.304 [2024-06-07 16:19:44.872927] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.304 [2024-06-07 16:19:44.872950] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.304 [2024-06-07 16:19:44.872973] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.304 
[2024-06-07 16:19:44.872996] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.304 [2024-06-07 16:19:44.873019] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.304 [2024-06-07 16:19:44.873043] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.304 [2024-06-07 16:19:44.873066] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.304 [2024-06-07 16:19:44.873088] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.304 [2024-06-07 16:19:44.873111] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.304 [2024-06-07 16:19:44.873135] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.304 [2024-06-07 16:19:44.873664] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.304 [2024-06-07 16:19:44.873694] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.304 [2024-06-07 16:19:44.873724] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.304 [2024-06-07 16:19:44.873753] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.304 [2024-06-07 16:19:44.873797] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.304 [2024-06-07 16:19:44.873824] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.304 [2024-06-07 16:19:44.873854] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.304 [2024-06-07 16:19:44.873881] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.304 [2024-06-07 16:19:44.873915] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.304 [2024-06-07 16:19:44.873944] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.304 [2024-06-07 16:19:44.873976] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.304 [2024-06-07 16:19:44.874004] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.304 [2024-06-07 16:19:44.874046] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.304 [2024-06-07 16:19:44.874075] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.304 [2024-06-07 16:19:44.874106] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.304 [2024-06-07 16:19:44.874133] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.304 [2024-06-07 16:19:44.874165] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.304 [2024-06-07 16:19:44.874191] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.304 [2024-06-07 16:19:44.874238] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.304 [2024-06-07 16:19:44.874266] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.304 [2024-06-07 16:19:44.874325] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.304 [2024-06-07 16:19:44.874355] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:11:18.304 [2024-06-07 16:19:44.874413] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.304 [2024-06-07 16:19:44.874442] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.304 [2024-06-07 16:19:44.874470] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.304 [2024-06-07 16:19:44.874498] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.304 [2024-06-07 16:19:44.874538] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.304 [2024-06-07 16:19:44.874563] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.304 [2024-06-07 16:19:44.874591] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.304 [2024-06-07 16:19:44.874620] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.304 [2024-06-07 16:19:44.874647] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.304 [2024-06-07 16:19:44.874677] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.304 [2024-06-07 16:19:44.874705] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.304 [2024-06-07 16:19:44.874732] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.304 [2024-06-07 16:19:44.874763] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.304 [2024-06-07 16:19:44.874793] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.304 [2024-06-07 16:19:44.874820] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.304 [2024-06-07 16:19:44.874848] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.304 [2024-06-07 16:19:44.874878] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.304 [2024-06-07 16:19:44.874908] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.304 [2024-06-07 16:19:44.874932] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.304 [2024-06-07 16:19:44.874964] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.304 [2024-06-07 16:19:44.874991] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.304 [2024-06-07 16:19:44.875018] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.304 [2024-06-07 16:19:44.875043] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.304 [2024-06-07 16:19:44.875074] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.304 [2024-06-07 16:19:44.875102] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.304 [2024-06-07 16:19:44.875436] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.304 [2024-06-07 16:19:44.875467] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.304 [2024-06-07 16:19:44.875496] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.304 [2024-06-07 16:19:44.875526] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:11:18.304 [2024-06-07 16:19:44.875554] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.304 [2024-06-07 16:19:44.875587] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.304 [2024-06-07 16:19:44.875616] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.304 [2024-06-07 16:19:44.875646] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.304 [2024-06-07 16:19:44.875677] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.304 [2024-06-07 16:19:44.875729] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.304 [2024-06-07 16:19:44.875759] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.304 [2024-06-07 16:19:44.875808] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.304 [2024-06-07 16:19:44.875836] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.304 [2024-06-07 16:19:44.875863] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.304 [2024-06-07 16:19:44.875891] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.304 [2024-06-07 16:19:44.875924] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.304 [2024-06-07 16:19:44.875952] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.304 [2024-06-07 16:19:44.876007] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.304 [2024-06-07 
16:19:44.876035] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.304 [2024-06-07 16:19:44.876085] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.304 [2024-06-07 16:19:44.876115] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.304 [2024-06-07 16:19:44.876143] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.305 [2024-06-07 16:19:44.876171] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.305 [2024-06-07 16:19:44.876202] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.305 [2024-06-07 16:19:44.876232] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.305 [2024-06-07 16:19:44.876260] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.305 [2024-06-07 16:19:44.876290] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.305 [2024-06-07 16:19:44.876321] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.305 [2024-06-07 16:19:44.876350] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.305 [2024-06-07 16:19:44.876381] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.305 [2024-06-07 16:19:44.876414] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.305 [2024-06-07 16:19:44.876445] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.305 [2024-06-07 16:19:44.876473] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.305 [2024-06-07 16:19:44.876500] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.305 [2024-06-07 16:19:44.876528] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.305 [2024-06-07 16:19:44.876559] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.305 [2024-06-07 16:19:44.876587] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.305 [2024-06-07 16:19:44.876620] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.305 [2024-06-07 16:19:44.876648] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.305 [2024-06-07 16:19:44.876695] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.305 [2024-06-07 16:19:44.876724] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.305 [2024-06-07 16:19:44.876754] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.305 [2024-06-07 16:19:44.876781] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.305 [2024-06-07 16:19:44.876809] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.305 [2024-06-07 16:19:44.876833] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.305 [2024-06-07 16:19:44.876861] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.305 [2024-06-07 16:19:44.876888] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.305 
[2024-06-07 16:19:44.876916] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.305 [2024-06-07 16:19:44.876944] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.305 [2024-06-07 16:19:44.876971] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.305 [2024-06-07 16:19:44.876998] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.305 [2024-06-07 16:19:44.877021] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.305 [2024-06-07 16:19:44.877051] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.305 [2024-06-07 16:19:44.877087] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.305 [2024-06-07 16:19:44.877115] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.305 [2024-06-07 16:19:44.877144] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.305 [2024-06-07 16:19:44.877170] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.305 [2024-06-07 16:19:44.877197] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.305 [2024-06-07 16:19:44.877226] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.305 [2024-06-07 16:19:44.877257] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.305 [2024-06-07 16:19:44.877283] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.305 [2024-06-07 16:19:44.877309] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.305 [2024-06-07 16:19:44.877339] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.305 [2024-06-07 16:19:44.877362] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.305 [2024-06-07 16:19:44.877506] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.305 [2024-06-07 16:19:44.877535] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.305 [2024-06-07 16:19:44.877563] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.305 [2024-06-07 16:19:44.877591] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.305 [2024-06-07 16:19:44.877623] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.305 [2024-06-07 16:19:44.877649] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.305 [2024-06-07 16:19:44.877683] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.305 [2024-06-07 16:19:44.877711] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.305 [2024-06-07 16:19:44.877733] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.305 [2024-06-07 16:19:44.877765] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.305 [2024-06-07 16:19:44.877794] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.305 [2024-06-07 16:19:44.877817] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1
> SGL length 1 00:11:18.308 [2024-06-07 16:19:44.888196] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.308 [2024-06-07 16:19:44.888227] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.308 [2024-06-07 16:19:44.888253] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.308 [2024-06-07 16:19:44.888571] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.308 [2024-06-07 16:19:44.888602] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.308 [2024-06-07 16:19:44.888628] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.308 [2024-06-07 16:19:44.888658] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.308 [2024-06-07 16:19:44.888688] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.308 [2024-06-07 16:19:44.888720] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.308 [2024-06-07 16:19:44.888748] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.308 [2024-06-07 16:19:44.888777] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.308 [2024-06-07 16:19:44.888804] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.308 [2024-06-07 16:19:44.888836] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.308 [2024-06-07 16:19:44.888867] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.308 [2024-06-07 16:19:44.888896] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.308 [2024-06-07 16:19:44.888927] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.308 [2024-06-07 16:19:44.888956] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.308 [2024-06-07 16:19:44.888984] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.308 [2024-06-07 16:19:44.889037] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.308 [2024-06-07 16:19:44.889064] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.308 [2024-06-07 16:19:44.889125] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.308 [2024-06-07 16:19:44.889151] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.308 [2024-06-07 16:19:44.889189] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.308 [2024-06-07 16:19:44.889217] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.308 [2024-06-07 16:19:44.889248] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.308 [2024-06-07 16:19:44.889275] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.308 [2024-06-07 16:19:44.889307] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.308 [2024-06-07 16:19:44.889336] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.308 [2024-06-07 16:19:44.889393] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:11:18.308 [2024-06-07 16:19:44.889424] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.308 [2024-06-07 16:19:44.889459] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.308 [2024-06-07 16:19:44.889487] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.308 [2024-06-07 16:19:44.889521] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.308 [2024-06-07 16:19:44.889548] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.308 [2024-06-07 16:19:44.889582] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.308 [2024-06-07 16:19:44.889611] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.309 [2024-06-07 16:19:44.889640] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.309 [2024-06-07 16:19:44.889667] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.309 [2024-06-07 16:19:44.889705] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.309 [2024-06-07 16:19:44.889733] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.309 [2024-06-07 16:19:44.889762] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.309 [2024-06-07 16:19:44.889791] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.309 [2024-06-07 16:19:44.889821] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.309 [2024-06-07 
16:19:44.889849] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.309 [2024-06-07 16:19:44.889879] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.309 [2024-06-07 16:19:44.889910] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.309 [2024-06-07 16:19:44.889938] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.309 [2024-06-07 16:19:44.889966] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.309 [2024-06-07 16:19:44.889995] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.309 [2024-06-07 16:19:44.890024] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.309 [2024-06-07 16:19:44.890050] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.309 [2024-06-07 16:19:44.890076] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.309 [2024-06-07 16:19:44.890104] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.309 [2024-06-07 16:19:44.890130] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.309 [2024-06-07 16:19:44.890156] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.309 [2024-06-07 16:19:44.890180] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.309 [2024-06-07 16:19:44.890212] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.309 [2024-06-07 16:19:44.890244] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.309 [2024-06-07 16:19:44.890278] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.309 [2024-06-07 16:19:44.890316] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.309 [2024-06-07 16:19:44.890351] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.309 [2024-06-07 16:19:44.890388] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.309 [2024-06-07 16:19:44.890428] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.309 [2024-06-07 16:19:44.890456] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.309 [2024-06-07 16:19:44.890486] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.309 [2024-06-07 16:19:44.890523] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.309 [2024-06-07 16:19:44.890555] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.309 [2024-06-07 16:19:44.890754] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.309 [2024-06-07 16:19:44.890782] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.309 [2024-06-07 16:19:44.890806] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.309 [2024-06-07 16:19:44.890834] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.309 [2024-06-07 16:19:44.890861] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.309 
[2024-06-07 16:19:44.890890] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.309 [2024-06-07 16:19:44.890919] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.309 [2024-06-07 16:19:44.890944] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.309 [2024-06-07 16:19:44.890972] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.309 [2024-06-07 16:19:44.891004] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.309 [2024-06-07 16:19:44.891029] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.309 [2024-06-07 16:19:44.891056] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.309 [2024-06-07 16:19:44.891085] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.309 [2024-06-07 16:19:44.891117] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.309 [2024-06-07 16:19:44.891140] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.309 [2024-06-07 16:19:44.891170] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.309 [2024-06-07 16:19:44.891563] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.309 [2024-06-07 16:19:44.891588] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.309 [2024-06-07 16:19:44.891612] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.309 [2024-06-07 16:19:44.891636] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.309 [2024-06-07 16:19:44.891660] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.309 [2024-06-07 16:19:44.891682] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.309 [2024-06-07 16:19:44.891706] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.309 [2024-06-07 16:19:44.891729] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.309 [2024-06-07 16:19:44.891751] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.309 [2024-06-07 16:19:44.891776] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.309 [2024-06-07 16:19:44.891798] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.309 [2024-06-07 16:19:44.891821] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.309 [2024-06-07 16:19:44.891845] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.309 [2024-06-07 16:19:44.891868] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.309 [2024-06-07 16:19:44.891891] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.309 [2024-06-07 16:19:44.891914] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.309 [2024-06-07 16:19:44.891937] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.309 [2024-06-07 16:19:44.891963] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:11:18.309 [2024-06-07 16:19:44.891989] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.309 [2024-06-07 16:19:44.892017] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.309 [2024-06-07 16:19:44.892048] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.309 [2024-06-07 16:19:44.892080] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.309 [2024-06-07 16:19:44.892112] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.309 [2024-06-07 16:19:44.892142] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.309 [2024-06-07 16:19:44.892171] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.309 [2024-06-07 16:19:44.892202] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.309 [2024-06-07 16:19:44.892233] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.309 [2024-06-07 16:19:44.892259] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.309 [2024-06-07 16:19:44.892288] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.309 [2024-06-07 16:19:44.892318] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.309 [2024-06-07 16:19:44.892348] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.309 [2024-06-07 16:19:44.892377] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.309 [2024-06-07 16:19:44.892404] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.309 [2024-06-07 16:19:44.892428] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.309 [2024-06-07 16:19:44.892451] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.309 [2024-06-07 16:19:44.892474] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.309 [2024-06-07 16:19:44.892498] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.309 [2024-06-07 16:19:44.892521] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.309 [2024-06-07 16:19:44.892544] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.309 [2024-06-07 16:19:44.892568] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.309 [2024-06-07 16:19:44.892592] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.309 [2024-06-07 16:19:44.892615] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.309 [2024-06-07 16:19:44.892639] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.309 [2024-06-07 16:19:44.892661] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.309 [2024-06-07 16:19:44.892685] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.309 [2024-06-07 16:19:44.892708] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.309 [2024-06-07 16:19:44.892732] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:11:18.309 [2024-06-07 16:19:44.892756] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.309 [2024-06-07 16:19:44.892779] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.309 [2024-06-07 16:19:44.892804] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.309 [2024-06-07 16:19:44.892827] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.309 [2024-06-07 16:19:44.892850] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.309 [2024-06-07 16:19:44.892874] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.309 [2024-06-07 16:19:44.892896] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.309 [2024-06-07 16:19:44.892920] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.309 [2024-06-07 16:19:44.892943] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.309 [2024-06-07 16:19:44.892966] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.309 [2024-06-07 16:19:44.892989] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.309 [2024-06-07 16:19:44.893010] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.309 [2024-06-07 16:19:44.893034] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.310 [2024-06-07 16:19:44.893057] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.310 [2024-06-07 
16:19:44.893080] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.310 [2024-06-07 16:19:44.893103] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.310 [2024-06-07 16:19:44.893126] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.310 [2024-06-07 16:19:44.893537] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.310 [2024-06-07 16:19:44.893570] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.310 [2024-06-07 16:19:44.893601] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.310 [2024-06-07 16:19:44.893630] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.310 [2024-06-07 16:19:44.893659] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.310 [2024-06-07 16:19:44.893689] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.310 [2024-06-07 16:19:44.893720] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.310 [2024-06-07 16:19:44.893749] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.310 [2024-06-07 16:19:44.893781] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.310 [2024-06-07 16:19:44.893809] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.310 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:11:18.310 [2024-06-07 16:19:44.893868] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 
00:11:18.310 [2024-06-07 16:19:44.893897] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.310 [2024-06-07 16:19:44.893954] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.310 [2024-06-07 16:19:44.893984] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.310 [2024-06-07 16:19:44.894016] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.310 [2024-06-07 16:19:44.894043] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.310 [2024-06-07 16:19:44.894071] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.310 [2024-06-07 16:19:44.894103] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.310 [2024-06-07 16:19:44.894136] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.310 [2024-06-07 16:19:44.894164] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.310 [2024-06-07 16:19:44.894193] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.310 [2024-06-07 16:19:44.894224] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.310 [2024-06-07 16:19:44.894254] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.310 [2024-06-07 16:19:44.894287] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.310 [2024-06-07 16:19:44.894311] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.310 [2024-06-07 16:19:44.894339] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.310 [2024-06-07 16:19:44.894367] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.310 [2024-06-07 16:19:44.894395] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.310 [2024-06-07 16:19:44.894428] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.310 [2024-06-07 16:19:44.894456] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.310 [2024-06-07 16:19:44.894484] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.310 [2024-06-07 16:19:44.894512] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.310 [2024-06-07 16:19:44.894538] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.310 [2024-06-07 16:19:44.894569] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.310 [2024-06-07 16:19:44.894600] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.310 [2024-06-07 16:19:44.894636] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.310 [2024-06-07 16:19:44.894663] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.310 [2024-06-07 16:19:44.894691] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.310 [2024-06-07 16:19:44.894722] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.310 [2024-06-07 16:19:44.894751] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:11:18.310 [2024-06-07 16:19:44.894777] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
> SGL length 1 00:11:18.313 [2024-06-07 16:19:44.905149] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.313 [2024-06-07 16:19:44.905181] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.313 [2024-06-07 16:19:44.905214] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.313 [2024-06-07 16:19:44.905243] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.313 [2024-06-07 16:19:44.905269] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.313 [2024-06-07 16:19:44.905292] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.313 [2024-06-07 16:19:44.905315] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.313 [2024-06-07 16:19:44.905338] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.313 [2024-06-07 16:19:44.905361] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.313 [2024-06-07 16:19:44.905384] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.313 [2024-06-07 16:19:44.905410] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.313 [2024-06-07 16:19:44.905434] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.313 [2024-06-07 16:19:44.905457] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.313 [2024-06-07 16:19:44.905480] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.313 [2024-06-07 16:19:44.905503] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.313 [2024-06-07 16:19:44.905533] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.313 [2024-06-07 16:19:44.905560] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.313 [2024-06-07 16:19:44.905589] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.313 [2024-06-07 16:19:44.905617] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.313 [2024-06-07 16:19:44.905654] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.313 [2024-06-07 16:19:44.905681] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.313 [2024-06-07 16:19:44.905712] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.313 [2024-06-07 16:19:44.906105] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.313 [2024-06-07 16:19:44.906136] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.313 [2024-06-07 16:19:44.906167] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.313 [2024-06-07 16:19:44.906198] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.313 [2024-06-07 16:19:44.906225] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.313 [2024-06-07 16:19:44.906259] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.313 [2024-06-07 16:19:44.906289] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:11:18.313 [2024-06-07 16:19:44.906315] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.313 [2024-06-07 16:19:44.906344] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.313 [2024-06-07 16:19:44.906372] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.313 [2024-06-07 16:19:44.906399] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.313 [2024-06-07 16:19:44.906429] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.313 [2024-06-07 16:19:44.906459] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.313 [2024-06-07 16:19:44.906499] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.313 [2024-06-07 16:19:44.906538] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.313 [2024-06-07 16:19:44.906569] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.313 [2024-06-07 16:19:44.906603] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.313 [2024-06-07 16:19:44.906633] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.313 [2024-06-07 16:19:44.906658] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.313 [2024-06-07 16:19:44.906689] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.313 [2024-06-07 16:19:44.906719] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.313 [2024-06-07 
16:19:44.906746] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.313 [2024-06-07 16:19:44.906773] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.313 [2024-06-07 16:19:44.906798] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.313 [2024-06-07 16:19:44.906827] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.314 [2024-06-07 16:19:44.906855] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.314 [2024-06-07 16:19:44.906881] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.314 [2024-06-07 16:19:44.906909] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.314 [2024-06-07 16:19:44.906937] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.314 [2024-06-07 16:19:44.906969] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.314 [2024-06-07 16:19:44.906997] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.314 [2024-06-07 16:19:44.907027] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.314 [2024-06-07 16:19:44.907057] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.314 [2024-06-07 16:19:44.907088] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.314 [2024-06-07 16:19:44.907341] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.314 [2024-06-07 16:19:44.907393] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.314 [2024-06-07 16:19:44.907424] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.314 [2024-06-07 16:19:44.907461] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.314 [2024-06-07 16:19:44.907490] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.314 [2024-06-07 16:19:44.907518] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.314 [2024-06-07 16:19:44.907547] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.314 [2024-06-07 16:19:44.907576] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.314 [2024-06-07 16:19:44.907605] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.314 [2024-06-07 16:19:44.907638] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.314 [2024-06-07 16:19:44.907672] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.314 [2024-06-07 16:19:44.907707] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.314 [2024-06-07 16:19:44.907734] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.314 [2024-06-07 16:19:44.907760] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.314 [2024-06-07 16:19:44.907786] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.314 [2024-06-07 16:19:44.907814] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.314 
[2024-06-07 16:19:44.907841] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.314 [2024-06-07 16:19:44.907871] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.314 [2024-06-07 16:19:44.907899] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.314 [2024-06-07 16:19:44.907924] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.314 [2024-06-07 16:19:44.907950] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.314 [2024-06-07 16:19:44.907981] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.314 [2024-06-07 16:19:44.908010] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.314 [2024-06-07 16:19:44.908037] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.314 [2024-06-07 16:19:44.908068] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.314 [2024-06-07 16:19:44.908095] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.314 [2024-06-07 16:19:44.908128] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.314 [2024-06-07 16:19:44.908157] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.314 [2024-06-07 16:19:44.908187] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.314 [2024-06-07 16:19:44.908214] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.314 [2024-06-07 16:19:44.908249] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.314 [2024-06-07 16:19:44.908277] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.314 [2024-06-07 16:19:44.908305] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.314 [2024-06-07 16:19:44.908333] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.314 [2024-06-07 16:19:44.908360] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.314 [2024-06-07 16:19:44.908389] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.314 [2024-06-07 16:19:44.908421] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.314 [2024-06-07 16:19:44.908449] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.314 [2024-06-07 16:19:44.908490] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.314 [2024-06-07 16:19:44.908518] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.314 [2024-06-07 16:19:44.908551] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.314 [2024-06-07 16:19:44.908581] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.314 [2024-06-07 16:19:44.908615] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.314 [2024-06-07 16:19:44.908641] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.314 [2024-06-07 16:19:44.908702] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:11:18.314 [2024-06-07 16:19:44.908734] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.314 [2024-06-07 16:19:44.908978] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.314 [2024-06-07 16:19:44.909005] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.314 [2024-06-07 16:19:44.909032] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.314 [2024-06-07 16:19:44.909061] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.314 [2024-06-07 16:19:44.909088] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.314 [2024-06-07 16:19:44.909115] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.314 [2024-06-07 16:19:44.909149] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.314 [2024-06-07 16:19:44.909179] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.314 [2024-06-07 16:19:44.909207] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.314 [2024-06-07 16:19:44.909235] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.314 [2024-06-07 16:19:44.909263] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.314 [2024-06-07 16:19:44.909291] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.314 [2024-06-07 16:19:44.909318] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.314 [2024-06-07 16:19:44.909346] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.314 [2024-06-07 16:19:44.909375] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.314 [2024-06-07 16:19:44.909405] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.314 [2024-06-07 16:19:44.909434] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.314 [2024-06-07 16:19:44.909462] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.314 [2024-06-07 16:19:44.909491] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.314 [2024-06-07 16:19:44.909518] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.314 [2024-06-07 16:19:44.909545] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.314 [2024-06-07 16:19:44.909571] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.314 [2024-06-07 16:19:44.909600] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.314 [2024-06-07 16:19:44.909628] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.314 [2024-06-07 16:19:44.909654] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.314 [2024-06-07 16:19:44.909680] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.314 [2024-06-07 16:19:44.909703] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.314 [2024-06-07 16:19:44.909735] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:11:18.314 [2024-06-07 16:19:44.909762] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.314 [2024-06-07 16:19:44.909790] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.314 [2024-06-07 16:19:44.909821] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.314 [2024-06-07 16:19:44.909853] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.314 [2024-06-07 16:19:44.909878] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.314 [2024-06-07 16:19:44.909906] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.314 [2024-06-07 16:19:44.909930] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.314 [2024-06-07 16:19:44.909961] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.314 [2024-06-07 16:19:44.909991] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.315 [2024-06-07 16:19:44.910023] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.315 [2024-06-07 16:19:44.910051] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.315 [2024-06-07 16:19:44.910081] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.315 [2024-06-07 16:19:44.910107] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.315 [2024-06-07 16:19:44.910139] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.315 [2024-06-07 
16:19:44.910174] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.315 [2024-06-07 16:19:44.910203] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.315 [2024-06-07 16:19:44.910233] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.315 [2024-06-07 16:19:44.910262] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.315 [2024-06-07 16:19:44.910292] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.315 [2024-06-07 16:19:44.910321] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.315 [2024-06-07 16:19:44.910355] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.315 [2024-06-07 16:19:44.910383] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.315 [2024-06-07 16:19:44.910415] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.315 [2024-06-07 16:19:44.910445] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.315 [2024-06-07 16:19:44.910472] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.315 [2024-06-07 16:19:44.910501] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.315 [2024-06-07 16:19:44.910536] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.315 [2024-06-07 16:19:44.910570] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.315 [2024-06-07 16:19:44.910598] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.315 [2024-06-07 16:19:44.910625] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.315 [2024-06-07 16:19:44.910657] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.315 [2024-06-07 16:19:44.910683] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.315 [2024-06-07 16:19:44.910713] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.315 [2024-06-07 16:19:44.910743] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.315 [2024-06-07 16:19:44.910773] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.315 [2024-06-07 16:19:44.910804] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.315 [2024-06-07 16:19:44.910939] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.315 [2024-06-07 16:19:44.910973] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.315 [2024-06-07 16:19:44.911002] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.315 [2024-06-07 16:19:44.911033] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.315 [2024-06-07 16:19:44.911061] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.315 [2024-06-07 16:19:44.911089] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.315 [2024-06-07 16:19:44.911120] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.315 
[2024-06-07 16:19:44.911149] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.315 [2024-06-07 16:19:44.911176] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.315 [2024-06-07 16:19:44.911206] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.315 [2024-06-07 16:19:44.911235] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.315 [2024-06-07 16:19:44.911263] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.315 [2024-06-07 16:19:44.911290] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.315 [2024-06-07 16:19:44.911319] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.315 [2024-06-07 16:19:44.911347] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.315 [2024-06-07 16:19:44.911374] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.315 [2024-06-07 16:19:44.911406] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.315 [2024-06-07 16:19:44.911666] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.315 [2024-06-07 16:19:44.911699] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.315 [2024-06-07 16:19:44.911731] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.315 [2024-06-07 16:19:44.911761] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.315 [2024-06-07 16:19:44.911792] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.315 [2024-06-07 16:19:44.911821] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[same *ERROR* line repeated verbatim, timestamps 16:19:44.911850 through 16:19:44.922297, log times 00:11:18.315-00:11:18.318]
00:11:18.318 [2024-06-07 16:19:44.922321] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.318 [2024-06-07 16:19:44.922344] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.318 [2024-06-07 16:19:44.922368] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.318 [2024-06-07 16:19:44.922391] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.318 [2024-06-07 16:19:44.922419] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.318 [2024-06-07 16:19:44.922443] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.318 [2024-06-07 16:19:44.922467] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.318 [2024-06-07 16:19:44.922490] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.318 [2024-06-07 16:19:44.922514] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.318 [2024-06-07 16:19:44.922537] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.318 [2024-06-07 16:19:44.922803] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.318 [2024-06-07 16:19:44.922833] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.318 [2024-06-07 16:19:44.922860] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.318 [2024-06-07 16:19:44.922892] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.318 [2024-06-07 16:19:44.922922] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:11:18.318 [2024-06-07 16:19:44.922951] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.318 [2024-06-07 16:19:44.922981] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.318 [2024-06-07 16:19:44.923008] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.318 [2024-06-07 16:19:44.923046] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.318 [2024-06-07 16:19:44.923073] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.318 [2024-06-07 16:19:44.923103] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.318 [2024-06-07 16:19:44.923131] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.318 [2024-06-07 16:19:44.923159] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.318 [2024-06-07 16:19:44.923185] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.318 [2024-06-07 16:19:44.923233] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.318 [2024-06-07 16:19:44.923260] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.318 [2024-06-07 16:19:44.923299] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.318 [2024-06-07 16:19:44.923329] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.318 [2024-06-07 16:19:44.923359] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.319 [2024-06-07 16:19:44.923388] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.319 [2024-06-07 16:19:44.923420] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.319 [2024-06-07 16:19:44.923449] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.319 [2024-06-07 16:19:44.923477] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.319 [2024-06-07 16:19:44.923505] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.319 [2024-06-07 16:19:44.923533] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.319 [2024-06-07 16:19:44.923559] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.319 [2024-06-07 16:19:44.923590] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.319 [2024-06-07 16:19:44.923613] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.319 [2024-06-07 16:19:44.923637] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.319 [2024-06-07 16:19:44.923661] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.319 [2024-06-07 16:19:44.923685] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.319 [2024-06-07 16:19:44.923708] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.319 [2024-06-07 16:19:44.923731] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.319 [2024-06-07 16:19:44.923754] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:11:18.319 [2024-06-07 16:19:44.923778] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.319 [2024-06-07 16:19:44.923801] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.319 [2024-06-07 16:19:44.923825] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.319 [2024-06-07 16:19:44.923849] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.319 [2024-06-07 16:19:44.923873] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.319 [2024-06-07 16:19:44.923897] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.319 [2024-06-07 16:19:44.923923] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.319 [2024-06-07 16:19:44.923950] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.319 [2024-06-07 16:19:44.923979] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.319 [2024-06-07 16:19:44.924008] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.319 [2024-06-07 16:19:44.924035] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.319 [2024-06-07 16:19:44.924075] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.319 [2024-06-07 16:19:44.924105] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.319 [2024-06-07 16:19:44.924506] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.319 [2024-06-07 
16:19:44.924536] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.319 [2024-06-07 16:19:44.924568] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.319 [2024-06-07 16:19:44.924600] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.319 [2024-06-07 16:19:44.924628] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.319 [2024-06-07 16:19:44.924664] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.319 [2024-06-07 16:19:44.924694] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.319 [2024-06-07 16:19:44.924721] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.319 [2024-06-07 16:19:44.924750] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.319 [2024-06-07 16:19:44.924778] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.319 [2024-06-07 16:19:44.924808] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.319 [2024-06-07 16:19:44.924838] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.319 [2024-06-07 16:19:44.924870] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.319 [2024-06-07 16:19:44.924899] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.319 [2024-06-07 16:19:44.924929] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.319 [2024-06-07 16:19:44.924959] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.319 [2024-06-07 16:19:44.924991] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.319 [2024-06-07 16:19:44.925020] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.319 [2024-06-07 16:19:44.925079] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.319 [2024-06-07 16:19:44.925109] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.319 [2024-06-07 16:19:44.925151] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.319 [2024-06-07 16:19:44.925180] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.319 [2024-06-07 16:19:44.925208] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.319 [2024-06-07 16:19:44.925238] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.319 [2024-06-07 16:19:44.925269] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.319 [2024-06-07 16:19:44.925297] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.319 [2024-06-07 16:19:44.925327] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.319 [2024-06-07 16:19:44.925361] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.319 [2024-06-07 16:19:44.925396] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.319 [2024-06-07 16:19:44.925426] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.319 
[2024-06-07 16:19:44.925455] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.319 [2024-06-07 16:19:44.925482] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.319 [2024-06-07 16:19:44.925508] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.319 [2024-06-07 16:19:44.925537] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.319 [2024-06-07 16:19:44.925861] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.319 [2024-06-07 16:19:44.925889] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.319 [2024-06-07 16:19:44.925916] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.319 [2024-06-07 16:19:44.925945] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.319 [2024-06-07 16:19:44.925974] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.319 [2024-06-07 16:19:44.926001] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.319 [2024-06-07 16:19:44.926031] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.319 [2024-06-07 16:19:44.926061] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.319 [2024-06-07 16:19:44.926089] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.319 [2024-06-07 16:19:44.926117] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.319 [2024-06-07 16:19:44.926146] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.319 [2024-06-07 16:19:44.926173] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.319 [2024-06-07 16:19:44.926202] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.319 [2024-06-07 16:19:44.926229] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.319 [2024-06-07 16:19:44.926257] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.319 [2024-06-07 16:19:44.926288] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.319 [2024-06-07 16:19:44.926319] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.319 [2024-06-07 16:19:44.926350] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.319 [2024-06-07 16:19:44.926379] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.319 [2024-06-07 16:19:44.926412] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.319 [2024-06-07 16:19:44.926443] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.320 [2024-06-07 16:19:44.926472] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.320 [2024-06-07 16:19:44.926500] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.320 [2024-06-07 16:19:44.926526] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.320 [2024-06-07 16:19:44.926555] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:11:18.320 [2024-06-07 16:19:44.926583] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.320 [2024-06-07 16:19:44.926611] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.320 [2024-06-07 16:19:44.926639] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.320 [2024-06-07 16:19:44.926669] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.320 [2024-06-07 16:19:44.926705] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.320 [2024-06-07 16:19:44.926737] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.320 [2024-06-07 16:19:44.926771] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.320 [2024-06-07 16:19:44.926799] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.320 [2024-06-07 16:19:44.926844] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.320 [2024-06-07 16:19:44.926875] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.320 [2024-06-07 16:19:44.926919] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.320 [2024-06-07 16:19:44.926947] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.320 [2024-06-07 16:19:44.927008] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.320 [2024-06-07 16:19:44.927038] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.320 [2024-06-07 16:19:44.927068] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.320 [2024-06-07 16:19:44.927097] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.320 [2024-06-07 16:19:44.927125] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.320 [2024-06-07 16:19:44.927155] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.320 [2024-06-07 16:19:44.927187] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.320 [2024-06-07 16:19:44.927219] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.320 [2024-06-07 16:19:44.927247] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.320 [2024-06-07 16:19:44.927478] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.320 [2024-06-07 16:19:44.927512] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.320 [2024-06-07 16:19:44.927544] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.320 [2024-06-07 16:19:44.927579] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.320 [2024-06-07 16:19:44.927616] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.320 [2024-06-07 16:19:44.927645] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.320 [2024-06-07 16:19:44.927673] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.320 [2024-06-07 16:19:44.927702] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:11:18.320 [2024-06-07 16:19:44.927729] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.320 [2024-06-07 16:19:44.927761] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.320 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:11:18.320 [2024-06-07 16:19:44.927794] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.320 [2024-06-07 16:19:44.927828] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.320 [2024-06-07 16:19:44.927857] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.320 [2024-06-07 16:19:44.927887] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.320 [2024-06-07 16:19:44.927914] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.320 [2024-06-07 16:19:44.927944] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.320 [2024-06-07 16:19:44.927975] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.320 [2024-06-07 16:19:44.928003] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.320 [2024-06-07 16:19:44.928032] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.320 [2024-06-07 16:19:44.928060] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.320 [2024-06-07 16:19:44.928095] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.320 [2024-06-07 16:19:44.928123] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.320 [2024-06-07 16:19:44.928151] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.320 [2024-06-07 16:19:44.928178] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.320 [2024-06-07 16:19:44.928209] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.320 [2024-06-07 16:19:44.928236] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.320 [2024-06-07 16:19:44.928268] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.320 [2024-06-07 16:19:44.928295] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.320 [2024-06-07 16:19:44.928318] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.320 [2024-06-07 16:19:44.928350] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.320 [2024-06-07 16:19:44.928383] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.320 [2024-06-07 16:19:44.928413] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.320 [2024-06-07 16:19:44.928441] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.320 [2024-06-07 16:19:44.928470] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.320 [2024-06-07 16:19:44.928499] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.320 [2024-06-07 16:19:44.928526] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.320 
[2024-06-07 16:19:44.928554] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.320 [2024-06-07 16:19:44.928582] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.320 [2024-06-07 16:19:44.928614] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.320 [2024-06-07 16:19:44.928642] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.320 [2024-06-07 16:19:44.928672] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.320 [2024-06-07 16:19:44.928699] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.320 [2024-06-07 16:19:44.928728] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.320 [2024-06-07 16:19:44.928752] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.320 [2024-06-07 16:19:44.928779] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.320 [2024-06-07 16:19:44.928801] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.320 [2024-06-07 16:19:44.928824] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.320 [2024-06-07 16:19:44.928847] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.320 [2024-06-07 16:19:44.928879] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.320 [2024-06-07 16:19:44.928907] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.320 [2024-06-07 16:19:44.928938] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.320 [2024-06-07 16:19:44.928966] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.324 [2024-06-07 16:19:44.939921] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.324 [2024-06-07 16:19:44.939955] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.324 [2024-06-07 16:19:44.939983] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.324 [2024-06-07 16:19:44.940011] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.324 [2024-06-07 16:19:44.940044] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.324 [2024-06-07 16:19:44.940076] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.324 [2024-06-07 16:19:44.940110] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.324 [2024-06-07 16:19:44.940141] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.324 [2024-06-07 16:19:44.940171] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.324 [2024-06-07 16:19:44.940199] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.324 [2024-06-07 16:19:44.940230] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.324 [2024-06-07 16:19:44.940261] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.324 [2024-06-07 16:19:44.940290] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.324 [2024-06-07 16:19:44.940314] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:11:18.324 [2024-06-07 16:19:44.940344] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.324 [2024-06-07 16:19:44.940376] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.324 [2024-06-07 16:19:44.940416] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.324 [2024-06-07 16:19:44.940443] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.324 [2024-06-07 16:19:44.940468] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.324 [2024-06-07 16:19:44.940499] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.324 [2024-06-07 16:19:44.940526] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.324 [2024-06-07 16:19:44.940556] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.324 [2024-06-07 16:19:44.940583] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.324 [2024-06-07 16:19:44.940610] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.324 [2024-06-07 16:19:44.940639] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.324 [2024-06-07 16:19:44.940672] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.324 [2024-06-07 16:19:44.940702] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.324 [2024-06-07 16:19:44.941060] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.324 [2024-06-07 16:19:44.941089] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.324 [2024-06-07 16:19:44.941117] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.324 [2024-06-07 16:19:44.941146] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.324 [2024-06-07 16:19:44.941174] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.324 [2024-06-07 16:19:44.941227] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.324 [2024-06-07 16:19:44.941254] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.324 [2024-06-07 16:19:44.941282] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.324 [2024-06-07 16:19:44.941310] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.324 [2024-06-07 16:19:44.941338] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.324 [2024-06-07 16:19:44.941367] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.324 [2024-06-07 16:19:44.941394] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.324 [2024-06-07 16:19:44.941423] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.324 [2024-06-07 16:19:44.941450] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.324 [2024-06-07 16:19:44.941486] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.324 [2024-06-07 16:19:44.941519] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:11:18.324 [2024-06-07 16:19:44.941545] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.324 [2024-06-07 16:19:44.941573] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.324 [2024-06-07 16:19:44.941601] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.324 [2024-06-07 16:19:44.941634] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.324 [2024-06-07 16:19:44.941665] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.324 [2024-06-07 16:19:44.941697] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.324 [2024-06-07 16:19:44.941723] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.324 [2024-06-07 16:19:44.941753] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.324 [2024-06-07 16:19:44.941784] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.324 [2024-06-07 16:19:44.941817] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.324 [2024-06-07 16:19:44.941846] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.324 [2024-06-07 16:19:44.941874] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.324 [2024-06-07 16:19:44.941902] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.324 [2024-06-07 16:19:44.941928] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.324 [2024-06-07 
16:19:44.941961] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.324 [2024-06-07 16:19:44.941989] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.324 [2024-06-07 16:19:44.942021] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.324 [2024-06-07 16:19:44.942047] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.324 [2024-06-07 16:19:44.942088] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.324 [2024-06-07 16:19:44.942115] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.324 [2024-06-07 16:19:44.942160] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.324 [2024-06-07 16:19:44.942189] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.324 [2024-06-07 16:19:44.942239] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.324 [2024-06-07 16:19:44.942268] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.324 [2024-06-07 16:19:44.942320] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.324 [2024-06-07 16:19:44.942351] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.324 [2024-06-07 16:19:44.942382] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.324 [2024-06-07 16:19:44.942413] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.324 [2024-06-07 16:19:44.942440] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.325 [2024-06-07 16:19:44.942470] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.325 [2024-06-07 16:19:44.942497] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.325 [2024-06-07 16:19:44.942527] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.325 [2024-06-07 16:19:44.942557] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.325 [2024-06-07 16:19:44.942584] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.325 [2024-06-07 16:19:44.942611] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.325 [2024-06-07 16:19:44.942642] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.325 [2024-06-07 16:19:44.942671] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.325 [2024-06-07 16:19:44.942699] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.325 [2024-06-07 16:19:44.942729] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.325 [2024-06-07 16:19:44.942759] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.325 [2024-06-07 16:19:44.942789] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.325 [2024-06-07 16:19:44.942824] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.325 [2024-06-07 16:19:44.942854] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.325 
[2024-06-07 16:19:44.942884] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.325 [2024-06-07 16:19:44.942915] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.325 [2024-06-07 16:19:44.942943] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.325 [2024-06-07 16:19:44.942972] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.325 [2024-06-07 16:19:44.943000] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.325 [2024-06-07 16:19:44.943353] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.325 [2024-06-07 16:19:44.943382] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.325 [2024-06-07 16:19:44.943412] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.325 [2024-06-07 16:19:44.943440] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.325 [2024-06-07 16:19:44.943468] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.325 [2024-06-07 16:19:44.943492] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.325 [2024-06-07 16:19:44.943523] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.325 [2024-06-07 16:19:44.943553] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.325 [2024-06-07 16:19:44.943584] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.325 [2024-06-07 16:19:44.943612] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.325 [2024-06-07 16:19:44.943639] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.325 [2024-06-07 16:19:44.943667] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.325 [2024-06-07 16:19:44.943693] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.325 [2024-06-07 16:19:44.943723] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.325 [2024-06-07 16:19:44.943752] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.325 [2024-06-07 16:19:44.943780] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.325 [2024-06-07 16:19:44.943809] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.325 [2024-06-07 16:19:44.943839] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.325 [2024-06-07 16:19:44.943869] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.325 [2024-06-07 16:19:44.943898] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.325 [2024-06-07 16:19:44.943925] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.325 [2024-06-07 16:19:44.943955] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.325 [2024-06-07 16:19:44.943986] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.325 [2024-06-07 16:19:44.944014] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:11:18.325 [2024-06-07 16:19:44.944043] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.325 [2024-06-07 16:19:44.944070] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.325 [2024-06-07 16:19:44.944097] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.325 [2024-06-07 16:19:44.944124] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.325 [2024-06-07 16:19:44.944153] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.325 [2024-06-07 16:19:44.944181] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.325 [2024-06-07 16:19:44.944209] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.325 [2024-06-07 16:19:44.944236] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.325 [2024-06-07 16:19:44.944267] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.325 [2024-06-07 16:19:44.944296] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.325 [2024-06-07 16:19:44.944322] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.325 [2024-06-07 16:19:44.944349] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.325 [2024-06-07 16:19:44.944377] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.325 [2024-06-07 16:19:44.944409] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.325 [2024-06-07 16:19:44.944439] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.325 [2024-06-07 16:19:44.944466] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.325 [2024-06-07 16:19:44.944491] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.325 [2024-06-07 16:19:44.944524] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.325 [2024-06-07 16:19:44.944555] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.325 [2024-06-07 16:19:44.944582] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.325 [2024-06-07 16:19:44.944611] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.325 [2024-06-07 16:19:44.944639] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.325 [2024-06-07 16:19:44.944676] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.325 [2024-06-07 16:19:44.944712] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.325 [2024-06-07 16:19:44.944741] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.325 [2024-06-07 16:19:44.944769] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.325 [2024-06-07 16:19:44.944799] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.325 [2024-06-07 16:19:44.944829] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.325 [2024-06-07 16:19:44.944858] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:11:18.325 [2024-06-07 16:19:44.944888] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.325 [2024-06-07 16:19:44.944917] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.325 [2024-06-07 16:19:44.944946] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.325 [2024-06-07 16:19:44.944972] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.325 [2024-06-07 16:19:44.945002] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.325 [2024-06-07 16:19:44.945030] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.325 [2024-06-07 16:19:44.945062] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.325 [2024-06-07 16:19:44.945092] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.325 [2024-06-07 16:19:44.945124] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.325 [2024-06-07 16:19:44.945154] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.325 [2024-06-07 16:19:44.945571] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.325 [2024-06-07 16:19:44.945607] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.325 [2024-06-07 16:19:44.945635] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.325 [2024-06-07 16:19:44.945664] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.325 [2024-06-07 
16:19:44.945691] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.325 [2024-06-07 16:19:44.945719] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.325 [2024-06-07 16:19:44.945747] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.325 [2024-06-07 16:19:44.945777] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.325 [2024-06-07 16:19:44.945805] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.325 [2024-06-07 16:19:44.945832] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.325 [2024-06-07 16:19:44.945863] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.325 [2024-06-07 16:19:44.945890] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.326 [2024-06-07 16:19:44.945919] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.326 [2024-06-07 16:19:44.945951] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.326 [2024-06-07 16:19:44.945982] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.326 [2024-06-07 16:19:44.946010] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.326 [2024-06-07 16:19:44.946040] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.326 [2024-06-07 16:19:44.946069] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.326 [2024-06-07 16:19:44.946096] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.326 [2024-06-07 16:19:44.946129] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.326 [2024-06-07 16:19:44.946157] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.326 [2024-06-07 16:19:44.946186] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.326 [2024-06-07 16:19:44.946215] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.326 [2024-06-07 16:19:44.946244] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.326 [2024-06-07 16:19:44.946274] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.326 [2024-06-07 16:19:44.946300] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.326 [2024-06-07 16:19:44.946326] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.326 [2024-06-07 16:19:44.946354] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.326 [2024-06-07 16:19:44.946381] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.326 [2024-06-07 16:19:44.946411] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.326 [2024-06-07 16:19:44.946441] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.326 [2024-06-07 16:19:44.946469] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.326 [2024-06-07 16:19:44.946498] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.326 
[2024-06-07 16:19:44.946526] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.326 [2024-06-07 16:19:44.946554] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.326 [2024-06-07 16:19:44.946583] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.326 [2024-06-07 16:19:44.946618] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.326 [2024-06-07 16:19:44.946650] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.326 [2024-06-07 16:19:44.946678] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.326 [2024-06-07 16:19:44.946705] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.326 [2024-06-07 16:19:44.946737] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.326 [2024-06-07 16:19:44.946774] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.326 [2024-06-07 16:19:44.946806] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.326 [2024-06-07 16:19:44.946838] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.326 [2024-06-07 16:19:44.946868] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.326 [2024-06-07 16:19:44.946900] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.326 [2024-06-07 16:19:44.946934] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.326 [2024-06-07 16:19:44.946965] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.329 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:18.329 16:19:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:18.329 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:18.329 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:18.329 Message suppressed 999 
times: Read completed with error (sct=0, sc=11) 00:11:18.620 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:18.620 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:18.620 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:18.620 [2024-06-07 16:19:45.133223] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.620 [2024-06-07 16:19:45.133261] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.620 [2024-06-07 16:19:45.133288] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.620 [2024-06-07 16:19:45.133316] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.620 [2024-06-07 16:19:45.133345] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.620 [2024-06-07 16:19:45.133379] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.620 [2024-06-07 16:19:45.133411] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.620 [2024-06-07 16:19:45.133444] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.620 [2024-06-07 16:19:45.133471] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.620 [2024-06-07 16:19:45.133499] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.620 [2024-06-07 16:19:45.133528] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.620 [2024-06-07 16:19:45.133553] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.620 [2024-06-07 16:19:45.133581] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.620 [2024-06-07 16:19:45.133608] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.620 [2024-06-07 16:19:45.133636] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.620 [2024-06-07 16:19:45.133665] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.620 [2024-06-07 16:19:45.133698] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.620 [2024-06-07 16:19:45.133727] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.620 [2024-06-07 16:19:45.133760] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.620 [2024-06-07 16:19:45.133789] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.620 [2024-06-07 16:19:45.133815] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.620 [2024-06-07 16:19:45.133842] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.620 [2024-06-07 16:19:45.133873] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.620 [2024-06-07 16:19:45.133907] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.620 [2024-06-07 16:19:45.133941] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.620 [2024-06-07 16:19:45.133969] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.620 [2024-06-07 16:19:45.133995] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:11:18.620 [2024-06-07 16:19:45.134025] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.620 [2024-06-07 16:19:45.134054] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.620 [2024-06-07 16:19:45.134081] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.620 [2024-06-07 16:19:45.134105] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.620 [2024-06-07 16:19:45.134138] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.620 [2024-06-07 16:19:45.134168] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.620 [2024-06-07 16:19:45.134193] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.620 [2024-06-07 16:19:45.134219] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.620 [2024-06-07 16:19:45.134249] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.620 [2024-06-07 16:19:45.134280] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.620 [2024-06-07 16:19:45.134308] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.620 [2024-06-07 16:19:45.134338] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.620 [2024-06-07 16:19:45.134365] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.620 [2024-06-07 16:19:45.134393] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.620 [2024-06-07 
16:19:45.134425] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.620 [2024-06-07 16:19:45.134457] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.620 [2024-06-07 16:19:45.134487] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.620 [2024-06-07 16:19:45.134514] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.620 [2024-06-07 16:19:45.134544] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.620 [2024-06-07 16:19:45.134572] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.620 [2024-06-07 16:19:45.134600] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.620 [2024-06-07 16:19:45.134628] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.620 [2024-06-07 16:19:45.134662] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.620 [2024-06-07 16:19:45.134694] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.620 [2024-06-07 16:19:45.134723] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.620 [2024-06-07 16:19:45.134749] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.620 [2024-06-07 16:19:45.134782] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.620 [2024-06-07 16:19:45.134809] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.620 [2024-06-07 16:19:45.134843] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.620 [2024-06-07 16:19:45.134870] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.620 [2024-06-07 16:19:45.134900] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.620 [2024-06-07 16:19:45.134929] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.620 [2024-06-07 16:19:45.134957] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.620 [2024-06-07 16:19:45.134984] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.620 [2024-06-07 16:19:45.135015] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.620 [2024-06-07 16:19:45.135041] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.620 [2024-06-07 16:19:45.135071] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.620 [2024-06-07 16:19:45.135195] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.620 [2024-06-07 16:19:45.135222] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.620 [2024-06-07 16:19:45.135245] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.621 [2024-06-07 16:19:45.135278] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.621 [2024-06-07 16:19:45.135306] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.621 [2024-06-07 16:19:45.135332] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.621 
[2024-06-07 16:19:45.135361] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.621 [2024-06-07 16:19:45.135387] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.621 [2024-06-07 16:19:45.135419] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.621 [2024-06-07 16:19:45.135447] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.621 [2024-06-07 16:19:45.135477] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.621 [2024-06-07 16:19:45.135507] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.621 [2024-06-07 16:19:45.136022] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.621 [2024-06-07 16:19:45.136052] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.621 [2024-06-07 16:19:45.136097] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.621 [2024-06-07 16:19:45.136126] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.621 [2024-06-07 16:19:45.136165] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.621 [2024-06-07 16:19:45.136195] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.621 [2024-06-07 16:19:45.136224] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.621 [2024-06-07 16:19:45.136256] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.621 [2024-06-07 16:19:45.136295] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.621 [2024-06-07 16:19:45.136323] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.621 [2024-06-07 16:19:45.136359] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.621 [2024-06-07 16:19:45.136388] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.621 [2024-06-07 16:19:45.136418] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.621 [2024-06-07 16:19:45.136444] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.621 [2024-06-07 16:19:45.136469] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.621 [2024-06-07 16:19:45.136497] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.621 [2024-06-07 16:19:45.136525] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.621 [2024-06-07 16:19:45.136552] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.621 [2024-06-07 16:19:45.136578] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.621 [2024-06-07 16:19:45.136606] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.621 [2024-06-07 16:19:45.136636] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.621 [2024-06-07 16:19:45.136663] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.621 [2024-06-07 16:19:45.136692] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:11:18.621 [2024-06-07 16:19:45.136724] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.621 [2024-06-07 16:19:45.136761] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.621 [2024-06-07 16:19:45.136797] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.621 [2024-06-07 16:19:45.136823] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.621 [2024-06-07 16:19:45.136850] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.621 [2024-06-07 16:19:45.136877] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.621 [2024-06-07 16:19:45.136907] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.621 [2024-06-07 16:19:45.136936] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.621 [2024-06-07 16:19:45.136962] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.621 [2024-06-07 16:19:45.136990] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.621 [2024-06-07 16:19:45.137016] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.621 [2024-06-07 16:19:45.137045] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.621 [2024-06-07 16:19:45.137076] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.621 [2024-06-07 16:19:45.137103] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.621 [2024-06-07 16:19:45.137141] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.621 [2024-06-07 16:19:45.137172] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.621 [2024-06-07 16:19:45.137202] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.621 [2024-06-07 16:19:45.137233] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.621 [2024-06-07 16:19:45.137263] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.621 [2024-06-07 16:19:45.137292] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.621 [2024-06-07 16:19:45.137324] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.621 [2024-06-07 16:19:45.137354] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.621 [2024-06-07 16:19:45.137385] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.621 [2024-06-07 16:19:45.137416] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.621 [2024-06-07 16:19:45.137458] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.621 [2024-06-07 16:19:45.137488] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.621 [2024-06-07 16:19:45.137515] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.621 [2024-06-07 16:19:45.137543] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.621 [2024-06-07 16:19:45.137573] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:11:18.621 [2024-06-07 16:19:45.137601] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.621 [2024-06-07 16:19:45.137634] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.621 [2024-06-07 16:19:45.137667] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.621 [2024-06-07 16:19:45.137690] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.621 [2024-06-07 16:19:45.137720] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.621 [2024-06-07 16:19:45.137749] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.621 [2024-06-07 16:19:45.137780] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.621 [2024-06-07 16:19:45.137811] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.621 [2024-06-07 16:19:45.137838] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.621 [2024-06-07 16:19:45.137867] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.621 [2024-06-07 16:19:45.137896] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.621 [2024-06-07 16:19:45.137923] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.621 [2024-06-07 16:19:45.138067] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.621 [2024-06-07 16:19:45.138122] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.621 [2024-06-07 
16:19:45.138150] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.621 [2024-06-07 16:19:45.138176] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.621 [2024-06-07 16:19:45.138233] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.621 [2024-06-07 16:19:45.138263] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.621 [2024-06-07 16:19:45.138294] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.621 [2024-06-07 16:19:45.138324] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.621 [2024-06-07 16:19:45.138359] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.621 [2024-06-07 16:19:45.138389] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.621 [2024-06-07 16:19:45.138443] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.621 [2024-06-07 16:19:45.138472] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.621 [2024-06-07 16:19:45.138502] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.621 [2024-06-07 16:19:45.138533] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.621 [2024-06-07 16:19:45.138573] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.621 [2024-06-07 16:19:45.138598] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.621 [2024-06-07 16:19:45.138632] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.621 [2024-06-07 16:19:45.138659] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.621 [2024-06-07 16:19:45.138686] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.621 [2024-06-07 16:19:45.138715] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.621 [2024-06-07 16:19:45.138743] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.621 [2024-06-07 16:19:45.138775] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.621 [2024-06-07 16:19:45.138804] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.621 [2024-06-07 16:19:45.138839] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.621 [2024-06-07 16:19:45.138867] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.621 [2024-06-07 16:19:45.138897] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.622 [2024-06-07 16:19:45.138922] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.622 [2024-06-07 16:19:45.138948] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.622 [2024-06-07 16:19:45.138973] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.622 [2024-06-07 16:19:45.139001] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.622 [2024-06-07 16:19:45.139029] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.622 
[2024-06-07 16:19:45.139054] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.622 [2024-06-07 16:19:45.139080] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.622 [2024-06-07 16:19:45.139106] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.622 [2024-06-07 16:19:45.139139] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.622 [2024-06-07 16:19:45.139166] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.622 [2024-06-07 16:19:45.139191] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.622 [2024-06-07 16:19:45.139218] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.622 [2024-06-07 16:19:45.139244] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.622 [2024-06-07 16:19:45.139270] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.622 [2024-06-07 16:19:45.139300] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.622 [2024-06-07 16:19:45.139329] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.622 [2024-06-07 16:19:45.139358] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.622 [2024-06-07 16:19:45.139382] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.622 [2024-06-07 16:19:45.139415] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.622 [2024-06-07 16:19:45.139441] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.622 [2024-06-07 16:19:45.139469] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.622 [2024-06-07 16:19:45.139495] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.622 [2024-06-07 16:19:45.139523] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.622 [2024-06-07 16:19:45.139554] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.622 [2024-06-07 16:19:45.139584] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.622 [2024-06-07 16:19:45.139614] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.622 [2024-06-07 16:19:45.139643] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.622 [2024-06-07 16:19:45.139674] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.622 [2024-06-07 16:19:45.139704] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.622 [2024-06-07 16:19:45.139733] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.622 [2024-06-07 16:19:45.139774] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.622 [2024-06-07 16:19:45.139801] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.622 [2024-06-07 16:19:45.139849] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.622 [2024-06-07 16:19:45.139878] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:11:18.622 [2024-06-07 16:19:45.139924] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.625 [... identical ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd "Read NLB 1 * block size 512 > SGL length 1" errors repeated several hundred times between 16:19:45.139924 and 16:19:45.150524 ...]
> SGL length 1 00:11:18.625 [2024-06-07 16:19:45.150556] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.625 [2024-06-07 16:19:45.150582] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.625 [2024-06-07 16:19:45.150610] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.625 [2024-06-07 16:19:45.150638] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.625 [2024-06-07 16:19:45.150668] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.625 [2024-06-07 16:19:45.150720] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.625 [2024-06-07 16:19:45.150750] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.625 [2024-06-07 16:19:45.150780] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.625 [2024-06-07 16:19:45.150807] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.625 [2024-06-07 16:19:45.150837] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.625 [2024-06-07 16:19:45.150867] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.625 [2024-06-07 16:19:45.150905] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.625 [2024-06-07 16:19:45.150936] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.625 [2024-06-07 16:19:45.150961] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.625 [2024-06-07 16:19:45.150990] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.625 [2024-06-07 16:19:45.151016] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.625 [2024-06-07 16:19:45.151044] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.625 [2024-06-07 16:19:45.151070] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.625 [2024-06-07 16:19:45.151223] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.625 [2024-06-07 16:19:45.151253] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.625 [2024-06-07 16:19:45.151281] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.625 [2024-06-07 16:19:45.151308] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.625 [2024-06-07 16:19:45.151580] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.625 [2024-06-07 16:19:45.151612] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.625 [2024-06-07 16:19:45.151647] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.625 [2024-06-07 16:19:45.151674] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.625 [2024-06-07 16:19:45.151699] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.625 [2024-06-07 16:19:45.151727] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.625 [2024-06-07 16:19:45.151756] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:11:18.625 [2024-06-07 16:19:45.151785] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.625 [2024-06-07 16:19:45.151814] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.625 [2024-06-07 16:19:45.151851] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.625 [2024-06-07 16:19:45.151879] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.625 [2024-06-07 16:19:45.151906] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.625 [2024-06-07 16:19:45.151934] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.625 [2024-06-07 16:19:45.151965] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.625 [2024-06-07 16:19:45.151993] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.625 [2024-06-07 16:19:45.152028] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.626 [2024-06-07 16:19:45.152057] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.626 [2024-06-07 16:19:45.152088] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.626 [2024-06-07 16:19:45.152115] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.626 [2024-06-07 16:19:45.152143] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.626 [2024-06-07 16:19:45.152172] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.626 [2024-06-07 
16:19:45.152199] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.626 [2024-06-07 16:19:45.152228] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.626 [2024-06-07 16:19:45.152254] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.626 [2024-06-07 16:19:45.152291] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.626 [2024-06-07 16:19:45.152318] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.626 [2024-06-07 16:19:45.152344] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.626 [2024-06-07 16:19:45.152367] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.626 [2024-06-07 16:19:45.152396] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.626 [2024-06-07 16:19:45.152426] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.626 [2024-06-07 16:19:45.152454] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.626 [2024-06-07 16:19:45.152479] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.626 [2024-06-07 16:19:45.152506] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.626 [2024-06-07 16:19:45.152531] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.626 [2024-06-07 16:19:45.152556] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.626 [2024-06-07 16:19:45.152581] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.626 [2024-06-07 16:19:45.152606] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.626 [2024-06-07 16:19:45.152637] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.626 [2024-06-07 16:19:45.152664] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.626 [2024-06-07 16:19:45.152692] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.626 [2024-06-07 16:19:45.152721] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.626 [2024-06-07 16:19:45.152751] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.626 [2024-06-07 16:19:45.152779] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.626 [2024-06-07 16:19:45.152808] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.626 [2024-06-07 16:19:45.152836] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.626 [2024-06-07 16:19:45.152861] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.626 [2024-06-07 16:19:45.152888] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.626 [2024-06-07 16:19:45.152916] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.626 [2024-06-07 16:19:45.152945] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.626 [2024-06-07 16:19:45.152974] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.626 
[2024-06-07 16:19:45.153005] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.626 [2024-06-07 16:19:45.153031] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.626 [2024-06-07 16:19:45.153060] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.626 [2024-06-07 16:19:45.153089] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.626 [2024-06-07 16:19:45.153140] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.626 [2024-06-07 16:19:45.153170] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.626 [2024-06-07 16:19:45.153196] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.626 [2024-06-07 16:19:45.153223] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.626 [2024-06-07 16:19:45.153260] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.626 [2024-06-07 16:19:45.153615] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.626 [2024-06-07 16:19:45.153648] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.626 [2024-06-07 16:19:45.153685] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.626 [2024-06-07 16:19:45.153711] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.626 [2024-06-07 16:19:45.153737] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.626 [2024-06-07 16:19:45.153759] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.626 [2024-06-07 16:19:45.153788] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.626 [2024-06-07 16:19:45.153820] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.626 [2024-06-07 16:19:45.153850] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.626 [2024-06-07 16:19:45.153878] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.626 [2024-06-07 16:19:45.153906] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.626 [2024-06-07 16:19:45.153934] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.626 [2024-06-07 16:19:45.153961] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.626 [2024-06-07 16:19:45.153991] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.626 [2024-06-07 16:19:45.154021] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.626 [2024-06-07 16:19:45.154050] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.626 [2024-06-07 16:19:45.154081] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.626 [2024-06-07 16:19:45.154109] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.626 [2024-06-07 16:19:45.154136] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.626 [2024-06-07 16:19:45.154163] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:11:18.626 [2024-06-07 16:19:45.154189] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.626 [2024-06-07 16:19:45.154220] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.626 [2024-06-07 16:19:45.154248] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.626 [2024-06-07 16:19:45.154278] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.626 [2024-06-07 16:19:45.154308] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.626 [2024-06-07 16:19:45.154336] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.626 [2024-06-07 16:19:45.154365] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.626 [2024-06-07 16:19:45.154391] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.626 [2024-06-07 16:19:45.154424] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.626 [2024-06-07 16:19:45.154451] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.626 [2024-06-07 16:19:45.154493] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.626 [2024-06-07 16:19:45.154523] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.626 [2024-06-07 16:19:45.154556] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.626 [2024-06-07 16:19:45.154584] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.626 [2024-06-07 16:19:45.154622] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.626 [2024-06-07 16:19:45.154647] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.626 [2024-06-07 16:19:45.154675] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.626 [2024-06-07 16:19:45.154702] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.626 [2024-06-07 16:19:45.154735] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.626 [2024-06-07 16:19:45.154769] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.626 [2024-06-07 16:19:45.154797] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.626 [2024-06-07 16:19:45.154829] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.626 [2024-06-07 16:19:45.154855] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.626 [2024-06-07 16:19:45.154893] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.626 [2024-06-07 16:19:45.154921] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.626 [2024-06-07 16:19:45.154949] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.626 [2024-06-07 16:19:45.154975] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.626 [2024-06-07 16:19:45.155014] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.626 [2024-06-07 16:19:45.155037] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:11:18.626 [2024-06-07 16:19:45.155066] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.626 [2024-06-07 16:19:45.155097] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.626 [2024-06-07 16:19:45.155123] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.626 [2024-06-07 16:19:45.155147] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.626 [2024-06-07 16:19:45.155177] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.626 [2024-06-07 16:19:45.155203] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.627 [2024-06-07 16:19:45.155233] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.627 [2024-06-07 16:19:45.155263] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.627 [2024-06-07 16:19:45.155288] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.627 [2024-06-07 16:19:45.155321] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.627 [2024-06-07 16:19:45.155345] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.627 [2024-06-07 16:19:45.155372] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.627 [2024-06-07 16:19:45.155400] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.627 [2024-06-07 16:19:45.155430] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.627 [2024-06-07 
16:19:45.155460] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.627 [2024-06-07 16:19:45.155602] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.627 [2024-06-07 16:19:45.155630] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.627 [2024-06-07 16:19:45.155659] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.627 [2024-06-07 16:19:45.155687] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.627 [2024-06-07 16:19:45.155929] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.627 [2024-06-07 16:19:45.155959] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.627 [2024-06-07 16:19:45.155985] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.627 [2024-06-07 16:19:45.156010] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.627 [2024-06-07 16:19:45.156038] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.627 [2024-06-07 16:19:45.156065] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.627 [2024-06-07 16:19:45.156095] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.627 [2024-06-07 16:19:45.156123] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.627 [2024-06-07 16:19:45.156151] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.627 [2024-06-07 16:19:45.156180] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.627 [2024-06-07 16:19:45.156213] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.627 [2024-06-07 16:19:45.156236] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.627 [2024-06-07 16:19:45.156263] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.627 [2024-06-07 16:19:45.156297] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.627 [2024-06-07 16:19:45.156330] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.627 [2024-06-07 16:19:45.156353] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.627 [2024-06-07 16:19:45.156383] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.627 [2024-06-07 16:19:45.156413] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.627 [2024-06-07 16:19:45.156442] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.627 [2024-06-07 16:19:45.156471] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.627 [2024-06-07 16:19:45.156500] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.627 [2024-06-07 16:19:45.156536] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.627 [2024-06-07 16:19:45.156565] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.627 [2024-06-07 16:19:45.156593] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.627 
[2024-06-07 16:19:45.156620] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.627 [2024-06-07 16:19:45.156643] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.627 [2024-06-07 16:19:45.156671] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.627 [2024-06-07 16:19:45.156726] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.627 [2024-06-07 16:19:45.156753] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.627 [2024-06-07 16:19:45.156783] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.627 [2024-06-07 16:19:45.156813] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.627 [2024-06-07 16:19:45.156841] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.627 [2024-06-07 16:19:45.156870] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.627 [2024-06-07 16:19:45.156900] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.627 [2024-06-07 16:19:45.156928] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.627 [2024-06-07 16:19:45.156955] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.627 [2024-06-07 16:19:45.156982] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.627 [2024-06-07 16:19:45.157011] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.627 [2024-06-07 16:19:45.157040] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.627 [2024-06-07 16:19:45.157071] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.629 Message suppressed 999 times: Read completed with error (sct=0, sc=15)
[2024-06-07 16:19:45.166347] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.630
16:19:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:11:18.630
16:19:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:11:18.630
[2024-06-07 16:19:45.166792] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.630
[identical nvmf_bdev_ctrlr_read_cmd *ERROR* lines trimmed]
[2024-06-07 16:19:45.167110] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.630
[identical nvmf_bdev_ctrlr_read_cmd *ERROR* lines trimmed]
[2024-06-07 
16:19:45.176078] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.633 [2024-06-07 16:19:45.176112] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.633 [2024-06-07 16:19:45.176137] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.633 [2024-06-07 16:19:45.176172] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.633 [2024-06-07 16:19:45.176202] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.633 [2024-06-07 16:19:45.176230] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.633 [2024-06-07 16:19:45.176259] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.633 [2024-06-07 16:19:45.176288] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.633 [2024-06-07 16:19:45.176319] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.633 [2024-06-07 16:19:45.176346] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.633 [2024-06-07 16:19:45.176383] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.633 [2024-06-07 16:19:45.176418] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.633 [2024-06-07 16:19:45.176449] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.633 [2024-06-07 16:19:45.176476] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.633 [2024-06-07 16:19:45.176499] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.633 [2024-06-07 16:19:45.176529] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.633 [2024-06-07 16:19:45.176557] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.633 [2024-06-07 16:19:45.176584] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.633 [2024-06-07 16:19:45.176611] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.633 [2024-06-07 16:19:45.176638] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.633 [2024-06-07 16:19:45.176666] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.633 [2024-06-07 16:19:45.176694] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.633 [2024-06-07 16:19:45.176720] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.633 [2024-06-07 16:19:45.176747] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.633 [2024-06-07 16:19:45.176777] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.633 [2024-06-07 16:19:45.176803] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.633 [2024-06-07 16:19:45.176831] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.633 [2024-06-07 16:19:45.176860] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.633 [2024-06-07 16:19:45.176891] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.633 
[2024-06-07 16:19:45.176920] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.633 [2024-06-07 16:19:45.176948] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.633 [2024-06-07 16:19:45.176976] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.633 [2024-06-07 16:19:45.177004] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.633 [2024-06-07 16:19:45.177036] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.633 [2024-06-07 16:19:45.177062] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.633 [2024-06-07 16:19:45.177088] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.633 [2024-06-07 16:19:45.177111] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.633 [2024-06-07 16:19:45.177134] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.633 [2024-06-07 16:19:45.177166] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.633 [2024-06-07 16:19:45.177195] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.633 [2024-06-07 16:19:45.177227] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.633 [2024-06-07 16:19:45.177259] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.633 [2024-06-07 16:19:45.177287] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.633 [2024-06-07 16:19:45.177317] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.633 [2024-06-07 16:19:45.177352] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.633 [2024-06-07 16:19:45.177379] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.633 [2024-06-07 16:19:45.177407] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.633 [2024-06-07 16:19:45.177540] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.633 [2024-06-07 16:19:45.177571] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.633 [2024-06-07 16:19:45.177599] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.633 [2024-06-07 16:19:45.177623] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.633 [2024-06-07 16:19:45.177654] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.633 [2024-06-07 16:19:45.177683] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.633 [2024-06-07 16:19:45.177717] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.633 [2024-06-07 16:19:45.177746] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.633 [2024-06-07 16:19:45.177773] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.633 [2024-06-07 16:19:45.177799] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.633 [2024-06-07 16:19:45.177831] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:11:18.634 [2024-06-07 16:19:45.177865] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.634 [2024-06-07 16:19:45.177897] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.634 [2024-06-07 16:19:45.177928] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.634 [2024-06-07 16:19:45.177956] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.634 [2024-06-07 16:19:45.177983] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.634 [2024-06-07 16:19:45.178011] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.634 [2024-06-07 16:19:45.178236] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.634 [2024-06-07 16:19:45.178265] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.634 [2024-06-07 16:19:45.178293] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.634 [2024-06-07 16:19:45.178323] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.634 [2024-06-07 16:19:45.178352] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.634 [2024-06-07 16:19:45.178383] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.634 [2024-06-07 16:19:45.178414] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.634 [2024-06-07 16:19:45.178441] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.634 [2024-06-07 16:19:45.178470] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.634 [2024-06-07 16:19:45.178499] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.634 [2024-06-07 16:19:45.178525] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.634 [2024-06-07 16:19:45.178554] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.634 [2024-06-07 16:19:45.178584] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.634 [2024-06-07 16:19:45.178614] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.634 [2024-06-07 16:19:45.178644] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.634 [2024-06-07 16:19:45.178675] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.634 [2024-06-07 16:19:45.178705] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.634 [2024-06-07 16:19:45.178736] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.634 [2024-06-07 16:19:45.178765] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.634 [2024-06-07 16:19:45.178793] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.634 [2024-06-07 16:19:45.178822] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.634 [2024-06-07 16:19:45.178849] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.634 [2024-06-07 16:19:45.178877] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:11:18.634 [2024-06-07 16:19:45.178905] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.634 [2024-06-07 16:19:45.178933] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.634 [2024-06-07 16:19:45.178962] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.634 [2024-06-07 16:19:45.178991] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.634 [2024-06-07 16:19:45.179018] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.634 [2024-06-07 16:19:45.179047] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.634 [2024-06-07 16:19:45.179075] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.634 [2024-06-07 16:19:45.179108] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.634 [2024-06-07 16:19:45.179136] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.634 [2024-06-07 16:19:45.179173] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.634 [2024-06-07 16:19:45.179200] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.634 [2024-06-07 16:19:45.179226] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.634 [2024-06-07 16:19:45.179255] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.634 [2024-06-07 16:19:45.179296] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.634 [2024-06-07 
16:19:45.179323] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.634 [2024-06-07 16:19:45.179372] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.634 [2024-06-07 16:19:45.179406] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.634 [2024-06-07 16:19:45.179446] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.634 [2024-06-07 16:19:45.179477] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.634 [2024-06-07 16:19:45.179509] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.634 [2024-06-07 16:19:45.179536] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.634 [2024-06-07 16:19:45.179565] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.634 [2024-06-07 16:19:45.179594] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.634 [2024-06-07 16:19:45.179972] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.634 [2024-06-07 16:19:45.180004] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.634 [2024-06-07 16:19:45.180034] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.634 [2024-06-07 16:19:45.180061] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.634 [2024-06-07 16:19:45.180099] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.634 [2024-06-07 16:19:45.180128] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.634 [2024-06-07 16:19:45.180164] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.634 [2024-06-07 16:19:45.180194] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.634 [2024-06-07 16:19:45.180252] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.634 [2024-06-07 16:19:45.180279] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.634 [2024-06-07 16:19:45.180320] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.634 [2024-06-07 16:19:45.180352] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.634 [2024-06-07 16:19:45.180384] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.634 [2024-06-07 16:19:45.180415] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.634 [2024-06-07 16:19:45.180446] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.634 [2024-06-07 16:19:45.180474] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.634 [2024-06-07 16:19:45.180503] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.634 [2024-06-07 16:19:45.180535] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.634 [2024-06-07 16:19:45.180564] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.634 [2024-06-07 16:19:45.180593] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.634 
[2024-06-07 16:19:45.180619] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.634 [2024-06-07 16:19:45.180646] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.634 [2024-06-07 16:19:45.180674] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.634 [2024-06-07 16:19:45.180706] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.634 [2024-06-07 16:19:45.180733] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.634 [2024-06-07 16:19:45.180762] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.634 [2024-06-07 16:19:45.180789] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.634 [2024-06-07 16:19:45.180817] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.634 [2024-06-07 16:19:45.180842] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.634 [2024-06-07 16:19:45.180871] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.634 [2024-06-07 16:19:45.180900] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.634 [2024-06-07 16:19:45.180931] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.634 [2024-06-07 16:19:45.180960] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.634 [2024-06-07 16:19:45.180992] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.634 [2024-06-07 16:19:45.181244] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.634 [2024-06-07 16:19:45.181271] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.634 [2024-06-07 16:19:45.181300] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.634 [2024-06-07 16:19:45.181332] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.634 [2024-06-07 16:19:45.181367] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.634 [2024-06-07 16:19:45.181399] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.634 [2024-06-07 16:19:45.181437] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.634 [2024-06-07 16:19:45.181460] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.634 [2024-06-07 16:19:45.181492] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.634 [2024-06-07 16:19:45.181522] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.634 [2024-06-07 16:19:45.181549] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.634 [2024-06-07 16:19:45.181578] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.635 [2024-06-07 16:19:45.181607] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.635 [2024-06-07 16:19:45.181635] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.635 [2024-06-07 16:19:45.181661] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:11:18.635 [2024-06-07 16:19:45.181688] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.635 [2024-06-07 16:19:45.181713] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.635 [2024-06-07 16:19:45.181740] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.635 [2024-06-07 16:19:45.181767] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.635 [2024-06-07 16:19:45.181794] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.635 [2024-06-07 16:19:45.181820] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.635 [2024-06-07 16:19:45.181848] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.635 [2024-06-07 16:19:45.181877] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.635 [2024-06-07 16:19:45.181910] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.635 [2024-06-07 16:19:45.181946] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.635 [2024-06-07 16:19:45.181975] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.635 [2024-06-07 16:19:45.181998] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.635 [2024-06-07 16:19:45.182027] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.635 [2024-06-07 16:19:45.182055] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.635 [2024-06-07 16:19:45.182086] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.635 [2024-06-07 16:19:45.182113] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.635 [2024-06-07 16:19:45.182141] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.635 [2024-06-07 16:19:45.182171] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.635 [2024-06-07 16:19:45.182197] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.635 [2024-06-07 16:19:45.182225] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.635 [2024-06-07 16:19:45.182253] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.635 [2024-06-07 16:19:45.182280] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.635 [2024-06-07 16:19:45.182309] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.635 [2024-06-07 16:19:45.182337] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.635 [2024-06-07 16:19:45.182365] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.635 [2024-06-07 16:19:45.182399] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.635 [2024-06-07 16:19:45.182431] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.635 [2024-06-07 16:19:45.182485] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.635 [2024-06-07 16:19:45.182514] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:11:18.635 [2024-06-07 16:19:45.182549] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[... same *ERROR* line repeated for timestamps 16:19:45.182576 through 16:19:45.193156 ...]
00:11:18.638 [2024-06-07 16:19:45.193182] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:11:18.638 [2024-06-07 16:19:45.193365] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.638 [2024-06-07 16:19:45.193397] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.639 [2024-06-07 16:19:45.193431] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.639 [2024-06-07 16:19:45.193461] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.639 [2024-06-07 16:19:45.193487] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.639 [2024-06-07 16:19:45.193514] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.639 [2024-06-07 16:19:45.193540] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.639 [2024-06-07 16:19:45.193570] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.639 [2024-06-07 16:19:45.193596] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.639 [2024-06-07 16:19:45.193631] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.639 [2024-06-07 16:19:45.193664] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.639 [2024-06-07 16:19:45.193696] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.639 [2024-06-07 16:19:45.193730] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.639 [2024-06-07 16:19:45.193767] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.639 [2024-06-07 
16:19:45.193801] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.639 [2024-06-07 16:19:45.193830] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.639 [2024-06-07 16:19:45.193859] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.639 [2024-06-07 16:19:45.193887] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.639 [2024-06-07 16:19:45.193913] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.639 [2024-06-07 16:19:45.193940] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.639 [2024-06-07 16:19:45.193969] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.639 [2024-06-07 16:19:45.193997] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.639 [2024-06-07 16:19:45.194026] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.639 [2024-06-07 16:19:45.194056] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.639 [2024-06-07 16:19:45.194085] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.639 [2024-06-07 16:19:45.194117] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.639 [2024-06-07 16:19:45.194145] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.639 [2024-06-07 16:19:45.194173] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.639 [2024-06-07 16:19:45.194202] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.639 [2024-06-07 16:19:45.194231] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.639 [2024-06-07 16:19:45.194258] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.639 [2024-06-07 16:19:45.194286] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.639 [2024-06-07 16:19:45.194314] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.639 [2024-06-07 16:19:45.194342] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.639 [2024-06-07 16:19:45.194370] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.639 [2024-06-07 16:19:45.194398] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.639 [2024-06-07 16:19:45.194443] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.639 [2024-06-07 16:19:45.194470] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.639 [2024-06-07 16:19:45.194509] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.639 [2024-06-07 16:19:45.194539] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.639 [2024-06-07 16:19:45.194570] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.639 [2024-06-07 16:19:45.194598] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.639 [2024-06-07 16:19:45.194640] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.639 
[2024-06-07 16:19:45.194668] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.639 [2024-06-07 16:19:45.194704] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.639 [2024-06-07 16:19:45.194733] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.639 [2024-06-07 16:19:45.194787] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.639 [2024-06-07 16:19:45.194818] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.639 [2024-06-07 16:19:45.194846] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.639 [2024-06-07 16:19:45.194876] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.639 [2024-06-07 16:19:45.194906] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.639 [2024-06-07 16:19:45.194937] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.639 [2024-06-07 16:19:45.194994] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.639 [2024-06-07 16:19:45.195024] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.639 [2024-06-07 16:19:45.195054] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.639 [2024-06-07 16:19:45.195080] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.639 [2024-06-07 16:19:45.195131] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.639 [2024-06-07 16:19:45.195161] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.639 [2024-06-07 16:19:45.195190] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.639 [2024-06-07 16:19:45.195219] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.639 [2024-06-07 16:19:45.195249] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.639 [2024-06-07 16:19:45.195277] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.639 [2024-06-07 16:19:45.195310] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.639 [2024-06-07 16:19:45.195340] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.639 [2024-06-07 16:19:45.195622] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.639 [2024-06-07 16:19:45.195650] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.639 [2024-06-07 16:19:45.195678] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.639 [2024-06-07 16:19:45.195708] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.639 [2024-06-07 16:19:45.195735] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.639 [2024-06-07 16:19:45.195761] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.639 [2024-06-07 16:19:45.195795] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.639 [2024-06-07 16:19:45.195826] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:11:18.639 [2024-06-07 16:19:45.195858] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.639 [2024-06-07 16:19:45.195884] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.639 [2024-06-07 16:19:45.195909] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.639 [2024-06-07 16:19:45.195938] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.639 [2024-06-07 16:19:45.195964] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.639 [2024-06-07 16:19:45.195988] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.639 [2024-06-07 16:19:45.196016] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.639 [2024-06-07 16:19:45.196042] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.639 [2024-06-07 16:19:45.196070] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.639 [2024-06-07 16:19:45.196093] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.639 [2024-06-07 16:19:45.196123] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.639 [2024-06-07 16:19:45.196152] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.639 [2024-06-07 16:19:45.196182] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.639 [2024-06-07 16:19:45.196213] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.639 [2024-06-07 16:19:45.196244] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.639 [2024-06-07 16:19:45.196273] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.639 [2024-06-07 16:19:45.196303] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.639 [2024-06-07 16:19:45.196333] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.639 [2024-06-07 16:19:45.196365] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.639 [2024-06-07 16:19:45.196394] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.639 [2024-06-07 16:19:45.196427] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.639 [2024-06-07 16:19:45.196455] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.639 [2024-06-07 16:19:45.196491] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.639 [2024-06-07 16:19:45.196518] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.640 [2024-06-07 16:19:45.196546] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.640 [2024-06-07 16:19:45.196575] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.640 [2024-06-07 16:19:45.196605] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.640 [2024-06-07 16:19:45.196635] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.640 [2024-06-07 16:19:45.196664] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:11:18.640 [2024-06-07 16:19:45.196688] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.640 [2024-06-07 16:19:45.196719] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.640 [2024-06-07 16:19:45.196747] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.640 [2024-06-07 16:19:45.196776] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.640 [2024-06-07 16:19:45.196801] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.640 [2024-06-07 16:19:45.196830] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.640 [2024-06-07 16:19:45.196857] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.640 [2024-06-07 16:19:45.196885] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.640 [2024-06-07 16:19:45.196913] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.640 [2024-06-07 16:19:45.196942] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.640 [2024-06-07 16:19:45.196973] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.640 [2024-06-07 16:19:45.197003] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.640 [2024-06-07 16:19:45.197030] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.640 [2024-06-07 16:19:45.197058] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.640 [2024-06-07 
16:19:45.197084] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.640 [2024-06-07 16:19:45.197108] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.640 [2024-06-07 16:19:45.197138] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.640 [2024-06-07 16:19:45.197165] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.640 [2024-06-07 16:19:45.197188] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.640 [2024-06-07 16:19:45.197212] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.640 [2024-06-07 16:19:45.197236] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.640 [2024-06-07 16:19:45.197258] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.640 [2024-06-07 16:19:45.197282] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.640 [2024-06-07 16:19:45.197306] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.640 [2024-06-07 16:19:45.197335] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.640 [2024-06-07 16:19:45.197364] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.640 [2024-06-07 16:19:45.197773] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.640 [2024-06-07 16:19:45.197803] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.640 [2024-06-07 16:19:45.197832] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.640 [2024-06-07 16:19:45.197862] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.640 [2024-06-07 16:19:45.197890] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.640 [2024-06-07 16:19:45.197943] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.640 [2024-06-07 16:19:45.197973] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.640 [2024-06-07 16:19:45.198013] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.640 [2024-06-07 16:19:45.198042] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.640 [2024-06-07 16:19:45.198072] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.640 [2024-06-07 16:19:45.198098] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.640 [2024-06-07 16:19:45.198128] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.640 [2024-06-07 16:19:45.198156] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.640 [2024-06-07 16:19:45.198189] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.640 [2024-06-07 16:19:45.198224] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.640 [2024-06-07 16:19:45.198260] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.640 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:11:18.640 [2024-06-07 16:19:45.198291] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.640 [2024-06-07 16:19:45.198322] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.640 [2024-06-07 16:19:45.198346] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.640 [2024-06-07 16:19:45.198378] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.640 [2024-06-07 16:19:45.198408] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.640 [2024-06-07 16:19:45.198432] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.640 [2024-06-07 16:19:45.198456] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.640 [2024-06-07 16:19:45.198479] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.640 [2024-06-07 16:19:45.198503] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.640 [2024-06-07 16:19:45.198527] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.640 [2024-06-07 16:19:45.198551] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.640 [2024-06-07 16:19:45.198575] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.640 [2024-06-07 16:19:45.198611] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.640 [2024-06-07 16:19:45.198634] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.640 [2024-06-07 16:19:45.198658] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:11:18.640 [2024-06-07 16:19:45.198688] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.640 [2024-06-07 16:19:45.198718] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.640 [2024-06-07 16:19:45.198748] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.640 [2024-06-07 16:19:45.198777] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.640 [2024-06-07 16:19:45.198806] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.640 [2024-06-07 16:19:45.198838] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.640 [2024-06-07 16:19:45.198862] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.640 [2024-06-07 16:19:45.198892] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.640 [2024-06-07 16:19:45.198920] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.640 [2024-06-07 16:19:45.198947] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.640 [2024-06-07 16:19:45.198978] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.640 [2024-06-07 16:19:45.199006] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.640 [2024-06-07 16:19:45.199054] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.640 [2024-06-07 16:19:45.199082] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.640 [2024-06-07 16:19:45.199119] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.640 [2024-06-07 16:19:45.199147] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.640 [2024-06-07 16:19:45.199176] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.640 [2024-06-07 16:19:45.199205] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.640 [2024-06-07 16:19:45.199253] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.640 [2024-06-07 16:19:45.199278] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.640 [2024-06-07 16:19:45.199313] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.640 [2024-06-07 16:19:45.199347] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.640 [2024-06-07 16:19:45.199377] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.640 [2024-06-07 16:19:45.199410] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.640 [2024-06-07 16:19:45.199440] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.640 [2024-06-07 16:19:45.199469] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.640 [2024-06-07 16:19:45.199499] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.640 [2024-06-07 16:19:45.199527] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.640 [2024-06-07 16:19:45.199566] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:11:18.640 [2024-06-07 16:19:45.199596] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[identical "ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1" lines repeated through 2024-06-07 16:19:45.210157; duplicates omitted]
00:11:18.644 [2024-06-07 16:19:45.210157] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:11:18.644 [2024-06-07 16:19:45.210184] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.644 [2024-06-07 16:19:45.210207] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.644 [2024-06-07 16:19:45.210232] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.644 [2024-06-07 16:19:45.210255] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.644 [2024-06-07 16:19:45.210278] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.644 [2024-06-07 16:19:45.210300] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.644 [2024-06-07 16:19:45.210437] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.644 [2024-06-07 16:19:45.210463] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.644 [2024-06-07 16:19:45.210492] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.644 [2024-06-07 16:19:45.210527] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.644 [2024-06-07 16:19:45.210556] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.644 [2024-06-07 16:19:45.210585] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.644 [2024-06-07 16:19:45.210613] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.644 [2024-06-07 16:19:45.210647] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.644 [2024-06-07 
16:19:45.210679] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.644 [2024-06-07 16:19:45.210703] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.644 [2024-06-07 16:19:45.210731] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.644 [2024-06-07 16:19:45.210760] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.644 [2024-06-07 16:19:45.210786] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.644 [2024-06-07 16:19:45.210818] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.644 [2024-06-07 16:19:45.210848] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.644 [2024-06-07 16:19:45.210877] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.644 [2024-06-07 16:19:45.210908] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.644 [2024-06-07 16:19:45.210938] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.644 [2024-06-07 16:19:45.210966] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.644 [2024-06-07 16:19:45.210996] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.644 [2024-06-07 16:19:45.211027] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.644 [2024-06-07 16:19:45.211054] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.644 [2024-06-07 16:19:45.211107] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.644 [2024-06-07 16:19:45.211135] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.644 [2024-06-07 16:19:45.211162] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.644 [2024-06-07 16:19:45.211188] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.644 [2024-06-07 16:19:45.211217] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.644 [2024-06-07 16:19:45.211245] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.645 [2024-06-07 16:19:45.211274] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.645 [2024-06-07 16:19:45.211303] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.645 [2024-06-07 16:19:45.211334] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.645 [2024-06-07 16:19:45.211362] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.645 [2024-06-07 16:19:45.211413] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.645 [2024-06-07 16:19:45.211443] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.645 [2024-06-07 16:19:45.211495] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.645 [2024-06-07 16:19:45.211523] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.645 [2024-06-07 16:19:45.211553] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.645 
[2024-06-07 16:19:45.211581] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.645 [2024-06-07 16:19:45.211610] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.645 [2024-06-07 16:19:45.211640] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.645 [2024-06-07 16:19:45.211669] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.645 [2024-06-07 16:19:45.211700] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.645 [2024-06-07 16:19:45.211727] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.645 [2024-06-07 16:19:45.211757] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.645 [2024-06-07 16:19:45.211783] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.645 [2024-06-07 16:19:45.211810] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.645 [2024-06-07 16:19:45.211839] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.645 [2024-06-07 16:19:45.211862] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.645 [2024-06-07 16:19:45.211891] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.645 [2024-06-07 16:19:45.211919] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.645 [2024-06-07 16:19:45.211946] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.645 [2024-06-07 16:19:45.211972] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.645 [2024-06-07 16:19:45.211996] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.645 [2024-06-07 16:19:45.212024] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.645 [2024-06-07 16:19:45.212051] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.645 [2024-06-07 16:19:45.212078] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.645 [2024-06-07 16:19:45.212107] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.645 [2024-06-07 16:19:45.212136] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.645 [2024-06-07 16:19:45.212166] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.645 [2024-06-07 16:19:45.212195] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.645 [2024-06-07 16:19:45.212222] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.645 [2024-06-07 16:19:45.212250] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.645 [2024-06-07 16:19:45.212283] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.645 [2024-06-07 16:19:45.212312] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.645 [2024-06-07 16:19:45.212450] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.645 [2024-06-07 16:19:45.212479] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:11:18.645 [2024-06-07 16:19:45.212508] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.645 [2024-06-07 16:19:45.212543] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.645 [2024-06-07 16:19:45.212570] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.645 [2024-06-07 16:19:45.212602] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.645 [2024-06-07 16:19:45.212632] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.645 [2024-06-07 16:19:45.212687] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.645 [2024-06-07 16:19:45.212713] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.645 [2024-06-07 16:19:45.212743] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.645 [2024-06-07 16:19:45.212772] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.645 [2024-06-07 16:19:45.212809] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.645 [2024-06-07 16:19:45.213047] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.645 [2024-06-07 16:19:45.213075] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.645 [2024-06-07 16:19:45.213104] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.645 [2024-06-07 16:19:45.213134] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.645 [2024-06-07 16:19:45.213163] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.645 [2024-06-07 16:19:45.213192] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.645 [2024-06-07 16:19:45.213224] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.645 [2024-06-07 16:19:45.213252] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.645 [2024-06-07 16:19:45.213281] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.645 [2024-06-07 16:19:45.213314] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.645 [2024-06-07 16:19:45.213342] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.645 [2024-06-07 16:19:45.213371] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.645 [2024-06-07 16:19:45.213405] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.645 [2024-06-07 16:19:45.213446] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.645 [2024-06-07 16:19:45.213481] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.645 [2024-06-07 16:19:45.213516] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.645 [2024-06-07 16:19:45.213546] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.645 [2024-06-07 16:19:45.213580] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.645 [2024-06-07 16:19:45.213614] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:11:18.645 [2024-06-07 16:19:45.213644] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.645 [2024-06-07 16:19:45.213671] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.645 [2024-06-07 16:19:45.213699] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.645 [2024-06-07 16:19:45.213724] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.645 [2024-06-07 16:19:45.213754] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.645 [2024-06-07 16:19:45.213783] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.645 [2024-06-07 16:19:45.213811] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.645 [2024-06-07 16:19:45.213843] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.645 [2024-06-07 16:19:45.213870] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.645 [2024-06-07 16:19:45.213899] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.645 [2024-06-07 16:19:45.213926] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.645 [2024-06-07 16:19:45.213958] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.645 [2024-06-07 16:19:45.213995] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.645 [2024-06-07 16:19:45.214032] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.645 [2024-06-07 
16:19:45.214063] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.645 [2024-06-07 16:19:45.214090] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.645 [2024-06-07 16:19:45.214114] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.645 [2024-06-07 16:19:45.214143] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.645 [2024-06-07 16:19:45.214173] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.645 [2024-06-07 16:19:45.214202] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.645 [2024-06-07 16:19:45.214231] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.645 [2024-06-07 16:19:45.214259] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.645 [2024-06-07 16:19:45.214290] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.645 [2024-06-07 16:19:45.214324] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.645 [2024-06-07 16:19:45.214353] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.645 [2024-06-07 16:19:45.214381] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.645 [2024-06-07 16:19:45.214413] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.645 [2024-06-07 16:19:45.214444] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.645 [2024-06-07 16:19:45.214470] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.645 [2024-06-07 16:19:45.214500] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.645 [2024-06-07 16:19:45.214529] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.645 [2024-06-07 16:19:45.214558] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.646 [2024-06-07 16:19:45.214930] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.646 [2024-06-07 16:19:45.214961] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.646 [2024-06-07 16:19:45.214986] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.646 [2024-06-07 16:19:45.215012] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.646 [2024-06-07 16:19:45.215039] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.646 [2024-06-07 16:19:45.215066] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.646 [2024-06-07 16:19:45.215097] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.646 [2024-06-07 16:19:45.215127] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.646 [2024-06-07 16:19:45.215156] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.646 [2024-06-07 16:19:45.215184] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.646 [2024-06-07 16:19:45.215212] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.646 
[2024-06-07 16:19:45.215238] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.646 [2024-06-07 16:19:45.215266] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.646 [2024-06-07 16:19:45.215300] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.646 [2024-06-07 16:19:45.215325] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.646 [2024-06-07 16:19:45.215348] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.646 [2024-06-07 16:19:45.215371] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.646 [2024-06-07 16:19:45.215394] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.646 [2024-06-07 16:19:45.215425] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.646 [2024-06-07 16:19:45.215454] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.646 [2024-06-07 16:19:45.215484] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.646 [2024-06-07 16:19:45.215513] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.646 [2024-06-07 16:19:45.215543] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.646 [2024-06-07 16:19:45.215574] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.646 [2024-06-07 16:19:45.215604] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.646 [2024-06-07 16:19:45.215632] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.646 [2024-06-07 16:19:45.215661] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.646 [2024-06-07 16:19:45.215686] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.646 [2024-06-07 16:19:45.215716] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.646 [2024-06-07 16:19:45.215743] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.646 [2024-06-07 16:19:45.215766] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.646 [2024-06-07 16:19:45.215788] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.646 [2024-06-07 16:19:45.215812] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.646 [2024-06-07 16:19:45.215835] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.646 [2024-06-07 16:19:45.215858] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.646 [2024-06-07 16:19:45.215881] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.646 [2024-06-07 16:19:45.215904] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.646 [2024-06-07 16:19:45.215928] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.646 [2024-06-07 16:19:45.215957] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.646 [2024-06-07 16:19:45.215985] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:11:18.646 [2024-06-07 16:19:45.216014] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.646 [2024-06-07 16:19:45.216043] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.646 [2024-06-07 16:19:45.216072] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.646 [2024-06-07 16:19:45.216101] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.646 [2024-06-07 16:19:45.216133] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.646 [2024-06-07 16:19:45.216156] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.646 [2024-06-07 16:19:45.216181] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.646 [2024-06-07 16:19:45.216205] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.646 [2024-06-07 16:19:45.216228] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.646 [2024-06-07 16:19:45.216251] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.646 [2024-06-07 16:19:45.216274] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.646 [2024-06-07 16:19:45.216297] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.646 [2024-06-07 16:19:45.216320] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.646 [2024-06-07 16:19:45.216348] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.646 [2024-06-07 16:19:45.216380] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
> SGL length 1 00:11:18.649 [2024-06-07 16:19:45.226498] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.649 [2024-06-07 16:19:45.226528] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.649 [2024-06-07 16:19:45.226556] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.649 [2024-06-07 16:19:45.226586] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.649 [2024-06-07 16:19:45.226613] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.649 [2024-06-07 16:19:45.226651] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.649 [2024-06-07 16:19:45.226681] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.649 [2024-06-07 16:19:45.226713] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.649 [2024-06-07 16:19:45.226746] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.649 [2024-06-07 16:19:45.226776] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.649 [2024-06-07 16:19:45.226813] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.649 [2024-06-07 16:19:45.226847] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.649 [2024-06-07 16:19:45.226878] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.649 [2024-06-07 16:19:45.226913] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.649 [2024-06-07 16:19:45.226942] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.649 [2024-06-07 16:19:45.226972] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.649 [2024-06-07 16:19:45.226998] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.649 [2024-06-07 16:19:45.227026] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.649 [2024-06-07 16:19:45.227057] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.649 [2024-06-07 16:19:45.227082] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.649 [2024-06-07 16:19:45.227114] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.649 [2024-06-07 16:19:45.227142] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.649 [2024-06-07 16:19:45.227166] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.649 [2024-06-07 16:19:45.227193] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.649 [2024-06-07 16:19:45.227220] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.649 [2024-06-07 16:19:45.227248] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.649 [2024-06-07 16:19:45.227277] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.649 [2024-06-07 16:19:45.227303] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.649 [2024-06-07 16:19:45.227332] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:11:18.649 [2024-06-07 16:19:45.227361] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.649 [2024-06-07 16:19:45.227389] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.649 [2024-06-07 16:19:45.227414] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.650 [2024-06-07 16:19:45.227442] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.650 [2024-06-07 16:19:45.227472] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.650 [2024-06-07 16:19:45.227505] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.650 [2024-06-07 16:19:45.227531] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.650 [2024-06-07 16:19:45.227557] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.650 [2024-06-07 16:19:45.227582] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.650 [2024-06-07 16:19:45.228004] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.650 [2024-06-07 16:19:45.228035] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.650 [2024-06-07 16:19:45.228060] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.650 [2024-06-07 16:19:45.228094] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.650 [2024-06-07 16:19:45.228125] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.650 [2024-06-07 
16:19:45.228148] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.650 [2024-06-07 16:19:45.228176] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.650 [2024-06-07 16:19:45.228202] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.650 [2024-06-07 16:19:45.228225] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.650 [2024-06-07 16:19:45.228247] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.650 [2024-06-07 16:19:45.228271] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.650 [2024-06-07 16:19:45.228294] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.650 [2024-06-07 16:19:45.228318] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.650 [2024-06-07 16:19:45.228342] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.650 [2024-06-07 16:19:45.228365] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.650 [2024-06-07 16:19:45.228388] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.650 [2024-06-07 16:19:45.228418] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.650 [2024-06-07 16:19:45.228447] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.650 [2024-06-07 16:19:45.228477] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.650 [2024-06-07 16:19:45.228507] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.650 [2024-06-07 16:19:45.228530] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.650 [2024-06-07 16:19:45.228554] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.650 [2024-06-07 16:19:45.228577] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.650 [2024-06-07 16:19:45.228600] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.650 [2024-06-07 16:19:45.228623] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.650 [2024-06-07 16:19:45.228647] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.650 [2024-06-07 16:19:45.228670] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.650 [2024-06-07 16:19:45.228693] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.650 [2024-06-07 16:19:45.228716] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.650 [2024-06-07 16:19:45.228739] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.650 [2024-06-07 16:19:45.228763] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.650 [2024-06-07 16:19:45.228786] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.650 [2024-06-07 16:19:45.228810] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.650 [2024-06-07 16:19:45.228834] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.650 
[2024-06-07 16:19:45.228857] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.650 [2024-06-07 16:19:45.228880] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.650 [2024-06-07 16:19:45.228906] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.650 [2024-06-07 16:19:45.228929] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.650 [2024-06-07 16:19:45.228955] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.650 [2024-06-07 16:19:45.228985] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.650 [2024-06-07 16:19:45.229013] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.650 [2024-06-07 16:19:45.229040] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.650 [2024-06-07 16:19:45.229071] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.650 [2024-06-07 16:19:45.229103] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.650 [2024-06-07 16:19:45.229132] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.650 [2024-06-07 16:19:45.229161] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.650 [2024-06-07 16:19:45.229190] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.650 [2024-06-07 16:19:45.229218] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.650 [2024-06-07 16:19:45.229249] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.650 [2024-06-07 16:19:45.229281] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.650 [2024-06-07 16:19:45.229308] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.650 [2024-06-07 16:19:45.229336] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.650 [2024-06-07 16:19:45.229359] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.650 [2024-06-07 16:19:45.229382] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.650 [2024-06-07 16:19:45.229408] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.650 [2024-06-07 16:19:45.229431] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.650 [2024-06-07 16:19:45.229454] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.650 [2024-06-07 16:19:45.229477] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.650 [2024-06-07 16:19:45.229500] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.650 [2024-06-07 16:19:45.229523] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.650 [2024-06-07 16:19:45.229547] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.650 [2024-06-07 16:19:45.229570] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.650 [2024-06-07 16:19:45.229593] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:11:18.650 [2024-06-07 16:19:45.229617] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.650 [2024-06-07 16:19:45.229903] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.650 [2024-06-07 16:19:45.229929] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.650 [2024-06-07 16:19:45.229952] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.650 [2024-06-07 16:19:45.229976] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.650 [2024-06-07 16:19:45.229999] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.650 [2024-06-07 16:19:45.230023] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.650 [2024-06-07 16:19:45.230048] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.650 [2024-06-07 16:19:45.230071] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.650 [2024-06-07 16:19:45.230093] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.650 [2024-06-07 16:19:45.230116] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.650 [2024-06-07 16:19:45.230139] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.650 [2024-06-07 16:19:45.230163] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.650 [2024-06-07 16:19:45.230189] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.650 [2024-06-07 16:19:45.230218] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.650 [2024-06-07 16:19:45.230246] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.650 [2024-06-07 16:19:45.230273] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.650 [2024-06-07 16:19:45.230301] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.650 [2024-06-07 16:19:45.230599] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.650 [2024-06-07 16:19:45.230627] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.650 [2024-06-07 16:19:45.230654] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.650 [2024-06-07 16:19:45.230680] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.650 [2024-06-07 16:19:45.230708] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.650 [2024-06-07 16:19:45.230737] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.650 [2024-06-07 16:19:45.230767] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.650 [2024-06-07 16:19:45.230795] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.650 [2024-06-07 16:19:45.230825] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.650 [2024-06-07 16:19:45.230850] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.650 [2024-06-07 16:19:45.230877] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:11:18.650 [2024-06-07 16:19:45.230905] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.651 [2024-06-07 16:19:45.230933] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.651 [2024-06-07 16:19:45.230962] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.651 [2024-06-07 16:19:45.231018] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.651 [2024-06-07 16:19:45.231046] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.651 [2024-06-07 16:19:45.231101] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.651 [2024-06-07 16:19:45.231130] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.651 [2024-06-07 16:19:45.231161] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.651 [2024-06-07 16:19:45.231190] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.651 [2024-06-07 16:19:45.231225] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.651 [2024-06-07 16:19:45.231256] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.651 [2024-06-07 16:19:45.231284] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.651 [2024-06-07 16:19:45.231310] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.651 [2024-06-07 16:19:45.231342] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.651 [2024-06-07 
16:19:45.231370] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.651 [2024-06-07 16:19:45.231399] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.651 [2024-06-07 16:19:45.231428] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.651 [2024-06-07 16:19:45.231456] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.651 [2024-06-07 16:19:45.231484] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.651 [2024-06-07 16:19:45.231517] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.651 [2024-06-07 16:19:45.231551] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.651 [2024-06-07 16:19:45.231581] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.651 [2024-06-07 16:19:45.231614] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.651 [2024-06-07 16:19:45.231646] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.651 [2024-06-07 16:19:45.231672] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.651 [2024-06-07 16:19:45.231702] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.651 [2024-06-07 16:19:45.231727] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.651 [2024-06-07 16:19:45.231758] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.651 [2024-06-07 16:19:45.231790] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.651 [2024-06-07 16:19:45.231824] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.651 [2024-06-07 16:19:45.231853] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.651 [2024-06-07 16:19:45.231879] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.651 [2024-06-07 16:19:45.231906] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.651 [2024-06-07 16:19:45.231940] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.651 [2024-06-07 16:19:45.231969] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.651 [2024-06-07 16:19:45.232158] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.651 [2024-06-07 16:19:45.232205] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.651 [2024-06-07 16:19:45.232234] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.651 [2024-06-07 16:19:45.232264] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.651 [2024-06-07 16:19:45.232291] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.651 [2024-06-07 16:19:45.232322] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.651 [2024-06-07 16:19:45.232353] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.651 [2024-06-07 16:19:45.232383] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.651 
[2024-06-07 16:19:45.232414] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.651 [2024-06-07 16:19:45.232442] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.651 [2024-06-07 16:19:45.232473] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.651 [2024-06-07 16:19:45.232502] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.651 [2024-06-07 16:19:45.232531] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.651 [2024-06-07 16:19:45.232560] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.651 [2024-06-07 16:19:45.232592] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.651 [2024-06-07 16:19:45.232622] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.651 [2024-06-07 16:19:45.232651] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.651 [2024-06-07 16:19:45.232680] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.651 [2024-06-07 16:19:45.232707] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.651 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:11:18.651 [2024-06-07 16:19:45.232738] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.651 [2024-06-07 16:19:45.232767] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.651 [2024-06-07 16:19:45.232800] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL 
length 1 00:11:18.651 [2024-06-07 16:19:45.232826] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.651 [2024-06-07 16:19:45.232856] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.651 [2024-06-07 16:19:45.232885] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.651 [2024-06-07 16:19:45.232916] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.651 [2024-06-07 16:19:45.232944] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.651 [2024-06-07 16:19:45.232997] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.651 [2024-06-07 16:19:45.233027] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.651 [2024-06-07 16:19:45.233056] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.651 [2024-06-07 16:19:45.233086] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.651 [2024-06-07 16:19:45.233116] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.651 [2024-06-07 16:19:45.233144] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.651 [2024-06-07 16:19:45.233174] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.651 [2024-06-07 16:19:45.233202] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.651 [2024-06-07 16:19:45.233232] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.651 [2024-06-07 16:19:45.233257] ctrlr_bdev.c: 
00:11:18.651 [2024-06-07 16:19:45.233283] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.654 [2024-06-07 16:19:45.243224] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.654 [2024-06-07 16:19:45.243249] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.654 [2024-06-07 16:19:45.243272] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.654 [2024-06-07 16:19:45.243297] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.655 [2024-06-07 16:19:45.243320] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.655 [2024-06-07 16:19:45.243343] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.655 [2024-06-07 16:19:45.243368] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.655 [2024-06-07 16:19:45.243391] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.655 [2024-06-07 16:19:45.243416] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.655 [2024-06-07 16:19:45.243440] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.655 [2024-06-07 16:19:45.243463] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.655 [2024-06-07 16:19:45.243486] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.655 [2024-06-07 16:19:45.243959] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.655 [2024-06-07 16:19:45.243989] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:11:18.655 [2024-06-07 16:19:45.244018] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.655 [2024-06-07 16:19:45.244047] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.655 [2024-06-07 16:19:45.244075] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.655 [2024-06-07 16:19:45.244106] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.655 [2024-06-07 16:19:45.244136] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.655 [2024-06-07 16:19:45.244166] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.655 [2024-06-07 16:19:45.244194] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.655 [2024-06-07 16:19:45.244222] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.655 [2024-06-07 16:19:45.244251] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.655 [2024-06-07 16:19:45.244279] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.655 [2024-06-07 16:19:45.244306] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.655 [2024-06-07 16:19:45.244336] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.655 [2024-06-07 16:19:45.244361] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.655 [2024-06-07 16:19:45.244390] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.655 [2024-06-07 16:19:45.244421] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.655 [2024-06-07 16:19:45.244452] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.655 [2024-06-07 16:19:45.244482] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.655 [2024-06-07 16:19:45.244516] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.655 [2024-06-07 16:19:45.244544] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.655 [2024-06-07 16:19:45.244574] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.655 [2024-06-07 16:19:45.244604] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.655 [2024-06-07 16:19:45.244635] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.655 [2024-06-07 16:19:45.244664] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.655 [2024-06-07 16:19:45.244698] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.655 [2024-06-07 16:19:45.244727] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.655 [2024-06-07 16:19:45.244755] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.655 [2024-06-07 16:19:45.244783] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.655 [2024-06-07 16:19:45.244811] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.655 [2024-06-07 16:19:45.244844] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:11:18.655 [2024-06-07 16:19:45.244878] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.655 [2024-06-07 16:19:45.244912] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.655 [2024-06-07 16:19:45.244946] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.655 [2024-06-07 16:19:45.244983] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.655 [2024-06-07 16:19:45.245022] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.655 [2024-06-07 16:19:45.245050] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.655 [2024-06-07 16:19:45.245074] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.655 [2024-06-07 16:19:45.245101] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.655 [2024-06-07 16:19:45.245128] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.655 [2024-06-07 16:19:45.245159] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.655 [2024-06-07 16:19:45.245189] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.655 [2024-06-07 16:19:45.245223] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.655 [2024-06-07 16:19:45.245258] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.655 [2024-06-07 16:19:45.245286] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.655 [2024-06-07 
16:19:45.245313] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.655 [2024-06-07 16:19:45.245340] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.655 [2024-06-07 16:19:45.245686] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.655 [2024-06-07 16:19:45.245719] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.655 [2024-06-07 16:19:45.245747] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.655 [2024-06-07 16:19:45.245776] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.655 [2024-06-07 16:19:45.245818] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.655 [2024-06-07 16:19:45.245846] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.655 [2024-06-07 16:19:45.245880] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.655 [2024-06-07 16:19:45.245907] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.655 [2024-06-07 16:19:45.245937] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.655 [2024-06-07 16:19:45.245968] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.655 [2024-06-07 16:19:45.245998] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.655 [2024-06-07 16:19:45.246026] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.655 [2024-06-07 16:19:45.246054] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.655 [2024-06-07 16:19:45.246084] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.655 [2024-06-07 16:19:45.246110] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.655 [2024-06-07 16:19:45.246137] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.655 [2024-06-07 16:19:45.246165] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.655 [2024-06-07 16:19:45.246196] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.655 [2024-06-07 16:19:45.246228] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.655 [2024-06-07 16:19:45.246259] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.655 [2024-06-07 16:19:45.246287] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.655 [2024-06-07 16:19:45.246315] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.655 [2024-06-07 16:19:45.246347] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.655 [2024-06-07 16:19:45.246376] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.655 [2024-06-07 16:19:45.246407] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.655 [2024-06-07 16:19:45.246434] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.655 [2024-06-07 16:19:45.246464] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.655 
[2024-06-07 16:19:45.246494] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.655 [2024-06-07 16:19:45.246521] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.655 [2024-06-07 16:19:45.246550] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.655 [2024-06-07 16:19:45.246591] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.655 [2024-06-07 16:19:45.246621] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.655 [2024-06-07 16:19:45.246656] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.655 [2024-06-07 16:19:45.246684] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.655 [2024-06-07 16:19:45.246741] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.655 [2024-06-07 16:19:45.246772] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.655 [2024-06-07 16:19:45.246807] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.655 [2024-06-07 16:19:45.246837] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.655 [2024-06-07 16:19:45.246866] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.655 [2024-06-07 16:19:45.246889] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.655 [2024-06-07 16:19:45.246919] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.655 [2024-06-07 16:19:45.246949] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.656 [2024-06-07 16:19:45.246978] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.656 [2024-06-07 16:19:45.247007] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.656 [2024-06-07 16:19:45.247034] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.656 [2024-06-07 16:19:45.247061] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.656 [2024-06-07 16:19:45.247088] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.656 [2024-06-07 16:19:45.247117] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.656 [2024-06-07 16:19:45.247142] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.656 [2024-06-07 16:19:45.247170] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.656 [2024-06-07 16:19:45.247199] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.656 [2024-06-07 16:19:45.247227] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.656 [2024-06-07 16:19:45.247255] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.656 [2024-06-07 16:19:45.247284] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.656 [2024-06-07 16:19:45.247314] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.656 [2024-06-07 16:19:45.247346] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:11:18.656 [2024-06-07 16:19:45.247374] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.656 [2024-06-07 16:19:45.247405] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.656 [2024-06-07 16:19:45.247432] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.656 [2024-06-07 16:19:45.247463] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.656 [2024-06-07 16:19:45.247494] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.656 [2024-06-07 16:19:45.247519] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.656 [2024-06-07 16:19:45.247550] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.656 [2024-06-07 16:19:45.247579] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.656 [2024-06-07 16:19:45.247732] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.656 [2024-06-07 16:19:45.247758] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.656 [2024-06-07 16:19:45.247789] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.656 [2024-06-07 16:19:45.247817] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.656 [2024-06-07 16:19:45.247847] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.656 [2024-06-07 16:19:45.247875] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.656 [2024-06-07 16:19:45.247902] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.656 [2024-06-07 16:19:45.247929] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.656 [2024-06-07 16:19:45.247957] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.656 [2024-06-07 16:19:45.247980] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.656 [2024-06-07 16:19:45.248013] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.656 [2024-06-07 16:19:45.248044] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.656 [2024-06-07 16:19:45.248067] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.656 [2024-06-07 16:19:45.248096] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.656 [2024-06-07 16:19:45.248127] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.656 [2024-06-07 16:19:45.248157] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.656 [2024-06-07 16:19:45.248541] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.656 [2024-06-07 16:19:45.248568] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.656 [2024-06-07 16:19:45.248597] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.656 [2024-06-07 16:19:45.248630] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.656 [2024-06-07 16:19:45.248659] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:11:18.656 [2024-06-07 16:19:45.248691] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.656 [2024-06-07 16:19:45.248721] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.656 [2024-06-07 16:19:45.248750] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.656 [2024-06-07 16:19:45.248782] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.656 [2024-06-07 16:19:45.248806] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.656 [2024-06-07 16:19:45.248830] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.656 [2024-06-07 16:19:45.248854] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.656 [2024-06-07 16:19:45.248881] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.656 [2024-06-07 16:19:45.248910] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.656 [2024-06-07 16:19:45.248942] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.656 [2024-06-07 16:19:45.248976] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.656 [2024-06-07 16:19:45.249005] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.656 [2024-06-07 16:19:45.249030] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.656 [2024-06-07 16:19:45.249061] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.656 [2024-06-07 
16:19:45.249086] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.656 [2024-06-07 16:19:45.249109] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.656 [2024-06-07 16:19:45.249132] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.656 [2024-06-07 16:19:45.249155] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.656 [2024-06-07 16:19:45.249178] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.656 [2024-06-07 16:19:45.249202] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.656 [2024-06-07 16:19:45.249224] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.656 [2024-06-07 16:19:45.249247] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.656 [2024-06-07 16:19:45.249270] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.656 [2024-06-07 16:19:45.249293] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.656 [2024-06-07 16:19:45.249317] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.656 [2024-06-07 16:19:45.249339] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.656 [2024-06-07 16:19:45.249362] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.656 [2024-06-07 16:19:45.249385] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.656 [2024-06-07 16:19:45.249411] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.656 [2024-06-07 16:19:45.249436] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.656 [2024-06-07 16:19:45.249459] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.656 [2024-06-07 16:19:45.249482] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.656 [2024-06-07 16:19:45.249506] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.656 [2024-06-07 16:19:45.249530] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.656 [2024-06-07 16:19:45.249554] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.656 [2024-06-07 16:19:45.249577] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.656 [2024-06-07 16:19:45.249599] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.656 [2024-06-07 16:19:45.249622] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.656 [2024-06-07 16:19:45.249645] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.656 [2024-06-07 16:19:45.249668] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.656 [2024-06-07 16:19:45.249692] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.656 [2024-06-07 16:19:45.249716] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.656 [2024-06-07 16:19:45.249738] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.656 
[2024-06-07 16:19:45.249762] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.656 [2024-06-07 16:19:45.249785] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.656 [2024-06-07 16:19:45.249808] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.656 [2024-06-07 16:19:45.249831] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.656 [2024-06-07 16:19:45.249855] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.656 [2024-06-07 16:19:45.249878] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.656 [2024-06-07 16:19:45.249900] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.656 [2024-06-07 16:19:45.249923] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.656 [2024-06-07 16:19:45.249947] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.656 [2024-06-07 16:19:45.249971] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.656 [2024-06-07 16:19:45.249994] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.656 [2024-06-07 16:19:45.250016] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.656 [2024-06-07 16:19:45.250038] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.656 [2024-06-07 16:19:45.250062] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.656 [2024-06-07 16:19:45.250085] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.656 [2024-06-07 16:19:45.250110] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 [... same message repeated through 16:19:45.260462 ...] 00:11:18.659
[2024-06-07 16:19:45.260489] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.659 [2024-06-07 16:19:45.260516] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.659 [2024-06-07 16:19:45.260544] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.659 [2024-06-07 16:19:45.260571] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.659 [2024-06-07 16:19:45.260597] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.659 [2024-06-07 16:19:45.260623] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.659 [2024-06-07 16:19:45.260651] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.659 [2024-06-07 16:19:45.260681] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.659 [2024-06-07 16:19:45.260709] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.659 [2024-06-07 16:19:45.260736] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.659 [2024-06-07 16:19:45.260765] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.659 [2024-06-07 16:19:45.260794] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.659 [2024-06-07 16:19:45.260822] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.659 [2024-06-07 16:19:45.260849] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.659 [2024-06-07 16:19:45.260874] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.659 [2024-06-07 16:19:45.261023] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.659 [2024-06-07 16:19:45.261055] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.659 [2024-06-07 16:19:45.261083] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.659 [2024-06-07 16:19:45.261114] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.659 [2024-06-07 16:19:45.261142] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.659 [2024-06-07 16:19:45.261174] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.659 [2024-06-07 16:19:45.261203] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.659 [2024-06-07 16:19:45.261236] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.659 [2024-06-07 16:19:45.261265] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.659 [2024-06-07 16:19:45.261296] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.659 [2024-06-07 16:19:45.261322] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.660 [2024-06-07 16:19:45.261350] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.660 [2024-06-07 16:19:45.261378] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.660 [2024-06-07 16:19:45.261410] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:11:18.660 [2024-06-07 16:19:45.261441] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.660 [2024-06-07 16:19:45.261468] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.660 [2024-06-07 16:19:45.261849] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.660 [2024-06-07 16:19:45.261874] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.660 [2024-06-07 16:19:45.261898] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.660 [2024-06-07 16:19:45.261921] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.660 [2024-06-07 16:19:45.261948] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.660 [2024-06-07 16:19:45.261977] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.660 [2024-06-07 16:19:45.262010] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.660 [2024-06-07 16:19:45.262037] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.660 [2024-06-07 16:19:45.262069] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.660 [2024-06-07 16:19:45.262101] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.660 [2024-06-07 16:19:45.262133] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.660 [2024-06-07 16:19:45.262158] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.660 [2024-06-07 16:19:45.262181] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.660 [2024-06-07 16:19:45.262206] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.660 [2024-06-07 16:19:45.262239] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.660 [2024-06-07 16:19:45.262268] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.660 [2024-06-07 16:19:45.262298] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.660 [2024-06-07 16:19:45.262328] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.660 [2024-06-07 16:19:45.262357] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.660 [2024-06-07 16:19:45.262387] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.660 [2024-06-07 16:19:45.262418] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.660 [2024-06-07 16:19:45.262441] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.660 [2024-06-07 16:19:45.262464] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.660 [2024-06-07 16:19:45.262486] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.660 [2024-06-07 16:19:45.262509] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.660 [2024-06-07 16:19:45.262533] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.660 [2024-06-07 16:19:45.262556] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:11:18.660 [2024-06-07 16:19:45.262579] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.660 [2024-06-07 16:19:45.262602] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.660 [2024-06-07 16:19:45.262625] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.660 [2024-06-07 16:19:45.262651] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.660 [2024-06-07 16:19:45.262675] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.660 [2024-06-07 16:19:45.262698] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.660 [2024-06-07 16:19:45.262721] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.660 [2024-06-07 16:19:45.262745] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.660 [2024-06-07 16:19:45.262768] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.660 [2024-06-07 16:19:45.262790] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.660 [2024-06-07 16:19:45.262814] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.660 [2024-06-07 16:19:45.262838] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.660 [2024-06-07 16:19:45.262861] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.660 [2024-06-07 16:19:45.262884] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.660 [2024-06-07 
16:19:45.262908] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.660 [2024-06-07 16:19:45.262932] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.660 [2024-06-07 16:19:45.262955] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.660 [2024-06-07 16:19:45.262979] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.660 [2024-06-07 16:19:45.263002] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.660 [2024-06-07 16:19:45.263025] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.660 [2024-06-07 16:19:45.263049] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.660 [2024-06-07 16:19:45.263072] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.660 [2024-06-07 16:19:45.263096] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.660 [2024-06-07 16:19:45.263119] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.660 [2024-06-07 16:19:45.263143] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.660 [2024-06-07 16:19:45.263167] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.660 [2024-06-07 16:19:45.263190] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.660 [2024-06-07 16:19:45.263214] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.660 [2024-06-07 16:19:45.263237] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.660 [2024-06-07 16:19:45.263261] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.660 [2024-06-07 16:19:45.263285] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.660 [2024-06-07 16:19:45.263308] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.660 [2024-06-07 16:19:45.263332] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.660 [2024-06-07 16:19:45.263354] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.660 [2024-06-07 16:19:45.263378] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.660 [2024-06-07 16:19:45.263404] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.660 [2024-06-07 16:19:45.263430] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.660 [2024-06-07 16:19:45.263865] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.660 [2024-06-07 16:19:45.263915] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.660 [2024-06-07 16:19:45.263945] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.660 [2024-06-07 16:19:45.263977] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.660 [2024-06-07 16:19:45.264008] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.660 [2024-06-07 16:19:45.264037] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.660 
[2024-06-07 16:19:45.264066] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.660 [2024-06-07 16:19:45.264095] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.660 [2024-06-07 16:19:45.264127] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.660 [2024-06-07 16:19:45.264154] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.660 [2024-06-07 16:19:45.264193] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.660 [2024-06-07 16:19:45.264220] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.660 [2024-06-07 16:19:45.264250] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.660 [2024-06-07 16:19:45.264278] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.660 [2024-06-07 16:19:45.264309] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.660 [2024-06-07 16:19:45.264339] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.660 [2024-06-07 16:19:45.264366] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.660 [2024-06-07 16:19:45.264394] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.660 [2024-06-07 16:19:45.264423] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.660 [2024-06-07 16:19:45.264450] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.660 [2024-06-07 16:19:45.264479] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.660 [2024-06-07 16:19:45.264507] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.660 [2024-06-07 16:19:45.264542] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.660 [2024-06-07 16:19:45.264566] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.660 [2024-06-07 16:19:45.264597] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.660 [2024-06-07 16:19:45.264625] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.660 [2024-06-07 16:19:45.264654] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.660 [2024-06-07 16:19:45.264682] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.660 [2024-06-07 16:19:45.264710] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.660 [2024-06-07 16:19:45.264737] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.660 [2024-06-07 16:19:45.264767] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.660 [2024-06-07 16:19:45.264797] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.660 [2024-06-07 16:19:45.264829] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.660 [2024-06-07 16:19:45.264859] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.660 [2024-06-07 16:19:45.264887] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:11:18.660 [2024-06-07 16:19:45.264918] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.660 [2024-06-07 16:19:45.264952] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.660 [2024-06-07 16:19:45.264982] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.661 [2024-06-07 16:19:45.265010] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.661 [2024-06-07 16:19:45.265042] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.661 [2024-06-07 16:19:45.265072] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.661 [2024-06-07 16:19:45.265105] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.661 [2024-06-07 16:19:45.265134] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.661 [2024-06-07 16:19:45.265164] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.661 [2024-06-07 16:19:45.265194] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.661 [2024-06-07 16:19:45.265223] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.661 [2024-06-07 16:19:45.265252] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.661 [2024-06-07 16:19:45.265583] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.661 [2024-06-07 16:19:45.265616] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.661 [2024-06-07 16:19:45.265645] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.661 [2024-06-07 16:19:45.265678] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.661 [2024-06-07 16:19:45.265707] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.661 [2024-06-07 16:19:45.265738] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.661 [2024-06-07 16:19:45.265767] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.661 [2024-06-07 16:19:45.265795] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.661 [2024-06-07 16:19:45.265821] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.661 [2024-06-07 16:19:45.265849] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.661 [2024-06-07 16:19:45.265878] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.661 [2024-06-07 16:19:45.265906] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.661 [2024-06-07 16:19:45.265934] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.661 [2024-06-07 16:19:45.265962] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.661 [2024-06-07 16:19:45.265992] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.661 [2024-06-07 16:19:45.266019] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.661 [2024-06-07 16:19:45.266049] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:11:18.661 [2024-06-07 16:19:45.266079] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.661 [2024-06-07 16:19:45.266112] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.661 [2024-06-07 16:19:45.266142] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.661 [2024-06-07 16:19:45.266173] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.661 [2024-06-07 16:19:45.266203] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.661 [2024-06-07 16:19:45.266232] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.661 [2024-06-07 16:19:45.266262] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.661 [2024-06-07 16:19:45.266291] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.661 [2024-06-07 16:19:45.266322] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.661 [2024-06-07 16:19:45.266352] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.661 [2024-06-07 16:19:45.266383] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.661 [2024-06-07 16:19:45.266415] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.661 [2024-06-07 16:19:45.266441] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.661 [2024-06-07 16:19:45.266468] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.661 [2024-06-07 
16:19:45.266494] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.661 [2024-06-07 16:19:45.266528] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.661 [2024-06-07 16:19:45.266561] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.661 [2024-06-07 16:19:45.266588] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.661 [2024-06-07 16:19:45.266617] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.661 [2024-06-07 16:19:45.266645] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.661 [2024-06-07 16:19:45.266673] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.661 [2024-06-07 16:19:45.266701] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.661 [2024-06-07 16:19:45.266728] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.661 [2024-06-07 16:19:45.266752] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.661 [2024-06-07 16:19:45.266784] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.661 [2024-06-07 16:19:45.266810] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.661 [2024-06-07 16:19:45.266838] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.661 [2024-06-07 16:19:45.266869] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.661 [2024-06-07 16:19:45.266903] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.661 [2024-06-07 16:19:45.266937] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.661 
Message suppressed 999 times: [2024-06-07 16:19:45.267737] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.661 
Read completed with error (sct=0, sc=15) 00:11:18.661 [2024-06-07 
16:19:45.277580] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.664 [2024-06-07 16:19:45.277615] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.664 [2024-06-07 16:19:45.277645] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.664 [2024-06-07 16:19:45.277674] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.664 [2024-06-07 16:19:45.277705] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.664 [2024-06-07 16:19:45.277736] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.664 [2024-06-07 16:19:45.277768] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.664 [2024-06-07 16:19:45.277797] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.664 [2024-06-07 16:19:45.277831] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.664 [2024-06-07 16:19:45.277857] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.664 [2024-06-07 16:19:45.277885] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.664 [2024-06-07 16:19:45.277914] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.664 [2024-06-07 16:19:45.277944] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.664 [2024-06-07 16:19:45.277971] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.664 [2024-06-07 16:19:45.278002] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.664 [2024-06-07 16:19:45.278033] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.664 [2024-06-07 16:19:45.278061] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.664 [2024-06-07 16:19:45.278091] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.664 [2024-06-07 16:19:45.278120] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.664 [2024-06-07 16:19:45.278149] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.664 [2024-06-07 16:19:45.278176] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.664 [2024-06-07 16:19:45.278209] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.664 [2024-06-07 16:19:45.278239] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.664 [2024-06-07 16:19:45.278274] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.664 [2024-06-07 16:19:45.278313] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.664 [2024-06-07 16:19:45.278344] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.664 [2024-06-07 16:19:45.278378] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.664 [2024-06-07 16:19:45.278697] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.664 [2024-06-07 16:19:45.278727] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.664 
[2024-06-07 16:19:45.278758] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.664 [2024-06-07 16:19:45.278786] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.664 [2024-06-07 16:19:45.278816] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.664 [2024-06-07 16:19:45.278845] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.664 [2024-06-07 16:19:45.278876] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.664 [2024-06-07 16:19:45.278904] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.664 [2024-06-07 16:19:45.278933] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.664 [2024-06-07 16:19:45.278959] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.664 [2024-06-07 16:19:45.278990] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.664 [2024-06-07 16:19:45.279020] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.664 [2024-06-07 16:19:45.279050] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.664 [2024-06-07 16:19:45.279080] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.664 [2024-06-07 16:19:45.279110] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.664 [2024-06-07 16:19:45.279140] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.664 [2024-06-07 16:19:45.279166] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.664 [2024-06-07 16:19:45.279196] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.664 [2024-06-07 16:19:45.279223] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.664 [2024-06-07 16:19:45.279252] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.664 [2024-06-07 16:19:45.279288] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.664 [2024-06-07 16:19:45.279319] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.664 [2024-06-07 16:19:45.279347] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.664 [2024-06-07 16:19:45.279374] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.664 [2024-06-07 16:19:45.279407] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.664 [2024-06-07 16:19:45.279434] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.665 [2024-06-07 16:19:45.279463] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.665 [2024-06-07 16:19:45.279491] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.665 [2024-06-07 16:19:45.279521] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.665 [2024-06-07 16:19:45.279553] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.665 [2024-06-07 16:19:45.279580] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:11:18.665 [2024-06-07 16:19:45.279610] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.665 [2024-06-07 16:19:45.279638] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.665 [2024-06-07 16:19:45.279669] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.665 [2024-06-07 16:19:45.279697] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.665 [2024-06-07 16:19:45.279730] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.665 [2024-06-07 16:19:45.279761] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.665 [2024-06-07 16:19:45.279791] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.665 [2024-06-07 16:19:45.279820] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.665 [2024-06-07 16:19:45.279850] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.665 [2024-06-07 16:19:45.279879] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.665 [2024-06-07 16:19:45.279908] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.665 [2024-06-07 16:19:45.279936] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.665 [2024-06-07 16:19:45.279963] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.665 [2024-06-07 16:19:45.279989] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.665 [2024-06-07 16:19:45.280018] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.665 [2024-06-07 16:19:45.280046] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.665 [2024-06-07 16:19:45.280074] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.665 [2024-06-07 16:19:45.280102] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.665 [2024-06-07 16:19:45.280131] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.665 [2024-06-07 16:19:45.280159] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.665 [2024-06-07 16:19:45.280185] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.665 [2024-06-07 16:19:45.280214] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.665 [2024-06-07 16:19:45.280240] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.665 [2024-06-07 16:19:45.280273] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.665 [2024-06-07 16:19:45.280304] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.665 [2024-06-07 16:19:45.280342] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.665 [2024-06-07 16:19:45.280374] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.665 [2024-06-07 16:19:45.280414] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.665 [2024-06-07 16:19:45.280454] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:11:18.665 [2024-06-07 16:19:45.280488] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.665 [2024-06-07 16:19:45.280517] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.665 [2024-06-07 16:19:45.280547] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.665 [2024-06-07 16:19:45.280574] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.665 [2024-06-07 16:19:45.280711] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.665 [2024-06-07 16:19:45.280739] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.665 [2024-06-07 16:19:45.280770] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.665 [2024-06-07 16:19:45.280799] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.665 [2024-06-07 16:19:45.280830] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.665 [2024-06-07 16:19:45.280857] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.665 [2024-06-07 16:19:45.280886] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.665 [2024-06-07 16:19:45.280911] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.665 [2024-06-07 16:19:45.280943] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.665 [2024-06-07 16:19:45.280972] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.665 [2024-06-07 
16:19:45.281001] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.665 [2024-06-07 16:19:45.281031] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.665 [2024-06-07 16:19:45.281058] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.665 [2024-06-07 16:19:45.281086] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.665 [2024-06-07 16:19:45.281116] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.665 [2024-06-07 16:19:45.281147] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.665 [2024-06-07 16:19:45.281192] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.665 [2024-06-07 16:19:45.281222] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.665 [2024-06-07 16:19:45.281252] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.665 [2024-06-07 16:19:45.281281] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.665 [2024-06-07 16:19:45.281309] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.665 [2024-06-07 16:19:45.281337] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.665 [2024-06-07 16:19:45.281365] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.665 [2024-06-07 16:19:45.281786] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.665 [2024-06-07 16:19:45.281811] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.665 [2024-06-07 16:19:45.281835] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.665 [2024-06-07 16:19:45.281858] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.665 [2024-06-07 16:19:45.281883] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.665 [2024-06-07 16:19:45.281907] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.665 [2024-06-07 16:19:45.281931] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.665 [2024-06-07 16:19:45.281958] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.665 [2024-06-07 16:19:45.281990] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.665 [2024-06-07 16:19:45.282021] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.665 [2024-06-07 16:19:45.282051] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.665 [2024-06-07 16:19:45.282080] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.665 [2024-06-07 16:19:45.282106] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.665 [2024-06-07 16:19:45.282136] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.665 [2024-06-07 16:19:45.282168] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.665 [2024-06-07 16:19:45.282199] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.665 
[2024-06-07 16:19:45.282228] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.665 [2024-06-07 16:19:45.282255] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.665 [2024-06-07 16:19:45.282284] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.666 [2024-06-07 16:19:45.282313] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.666 [2024-06-07 16:19:45.282337] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.666 [2024-06-07 16:19:45.282361] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.666 [2024-06-07 16:19:45.282385] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.666 [2024-06-07 16:19:45.282414] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.666 [2024-06-07 16:19:45.282438] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.666 [2024-06-07 16:19:45.282461] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.666 [2024-06-07 16:19:45.282484] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.666 [2024-06-07 16:19:45.282507] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.666 [2024-06-07 16:19:45.282530] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.666 [2024-06-07 16:19:45.282553] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.666 [2024-06-07 16:19:45.282577] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.666 [2024-06-07 16:19:45.282601] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.666 [2024-06-07 16:19:45.282623] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.666 [2024-06-07 16:19:45.282647] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.666 [2024-06-07 16:19:45.282670] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.666 [2024-06-07 16:19:45.282695] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.666 [2024-06-07 16:19:45.282718] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.666 [2024-06-07 16:19:45.282747] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.666 [2024-06-07 16:19:45.282776] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.666 [2024-06-07 16:19:45.282807] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.666 [2024-06-07 16:19:45.282838] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.666 [2024-06-07 16:19:45.282863] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.666 [2024-06-07 16:19:45.282892] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.666 [2024-06-07 16:19:45.282919] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.666 [2024-06-07 16:19:45.282950] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:11:18.666 [2024-06-07 16:19:45.282978] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.666 [2024-06-07 16:19:45.283010] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.666 [2024-06-07 16:19:45.283040] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.666 [2024-06-07 16:19:45.283072] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.666 [2024-06-07 16:19:45.283099] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.666 [2024-06-07 16:19:45.283129] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.666 [2024-06-07 16:19:45.283159] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.666 [2024-06-07 16:19:45.283189] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.666 [2024-06-07 16:19:45.283220] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.666 [2024-06-07 16:19:45.283249] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.666 [2024-06-07 16:19:45.283279] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.666 [2024-06-07 16:19:45.283308] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.666 [2024-06-07 16:19:45.283523] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.666 [2024-06-07 16:19:45.283555] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.666 [2024-06-07 16:19:45.283584] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.666 [2024-06-07 16:19:45.283611] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.666 [2024-06-07 16:19:45.283640] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.666 [2024-06-07 16:19:45.283667] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.666 [2024-06-07 16:19:45.283695] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.666 [2024-06-07 16:19:45.283729] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.666 [2024-06-07 16:19:45.283762] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.666 [2024-06-07 16:19:45.283799] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.666 [2024-06-07 16:19:45.283837] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.666 [2024-06-07 16:19:45.283865] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.666 [2024-06-07 16:19:45.283894] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.666 [2024-06-07 16:19:45.283923] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.666 [2024-06-07 16:19:45.283951] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.666 [2024-06-07 16:19:45.283980] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.666 [2024-06-07 16:19:45.284008] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:11:18.666 [2024-06-07 16:19:45.284038] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.669 [2024-06-07 16:19:45.294708] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:11:18.669 [2024-06-07 16:19:45.294758] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.669 [2024-06-07 16:19:45.294786] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.669 [2024-06-07 16:19:45.294815] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.669 [2024-06-07 16:19:45.294843] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.669 [2024-06-07 16:19:45.294872] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.669 [2024-06-07 16:19:45.294900] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.669 [2024-06-07 16:19:45.294929] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.669 [2024-06-07 16:19:45.294958] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.669 [2024-06-07 16:19:45.294990] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.669 [2024-06-07 16:19:45.295020] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.669 [2024-06-07 16:19:45.295046] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.669 [2024-06-07 16:19:45.295075] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.669 [2024-06-07 16:19:45.295103] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.669 [2024-06-07 16:19:45.295135] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.669 [2024-06-07 
16:19:45.295164] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.669 [2024-06-07 16:19:45.295295] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.669 [2024-06-07 16:19:45.295319] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.669 [2024-06-07 16:19:45.295351] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.669 [2024-06-07 16:19:45.295382] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.669 [2024-06-07 16:19:45.295414] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.669 [2024-06-07 16:19:45.295443] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.669 [2024-06-07 16:19:45.295471] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.669 [2024-06-07 16:19:45.295499] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.669 [2024-06-07 16:19:45.295532] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.669 [2024-06-07 16:19:45.295565] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.669 [2024-06-07 16:19:45.295593] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.669 [2024-06-07 16:19:45.295622] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.669 [2024-06-07 16:19:45.295650] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.669 [2024-06-07 16:19:45.295678] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.669 [2024-06-07 16:19:45.295707] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.669 [2024-06-07 16:19:45.295736] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.669 [2024-06-07 16:19:45.295765] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.669 [2024-06-07 16:19:45.296265] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.669 [2024-06-07 16:19:45.296298] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.669 [2024-06-07 16:19:45.296327] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.669 [2024-06-07 16:19:45.296355] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.670 [2024-06-07 16:19:45.296384] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.670 [2024-06-07 16:19:45.296420] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.670 [2024-06-07 16:19:45.296452] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.670 [2024-06-07 16:19:45.296482] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.670 [2024-06-07 16:19:45.296510] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.670 [2024-06-07 16:19:45.296539] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.670 [2024-06-07 16:19:45.296586] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.670 
[2024-06-07 16:19:45.296613] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.670 [2024-06-07 16:19:45.296642] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.670 [2024-06-07 16:19:45.296669] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.670 [2024-06-07 16:19:45.296696] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.670 [2024-06-07 16:19:45.296728] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.670 [2024-06-07 16:19:45.296756] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.670 [2024-06-07 16:19:45.296791] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.670 [2024-06-07 16:19:45.296818] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.670 [2024-06-07 16:19:45.296849] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.670 [2024-06-07 16:19:45.296878] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.670 [2024-06-07 16:19:45.296910] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.670 [2024-06-07 16:19:45.296939] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.670 [2024-06-07 16:19:45.296999] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.670 [2024-06-07 16:19:45.297028] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.670 [2024-06-07 16:19:45.297056] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.670 [2024-06-07 16:19:45.297085] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.670 [2024-06-07 16:19:45.297118] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.670 [2024-06-07 16:19:45.297148] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.670 [2024-06-07 16:19:45.297177] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.670 [2024-06-07 16:19:45.297204] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.670 [2024-06-07 16:19:45.297235] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.670 [2024-06-07 16:19:45.297262] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.670 [2024-06-07 16:19:45.297291] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.670 [2024-06-07 16:19:45.297320] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.670 [2024-06-07 16:19:45.297348] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.670 [2024-06-07 16:19:45.297375] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.670 [2024-06-07 16:19:45.297406] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.670 [2024-06-07 16:19:45.297439] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.670 [2024-06-07 16:19:45.297467] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:11:18.670 [2024-06-07 16:19:45.297495] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.670 [2024-06-07 16:19:45.297523] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.670 [2024-06-07 16:19:45.297551] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.670 [2024-06-07 16:19:45.297577] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.670 [2024-06-07 16:19:45.297600] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.670 [2024-06-07 16:19:45.297634] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.670 [2024-06-07 16:19:45.297664] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.670 [2024-06-07 16:19:45.297691] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.670 [2024-06-07 16:19:45.297720] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.670 [2024-06-07 16:19:45.297749] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.670 [2024-06-07 16:19:45.297782] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.670 [2024-06-07 16:19:45.297809] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.670 [2024-06-07 16:19:45.297838] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.670 [2024-06-07 16:19:45.297864] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.670 [2024-06-07 16:19:45.297891] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.670 [2024-06-07 16:19:45.297916] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.670 [2024-06-07 16:19:45.297944] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.670 [2024-06-07 16:19:45.297975] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.670 [2024-06-07 16:19:45.298003] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.670 [2024-06-07 16:19:45.298033] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.670 [2024-06-07 16:19:45.298062] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.670 [2024-06-07 16:19:45.298092] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.670 [2024-06-07 16:19:45.298123] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.670 [2024-06-07 16:19:45.298280] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.670 [2024-06-07 16:19:45.298312] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.670 [2024-06-07 16:19:45.298340] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.670 [2024-06-07 16:19:45.298369] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.670 [2024-06-07 16:19:45.298400] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.670 [2024-06-07 16:19:45.298433] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:11:18.670 [2024-06-07 16:19:45.298462] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.670 [2024-06-07 16:19:45.298494] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.670 [2024-06-07 16:19:45.298521] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.670 [2024-06-07 16:19:45.298549] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.670 [2024-06-07 16:19:45.298577] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.670 [2024-06-07 16:19:45.298607] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.670 [2024-06-07 16:19:45.298638] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.670 [2024-06-07 16:19:45.298668] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.670 [2024-06-07 16:19:45.298699] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.670 [2024-06-07 16:19:45.298728] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.670 [2024-06-07 16:19:45.298758] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.670 [2024-06-07 16:19:45.298787] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.670 [2024-06-07 16:19:45.298816] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.670 [2024-06-07 16:19:45.298846] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.670 [2024-06-07 
16:19:45.298876] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.670 [2024-06-07 16:19:45.298905] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.670 [2024-06-07 16:19:45.298935] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.670 [2024-06-07 16:19:45.298966] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.671 [2024-06-07 16:19:45.298997] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.671 [2024-06-07 16:19:45.299023] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.671 [2024-06-07 16:19:45.299049] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.671 [2024-06-07 16:19:45.299081] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.671 [2024-06-07 16:19:45.299109] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.671 [2024-06-07 16:19:45.299154] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.671 [2024-06-07 16:19:45.299181] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.671 [2024-06-07 16:19:45.299208] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.671 [2024-06-07 16:19:45.299239] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.671 [2024-06-07 16:19:45.299270] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.671 [2024-06-07 16:19:45.299295] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.671 [2024-06-07 16:19:45.299321] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.671 [2024-06-07 16:19:45.299347] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.671 [2024-06-07 16:19:45.299375] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.671 [2024-06-07 16:19:45.299411] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.671 [2024-06-07 16:19:45.299438] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.671 [2024-06-07 16:19:45.299468] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.671 [2024-06-07 16:19:45.299496] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.671 [2024-06-07 16:19:45.299524] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.671 [2024-06-07 16:19:45.299552] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.671 [2024-06-07 16:19:45.299582] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.671 [2024-06-07 16:19:45.299608] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.671 [2024-06-07 16:19:45.299633] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.671 [2024-06-07 16:19:45.299658] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.671 [2024-06-07 16:19:45.299682] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.671 
[2024-06-07 16:19:45.299713] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.671 [2024-06-07 16:19:45.299742] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.671 [2024-06-07 16:19:45.299771] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.671 [2024-06-07 16:19:45.299799] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.671 [2024-06-07 16:19:45.299828] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.671 [2024-06-07 16:19:45.299858] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.671 [2024-06-07 16:19:45.299889] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.671 [2024-06-07 16:19:45.299919] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.671 [2024-06-07 16:19:45.299945] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.671 [2024-06-07 16:19:45.299975] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.671 [2024-06-07 16:19:45.300004] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.671 [2024-06-07 16:19:45.300035] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.671 [2024-06-07 16:19:45.300062] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.671 [2024-06-07 16:19:45.300121] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.671 [2024-06-07 16:19:45.300151] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.671 [2024-06-07 16:19:45.300519] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.671 [2024-06-07 16:19:45.300554] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.671 [2024-06-07 16:19:45.300582] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.671 [2024-06-07 16:19:45.300612] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.671 [2024-06-07 16:19:45.300642] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.671 [2024-06-07 16:19:45.300671] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.671 [2024-06-07 16:19:45.300703] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.671 [2024-06-07 16:19:45.300734] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.671 [2024-06-07 16:19:45.300757] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.671 [2024-06-07 16:19:45.300789] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.671 [2024-06-07 16:19:45.300819] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.671 [2024-06-07 16:19:45.300845] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.671 [2024-06-07 16:19:45.300873] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.671 [2024-06-07 16:19:45.300904] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:11:18.671 [2024-06-07 16:19:45.300930] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.671 [2024-06-07 16:19:45.300958] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.671 [2024-06-07 16:19:45.300985] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.671 [2024-06-07 16:19:45.301015] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.671 [2024-06-07 16:19:45.301045] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.671 [2024-06-07 16:19:45.301070] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.671 [2024-06-07 16:19:45.301098] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.671 [2024-06-07 16:19:45.301122] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.671 [2024-06-07 16:19:45.301153] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.671 [2024-06-07 16:19:45.301181] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.671 [2024-06-07 16:19:45.301209] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.671 [2024-06-07 16:19:45.301239] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.671 [2024-06-07 16:19:45.301267] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.671 [2024-06-07 16:19:45.301300] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.671 [2024-06-07 16:19:45.301329] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.671 [2024-06-07 16:19:45.301358] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[same ctrlr_bdev.c:309 error entry repeated verbatim, timestamps 16:19:45.301388 to 16:19:45.302347; repeats elided]
00:11:18.672 Message suppressed 999 times: Read completed with error (sct=0, sc=15)
[same ctrlr_bdev.c:309 error entry repeated verbatim, timestamps 16:19:45.302702 to 16:19:45.311439; repeats elided]
true 00:11:18.674 [2024-06-07 16:19:45.311474] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[same ctrlr_bdev.c:309 error entry repeated verbatim, timestamps 16:19:45.311503 to 16:19:45.312355; log truncated mid-entry]
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.674 [2024-06-07 16:19:45.312385] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.674 [2024-06-07 16:19:45.312415] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.674 [2024-06-07 16:19:45.312446] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.674 [2024-06-07 16:19:45.312473] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.674 [2024-06-07 16:19:45.312503] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.674 [2024-06-07 16:19:45.312532] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.674 [2024-06-07 16:19:45.312559] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.674 [2024-06-07 16:19:45.312588] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.674 [2024-06-07 16:19:45.312620] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.675 [2024-06-07 16:19:45.312647] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.675 [2024-06-07 16:19:45.312675] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.675 [2024-06-07 16:19:45.312703] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.675 [2024-06-07 16:19:45.312733] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.675 [2024-06-07 16:19:45.312763] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.675 
[2024-06-07 16:19:45.312792] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.675 [2024-06-07 16:19:45.312822] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.675 [2024-06-07 16:19:45.312850] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.675 [2024-06-07 16:19:45.312880] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.675 [2024-06-07 16:19:45.312908] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.675 [2024-06-07 16:19:45.312931] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.675 [2024-06-07 16:19:45.312958] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.675 [2024-06-07 16:19:45.312988] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.675 [2024-06-07 16:19:45.313012] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.675 [2024-06-07 16:19:45.313039] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.675 [2024-06-07 16:19:45.313066] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.675 [2024-06-07 16:19:45.313096] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.675 [2024-06-07 16:19:45.313123] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.675 [2024-06-07 16:19:45.313150] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.675 [2024-06-07 16:19:45.313179] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.675 [2024-06-07 16:19:45.313213] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.675 [2024-06-07 16:19:45.313245] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.675 [2024-06-07 16:19:45.313276] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.675 [2024-06-07 16:19:45.313305] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.675 [2024-06-07 16:19:45.313331] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.675 [2024-06-07 16:19:45.313366] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.675 [2024-06-07 16:19:45.313396] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.675 [2024-06-07 16:19:45.313431] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.675 [2024-06-07 16:19:45.313459] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.675 [2024-06-07 16:19:45.313491] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.675 [2024-06-07 16:19:45.313519] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.675 [2024-06-07 16:19:45.313572] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.675 [2024-06-07 16:19:45.313602] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.675 [2024-06-07 16:19:45.313635] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:11:18.675 [2024-06-07 16:19:45.313665] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.675 [2024-06-07 16:19:45.313704] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.675 [2024-06-07 16:19:45.313732] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.675 [2024-06-07 16:19:45.313773] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.675 [2024-06-07 16:19:45.313803] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.675 [2024-06-07 16:19:45.313835] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.675 [2024-06-07 16:19:45.313863] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.675 [2024-06-07 16:19:45.314320] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.675 [2024-06-07 16:19:45.314350] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.675 [2024-06-07 16:19:45.314380] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.675 [2024-06-07 16:19:45.314410] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.675 [2024-06-07 16:19:45.314439] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.675 [2024-06-07 16:19:45.314470] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.675 [2024-06-07 16:19:45.314499] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.675 [2024-06-07 16:19:45.314527] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.675 [2024-06-07 16:19:45.314553] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.675 [2024-06-07 16:19:45.314581] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.675 [2024-06-07 16:19:45.314609] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.675 [2024-06-07 16:19:45.314660] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.675 [2024-06-07 16:19:45.314690] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.675 [2024-06-07 16:19:45.314741] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.675 [2024-06-07 16:19:45.314771] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.675 [2024-06-07 16:19:45.314804] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.675 [2024-06-07 16:19:45.314832] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.675 [2024-06-07 16:19:45.314866] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.675 [2024-06-07 16:19:45.314897] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.675 [2024-06-07 16:19:45.314926] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.675 [2024-06-07 16:19:45.314956] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.675 [2024-06-07 16:19:45.314984] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:11:18.675 [2024-06-07 16:19:45.315014] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.675 [2024-06-07 16:19:45.315044] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.675 [2024-06-07 16:19:45.315072] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.675 [2024-06-07 16:19:45.315104] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.675 [2024-06-07 16:19:45.315133] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.675 [2024-06-07 16:19:45.315181] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.675 [2024-06-07 16:19:45.315209] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.675 [2024-06-07 16:19:45.315242] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.675 [2024-06-07 16:19:45.315271] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.675 [2024-06-07 16:19:45.315304] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.675 [2024-06-07 16:19:45.315331] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.675 [2024-06-07 16:19:45.315371] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.675 [2024-06-07 16:19:45.315400] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.675 [2024-06-07 16:19:45.315457] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.675 [2024-06-07 
16:19:45.315486] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.675 [2024-06-07 16:19:45.315515] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.675 [2024-06-07 16:19:45.315544] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.675 [2024-06-07 16:19:45.315570] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.675 [2024-06-07 16:19:45.315599] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.675 [2024-06-07 16:19:45.315631] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.675 [2024-06-07 16:19:45.315659] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.675 [2024-06-07 16:19:45.315687] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.675 [2024-06-07 16:19:45.315715] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.675 [2024-06-07 16:19:45.315742] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.675 [2024-06-07 16:19:45.315771] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.675 [2024-06-07 16:19:45.315799] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.675 [2024-06-07 16:19:45.315828] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.675 [2024-06-07 16:19:45.315853] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.675 [2024-06-07 16:19:45.315881] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.675 [2024-06-07 16:19:45.315907] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.675 [2024-06-07 16:19:45.315936] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.675 [2024-06-07 16:19:45.315970] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.675 [2024-06-07 16:19:45.316005] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.675 [2024-06-07 16:19:45.316040] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.675 [2024-06-07 16:19:45.316076] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.676 [2024-06-07 16:19:45.316105] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.676 [2024-06-07 16:19:45.316131] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.676 [2024-06-07 16:19:45.316159] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.676 [2024-06-07 16:19:45.316195] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.676 [2024-06-07 16:19:45.316234] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.676 [2024-06-07 16:19:45.316262] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.676 [2024-06-07 16:19:45.316647] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.676 [2024-06-07 16:19:45.316699] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.676 
[2024-06-07 16:19:45.316726] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.676 [2024-06-07 16:19:45.316757] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.676 [2024-06-07 16:19:45.316789] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.676 [2024-06-07 16:19:45.316821] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.676 [2024-06-07 16:19:45.316849] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.676 [2024-06-07 16:19:45.316884] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.676 [2024-06-07 16:19:45.316916] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.676 [2024-06-07 16:19:45.316947] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.676 [2024-06-07 16:19:45.316978] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.676 [2024-06-07 16:19:45.317008] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.676 [2024-06-07 16:19:45.317036] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.676 [2024-06-07 16:19:45.317069] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.676 [2024-06-07 16:19:45.317098] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.676 [2024-06-07 16:19:45.317130] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.676 [2024-06-07 16:19:45.317163] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.676 [2024-06-07 16:19:45.317210] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.676 [2024-06-07 16:19:45.317238] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.676 [2024-06-07 16:19:45.317286] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.676 [2024-06-07 16:19:45.317317] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.676 [2024-06-07 16:19:45.317350] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.676 [2024-06-07 16:19:45.317386] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.676 [2024-06-07 16:19:45.317423] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.676 [2024-06-07 16:19:45.317457] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.676 [2024-06-07 16:19:45.317490] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.676 [2024-06-07 16:19:45.317525] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.676 [2024-06-07 16:19:45.317565] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.676 [2024-06-07 16:19:45.317600] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.676 [2024-06-07 16:19:45.317632] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.676 [2024-06-07 16:19:45.317672] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:11:18.676 [2024-06-07 16:19:45.317703] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.676 [2024-06-07 16:19:45.317729] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.676 [2024-06-07 16:19:45.317757] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.676 [2024-06-07 16:19:45.317786] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.676 [2024-06-07 16:19:45.317814] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.676 [2024-06-07 16:19:45.317842] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.676 [2024-06-07 16:19:45.317871] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.676 [2024-06-07 16:19:45.317905] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.676 [2024-06-07 16:19:45.317935] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.676 [2024-06-07 16:19:45.317962] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.676 [2024-06-07 16:19:45.317988] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.676 [2024-06-07 16:19:45.318020] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.676 [2024-06-07 16:19:45.318050] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.676 [2024-06-07 16:19:45.318079] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.676 [2024-06-07 16:19:45.318112] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.676 [2024-06-07 16:19:45.318142] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.676 [2024-06-07 16:19:45.318172] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.676 [2024-06-07 16:19:45.318202] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.676 [2024-06-07 16:19:45.318232] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.676 [2024-06-07 16:19:45.318261] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.676 [2024-06-07 16:19:45.318289] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.676 [2024-06-07 16:19:45.318347] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.676 [2024-06-07 16:19:45.318377] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.676 [2024-06-07 16:19:45.318434] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.676 [2024-06-07 16:19:45.318463] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.676 [2024-06-07 16:19:45.318497] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.676 [2024-06-07 16:19:45.318526] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.676 [2024-06-07 16:19:45.318555] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.676 [2024-06-07 16:19:45.318582] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:11:18.676 [2024-06-07 16:19:45.318611] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.676 [2024-06-07 16:19:45.318639] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.676 [2024-06-07 16:19:45.318672] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.676 [2024-06-07 16:19:45.318700] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.676 [2024-06-07 16:19:45.319066] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.676 [2024-06-07 16:19:45.319093] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.676 [2024-06-07 16:19:45.319119] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.676 [2024-06-07 16:19:45.319144] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.676 [2024-06-07 16:19:45.319173] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.676 [2024-06-07 16:19:45.319205] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.676 [2024-06-07 16:19:45.319233] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.676 [2024-06-07 16:19:45.319261] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.676 [2024-06-07 16:19:45.319288] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.676 [2024-06-07 16:19:45.319316] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.676 [2024-06-07 
16:19:45.319339] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.676 [2024-06-07 
16:19:45.329472] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.680 [2024-06-07 16:19:45.329497] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.680 [2024-06-07 16:19:45.329523] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.680 [2024-06-07 16:19:45.329553] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.680 [2024-06-07 16:19:45.329733] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.680 [2024-06-07 16:19:45.329763] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.680 [2024-06-07 16:19:45.329791] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.680 [2024-06-07 16:19:45.329840] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.680 [2024-06-07 16:19:45.329868] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.680 [2024-06-07 16:19:45.329895] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.680 [2024-06-07 16:19:45.329922] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.680 [2024-06-07 16:19:45.329951] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.680 [2024-06-07 16:19:45.329979] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.680 [2024-06-07 16:19:45.330015] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.680 [2024-06-07 16:19:45.330042] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.680 [2024-06-07 16:19:45.330074] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.680 [2024-06-07 16:19:45.330101] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.680 [2024-06-07 16:19:45.330127] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.680 [2024-06-07 16:19:45.330155] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.680 [2024-06-07 16:19:45.330184] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.680 [2024-06-07 16:19:45.330213] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.680 [2024-06-07 16:19:45.330242] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.680 [2024-06-07 16:19:45.330272] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.680 [2024-06-07 16:19:45.330299] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.680 [2024-06-07 16:19:45.330329] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.680 [2024-06-07 16:19:45.330370] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.680 [2024-06-07 16:19:45.330404] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.680 [2024-06-07 16:19:45.330438] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.680 [2024-06-07 16:19:45.330463] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.680 
[2024-06-07 16:19:45.330494] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.680 [2024-06-07 16:19:45.330523] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.680 [2024-06-07 16:19:45.330550] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.680 [2024-06-07 16:19:45.330581] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.680 [2024-06-07 16:19:45.330612] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.680 [2024-06-07 16:19:45.330640] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.680 [2024-06-07 16:19:45.330672] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.680 [2024-06-07 16:19:45.330702] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.680 [2024-06-07 16:19:45.330738] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.680 [2024-06-07 16:19:45.330764] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.680 [2024-06-07 16:19:45.330794] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.680 [2024-06-07 16:19:45.330822] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.680 [2024-06-07 16:19:45.330849] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.680 [2024-06-07 16:19:45.330877] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.680 [2024-06-07 16:19:45.330905] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.680 [2024-06-07 16:19:45.330934] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.680 [2024-06-07 16:19:45.330985] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.680 [2024-06-07 16:19:45.331012] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.680 [2024-06-07 16:19:45.331054] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.680 [2024-06-07 16:19:45.331084] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.680 [2024-06-07 16:19:45.331113] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.680 [2024-06-07 16:19:45.331139] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.680 [2024-06-07 16:19:45.331168] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.680 [2024-06-07 16:19:45.331197] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.680 [2024-06-07 16:19:45.331225] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.680 [2024-06-07 16:19:45.331254] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.680 [2024-06-07 16:19:45.331283] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.680 [2024-06-07 16:19:45.331310] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.680 [2024-06-07 16:19:45.331339] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:11:18.680 [2024-06-07 16:19:45.331367] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.680 [2024-06-07 16:19:45.331393] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.680 [2024-06-07 16:19:45.331422] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.680 [2024-06-07 16:19:45.331453] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.680 [2024-06-07 16:19:45.331478] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.680 [2024-06-07 16:19:45.331506] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.680 [2024-06-07 16:19:45.331532] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.680 [2024-06-07 16:19:45.331560] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.680 [2024-06-07 16:19:45.331587] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.680 [2024-06-07 16:19:45.331613] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.680 [2024-06-07 16:19:45.331736] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.680 [2024-06-07 16:19:45.331765] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.680 [2024-06-07 16:19:45.331792] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.680 [2024-06-07 16:19:45.331819] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.680 [2024-06-07 16:19:45.331849] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.680 [2024-06-07 16:19:45.331877] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.680 [2024-06-07 16:19:45.331906] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.680 [2024-06-07 16:19:45.331937] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.680 [2024-06-07 16:19:45.331963] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.680 [2024-06-07 16:19:45.331990] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.680 [2024-06-07 16:19:45.332014] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.680 [2024-06-07 16:19:45.332044] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.680 [2024-06-07 16:19:45.332073] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.680 [2024-06-07 16:19:45.332104] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.680 [2024-06-07 16:19:45.332135] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.680 [2024-06-07 16:19:45.332162] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.680 [2024-06-07 16:19:45.332191] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.680 [2024-06-07 16:19:45.332593] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.680 [2024-06-07 16:19:45.332618] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:11:18.680 [2024-06-07 16:19:45.332642] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.680 [2024-06-07 16:19:45.332665] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.680 [2024-06-07 16:19:45.332688] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.680 [2024-06-07 16:19:45.332711] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.680 [2024-06-07 16:19:45.332734] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.680 [2024-06-07 16:19:45.332761] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.680 [2024-06-07 16:19:45.332787] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.681 [2024-06-07 16:19:45.332817] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.681 [2024-06-07 16:19:45.332850] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.681 [2024-06-07 16:19:45.332880] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.681 [2024-06-07 16:19:45.332907] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.681 [2024-06-07 16:19:45.332932] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.681 [2024-06-07 16:19:45.332956] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.681 [2024-06-07 16:19:45.332979] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.681 [2024-06-07 
16:19:45.333002] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.681 [2024-06-07 16:19:45.333025] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.681 [2024-06-07 16:19:45.333048] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.681 [2024-06-07 16:19:45.333071] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.681 [2024-06-07 16:19:45.333096] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.681 [2024-06-07 16:19:45.333127] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.681 [2024-06-07 16:19:45.333155] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.681 [2024-06-07 16:19:45.333182] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.681 [2024-06-07 16:19:45.333214] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.681 [2024-06-07 16:19:45.333242] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.681 [2024-06-07 16:19:45.333271] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.681 [2024-06-07 16:19:45.333303] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.681 [2024-06-07 16:19:45.333331] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.681 [2024-06-07 16:19:45.333358] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.681 [2024-06-07 16:19:45.333383] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.681 [2024-06-07 16:19:45.333408] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.681 [2024-06-07 16:19:45.333431] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.681 [2024-06-07 16:19:45.333455] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.681 [2024-06-07 16:19:45.333479] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.681 [2024-06-07 16:19:45.333502] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.681 [2024-06-07 16:19:45.333526] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.681 [2024-06-07 16:19:45.333549] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.681 [2024-06-07 16:19:45.333572] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.681 [2024-06-07 16:19:45.333596] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.681 [2024-06-07 16:19:45.333620] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.681 [2024-06-07 16:19:45.333644] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.681 [2024-06-07 16:19:45.333667] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.681 [2024-06-07 16:19:45.333690] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.681 [2024-06-07 16:19:45.333713] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.681 
[2024-06-07 16:19:45.333736] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.681 [2024-06-07 16:19:45.333760] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.681 [2024-06-07 16:19:45.333784] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.681 [2024-06-07 16:19:45.333807] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.681 [2024-06-07 16:19:45.333831] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.681 [2024-06-07 16:19:45.333855] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.681 [2024-06-07 16:19:45.333878] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.681 [2024-06-07 16:19:45.333902] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.681 [2024-06-07 16:19:45.333925] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.681 [2024-06-07 16:19:45.333948] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.681 [2024-06-07 16:19:45.333971] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.681 [2024-06-07 16:19:45.333994] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.681 [2024-06-07 16:19:45.334016] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.681 [2024-06-07 16:19:45.334039] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.681 [2024-06-07 16:19:45.334062] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.681 [2024-06-07 16:19:45.334085] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.681 [2024-06-07 16:19:45.334109] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.681 [2024-06-07 16:19:45.334133] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.681 [2024-06-07 16:19:45.334649] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.681 [2024-06-07 16:19:45.334689] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.681 [2024-06-07 16:19:45.334718] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.681 [2024-06-07 16:19:45.334773] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.681 [2024-06-07 16:19:45.334803] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.681 [2024-06-07 16:19:45.334838] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.681 [2024-06-07 16:19:45.334866] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.681 [2024-06-07 16:19:45.334902] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.681 [2024-06-07 16:19:45.334930] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.681 [2024-06-07 16:19:45.334980] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.681 [2024-06-07 16:19:45.335009] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:11:18.681 [2024-06-07 16:19:45.335069] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.681 [2024-06-07 16:19:45.335097] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.681 [2024-06-07 16:19:45.335126] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.681 [2024-06-07 16:19:45.335152] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.681 [2024-06-07 16:19:45.335199] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.681 [2024-06-07 16:19:45.335226] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.681 [2024-06-07 16:19:45.335262] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.681 [2024-06-07 16:19:45.335290] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.681 [2024-06-07 16:19:45.335318] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.681 [2024-06-07 16:19:45.335345] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.681 [2024-06-07 16:19:45.335381] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.681 [2024-06-07 16:19:45.335411] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.681 [2024-06-07 16:19:45.335448] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.681 [2024-06-07 16:19:45.335475] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.681 [2024-06-07 16:19:45.335506] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.681 [2024-06-07 16:19:45.335536] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.681 [2024-06-07 16:19:45.335567] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.681 [2024-06-07 16:19:45.335598] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.681 [2024-06-07 16:19:45.335627] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.681 [2024-06-07 16:19:45.335653] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.681 [2024-06-07 16:19:45.335682] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.681 [2024-06-07 16:19:45.335713] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.681 [2024-06-07 16:19:45.335740] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.681 [2024-06-07 16:19:45.335767] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.681 [2024-06-07 16:19:45.335797] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.681 [2024-06-07 16:19:45.335832] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.681 [2024-06-07 16:19:45.335862] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.681 [2024-06-07 16:19:45.335889] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.681 [2024-06-07 16:19:45.335916] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:11:18.681 [2024-06-07 16:19:45.335942] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:11:18.682 Message suppressed 999 times: Read completed with error (sct=0, sc=15)
00:11:18.682 16:19:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2968093
00:11:18.682 16:19:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:11:18.685 [2024-06-07 16:19:45.346439] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 *
block size 512 > SGL length 1 00:11:18.685 [2024-06-07 16:19:45.346469] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.685 [2024-06-07 16:19:45.346497] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.685 [2024-06-07 16:19:45.346524] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.685 [2024-06-07 16:19:45.346550] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.685 [2024-06-07 16:19:45.346579] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.685 [2024-06-07 16:19:45.346610] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.685 [2024-06-07 16:19:45.346633] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.685 [2024-06-07 16:19:45.346656] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.685 [2024-06-07 16:19:45.346680] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.685 [2024-06-07 16:19:45.346704] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.685 [2024-06-07 16:19:45.346726] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.685 [2024-06-07 16:19:45.346750] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.685 [2024-06-07 16:19:45.346774] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.685 [2024-06-07 16:19:45.346797] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.685 [2024-06-07 
16:19:45.346820] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.685 [2024-06-07 16:19:45.346844] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.685 [2024-06-07 16:19:45.346868] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.685 [2024-06-07 16:19:45.346892] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.685 [2024-06-07 16:19:45.346915] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.685 [2024-06-07 16:19:45.346938] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.685 [2024-06-07 16:19:45.346961] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.685 [2024-06-07 16:19:45.346985] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.685 [2024-06-07 16:19:45.347008] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.685 [2024-06-07 16:19:45.347031] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.685 [2024-06-07 16:19:45.347054] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.685 [2024-06-07 16:19:45.347078] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.685 [2024-06-07 16:19:45.347101] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.685 [2024-06-07 16:19:45.347124] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.685 [2024-06-07 16:19:45.347146] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.685 [2024-06-07 16:19:45.347168] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.685 [2024-06-07 16:19:45.347192] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.685 [2024-06-07 16:19:45.347215] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.685 [2024-06-07 16:19:45.347238] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.685 [2024-06-07 16:19:45.347263] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.685 [2024-06-07 16:19:45.347286] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.685 [2024-06-07 16:19:45.347309] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.685 [2024-06-07 16:19:45.347333] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.685 [2024-06-07 16:19:45.347356] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.685 [2024-06-07 16:19:45.347379] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.685 [2024-06-07 16:19:45.347405] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.685 [2024-06-07 16:19:45.347429] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.685 [2024-06-07 16:19:45.347453] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.685 [2024-06-07 16:19:45.347947] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.685 
[2024-06-07 16:19:45.347979] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.685 [2024-06-07 16:19:45.348010] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.685 [2024-06-07 16:19:45.348039] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.685 [2024-06-07 16:19:45.348070] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.685 [2024-06-07 16:19:45.348096] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.685 [2024-06-07 16:19:45.348126] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.685 [2024-06-07 16:19:45.348156] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.685 [2024-06-07 16:19:45.348184] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.685 [2024-06-07 16:19:45.348212] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.685 [2024-06-07 16:19:45.348244] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.685 [2024-06-07 16:19:45.348272] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.685 [2024-06-07 16:19:45.348305] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.685 [2024-06-07 16:19:45.348333] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.685 [2024-06-07 16:19:45.348360] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.685 [2024-06-07 16:19:45.348389] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.685 [2024-06-07 16:19:45.348424] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.685 [2024-06-07 16:19:45.348452] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.685 [2024-06-07 16:19:45.348478] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.685 [2024-06-07 16:19:45.348505] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.685 [2024-06-07 16:19:45.348538] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.685 [2024-06-07 16:19:45.348570] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.685 [2024-06-07 16:19:45.348601] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.685 [2024-06-07 16:19:45.348628] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.685 [2024-06-07 16:19:45.348652] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.685 [2024-06-07 16:19:45.348682] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.685 [2024-06-07 16:19:45.348710] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.685 [2024-06-07 16:19:45.348736] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.685 [2024-06-07 16:19:45.348763] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.685 [2024-06-07 16:19:45.348787] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:11:18.685 [2024-06-07 16:19:45.348824] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.685 [2024-06-07 16:19:45.348850] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.685 [2024-06-07 16:19:45.348881] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.685 [2024-06-07 16:19:45.348912] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.685 [2024-06-07 16:19:45.348962] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.685 [2024-06-07 16:19:45.348991] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.685 [2024-06-07 16:19:45.349020] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.685 [2024-06-07 16:19:45.349046] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.685 [2024-06-07 16:19:45.349077] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.685 [2024-06-07 16:19:45.349103] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.685 [2024-06-07 16:19:45.349132] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.686 [2024-06-07 16:19:45.349161] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.686 [2024-06-07 16:19:45.349191] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.686 [2024-06-07 16:19:45.349217] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.686 [2024-06-07 16:19:45.349245] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.686 [2024-06-07 16:19:45.349272] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.686 [2024-06-07 16:19:45.349301] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.686 [2024-06-07 16:19:45.349644] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.686 [2024-06-07 16:19:45.349675] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.686 [2024-06-07 16:19:45.349706] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.686 [2024-06-07 16:19:45.349735] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.686 [2024-06-07 16:19:45.349763] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.686 [2024-06-07 16:19:45.349792] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.686 [2024-06-07 16:19:45.349819] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.686 [2024-06-07 16:19:45.349846] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.686 [2024-06-07 16:19:45.349872] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.686 [2024-06-07 16:19:45.349901] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.686 [2024-06-07 16:19:45.349929] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.686 [2024-06-07 16:19:45.349959] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:11:18.686 [2024-06-07 16:19:45.349986] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.686 [2024-06-07 16:19:45.350012] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.686 [2024-06-07 16:19:45.350039] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.686 [2024-06-07 16:19:45.350068] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.686 [2024-06-07 16:19:45.350100] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.686 [2024-06-07 16:19:45.350131] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.686 [2024-06-07 16:19:45.350160] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.686 [2024-06-07 16:19:45.350189] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.686 [2024-06-07 16:19:45.350221] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.686 [2024-06-07 16:19:45.350249] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.686 [2024-06-07 16:19:45.350278] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.686 [2024-06-07 16:19:45.350305] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.686 [2024-06-07 16:19:45.350338] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.686 [2024-06-07 16:19:45.350365] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.686 [2024-06-07 
16:19:45.350396] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.686 [2024-06-07 16:19:45.350428] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.686 [2024-06-07 16:19:45.350459] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.686 [2024-06-07 16:19:45.350486] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.686 [2024-06-07 16:19:45.350524] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.686 [2024-06-07 16:19:45.350552] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.686 [2024-06-07 16:19:45.350583] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.686 [2024-06-07 16:19:45.350611] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.686 [2024-06-07 16:19:45.350636] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.686 [2024-06-07 16:19:45.350665] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.686 [2024-06-07 16:19:45.350693] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.686 [2024-06-07 16:19:45.350724] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.686 [2024-06-07 16:19:45.350759] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.686 [2024-06-07 16:19:45.350790] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.686 [2024-06-07 16:19:45.350828] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.686 [2024-06-07 16:19:45.350866] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.686 [2024-06-07 16:19:45.350890] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.686 [2024-06-07 16:19:45.350920] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.686 [2024-06-07 16:19:45.350947] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.686 [2024-06-07 16:19:45.350975] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.686 [2024-06-07 16:19:45.351007] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.686 [2024-06-07 16:19:45.351038] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.686 [2024-06-07 16:19:45.351065] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.686 [2024-06-07 16:19:45.351093] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.686 [2024-06-07 16:19:45.351122] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.686 [2024-06-07 16:19:45.351150] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.686 [2024-06-07 16:19:45.351183] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.686 [2024-06-07 16:19:45.351217] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.686 [2024-06-07 16:19:45.351252] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.686 
[2024-06-07 16:19:45.351275] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.686 [2024-06-07 16:19:45.351307] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.686 [2024-06-07 16:19:45.351333] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.686 [2024-06-07 16:19:45.351359] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.686 [2024-06-07 16:19:45.351390] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.686 [2024-06-07 16:19:45.351421] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.686 [2024-06-07 16:19:45.351449] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.686 [2024-06-07 16:19:45.351476] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.686 [2024-06-07 16:19:45.351504] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.686 [2024-06-07 16:19:45.351669] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.686 [2024-06-07 16:19:45.351700] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.686 [2024-06-07 16:19:45.351728] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.686 [2024-06-07 16:19:45.351755] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.686 [2024-06-07 16:19:45.351784] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.686 [2024-06-07 16:19:45.351812] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.686 [2024-06-07 16:19:45.351843] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.686 [2024-06-07 16:19:45.351871] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.686 [2024-06-07 16:19:45.351900] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.686 [2024-06-07 16:19:45.351927] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.686 [2024-06-07 16:19:45.351958] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.686 [2024-06-07 16:19:45.351987] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.686 [2024-06-07 16:19:45.352014] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.686 [2024-06-07 16:19:45.352039] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.686 [2024-06-07 16:19:45.352069] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.686 [2024-06-07 16:19:45.352097] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.686 [2024-06-07 16:19:45.352477] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.686 [2024-06-07 16:19:45.352502] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.686 [2024-06-07 16:19:45.352526] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.686 [2024-06-07 16:19:45.352550] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:11:18.686 [2024-06-07 16:19:45.352573] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.686 [2024-06-07 16:19:45.352596] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.686 [2024-06-07 16:19:45.352619] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.686 [2024-06-07 16:19:45.352641] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.686 [2024-06-07 16:19:45.352664] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.686 [2024-06-07 16:19:45.352694] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.686 [2024-06-07 16:19:45.352723] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.686 [2024-06-07 16:19:45.352754] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.687 [2024-06-07 16:19:45.352785] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.687 [2024-06-07 16:19:45.352816] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.687 [2024-06-07 16:19:45.352845] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.687 [2024-06-07 16:19:45.352874] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.687 [2024-06-07 16:19:45.352904] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.687 [2024-06-07 16:19:45.352928] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.687 [2024-06-07 16:19:45.352953] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.687 [2024-06-07 16:19:45.352981] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[previous message repeated verbatim from 2024-06-07 16:19:45.353015 through 16:19:45.362894; identical repeats omitted]
> SGL length 1 00:11:18.690 [2024-06-07 16:19:45.362924] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.690 [2024-06-07 16:19:45.362953] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.690 [2024-06-07 16:19:45.362981] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.690 [2024-06-07 16:19:45.363009] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.690 [2024-06-07 16:19:45.363041] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.690 [2024-06-07 16:19:45.363068] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.690 [2024-06-07 16:19:45.363096] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.690 [2024-06-07 16:19:45.363127] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.690 [2024-06-07 16:19:45.363161] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.690 [2024-06-07 16:19:45.363189] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.690 [2024-06-07 16:19:45.363219] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.690 [2024-06-07 16:19:45.363247] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.690 [2024-06-07 16:19:45.363278] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.690 [2024-06-07 16:19:45.363306] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.690 [2024-06-07 16:19:45.363337] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.690 [2024-06-07 16:19:45.363364] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.690 [2024-06-07 16:19:45.363395] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.690 [2024-06-07 16:19:45.363426] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.690 [2024-06-07 16:19:45.363455] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.690 [2024-06-07 16:19:45.363481] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.690 [2024-06-07 16:19:45.363510] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.690 [2024-06-07 16:19:45.363539] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.690 [2024-06-07 16:19:45.363567] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.690 [2024-06-07 16:19:45.363598] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.690 [2024-06-07 16:19:45.363624] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.690 [2024-06-07 16:19:45.363652] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.690 [2024-06-07 16:19:45.363680] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.690 [2024-06-07 16:19:45.363710] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.690 [2024-06-07 16:19:45.363741] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:11:18.690 [2024-06-07 16:19:45.363769] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.690 [2024-06-07 16:19:45.363797] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.690 [2024-06-07 16:19:45.363825] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.690 [2024-06-07 16:19:45.363854] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.690 [2024-06-07 16:19:45.363891] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.690 [2024-06-07 16:19:45.363931] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.690 [2024-06-07 16:19:45.363957] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.690 [2024-06-07 16:19:45.363986] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.690 [2024-06-07 16:19:45.364013] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.690 [2024-06-07 16:19:45.364040] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.690 [2024-06-07 16:19:45.364068] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.690 [2024-06-07 16:19:45.364097] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.690 [2024-06-07 16:19:45.364123] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.690 [2024-06-07 16:19:45.364153] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.690 [2024-06-07 
16:19:45.364181] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.690 [2024-06-07 16:19:45.364208] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.690 [2024-06-07 16:19:45.364236] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.690 [2024-06-07 16:19:45.364265] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.690 [2024-06-07 16:19:45.364295] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.690 [2024-06-07 16:19:45.364320] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.690 [2024-06-07 16:19:45.364348] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.690 [2024-06-07 16:19:45.364377] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.690 [2024-06-07 16:19:45.364409] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.690 [2024-06-07 16:19:45.364446] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.690 [2024-06-07 16:19:45.364486] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.690 [2024-06-07 16:19:45.364524] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.690 [2024-06-07 16:19:45.364562] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.690 [2024-06-07 16:19:45.364599] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.690 [2024-06-07 16:19:45.364629] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.690 [2024-06-07 16:19:45.364657] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.690 [2024-06-07 16:19:45.364792] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.690 [2024-06-07 16:19:45.364823] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.690 [2024-06-07 16:19:45.364853] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.690 [2024-06-07 16:19:45.364883] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.690 [2024-06-07 16:19:45.364913] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.690 [2024-06-07 16:19:45.364945] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.690 [2024-06-07 16:19:45.364975] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.690 [2024-06-07 16:19:45.365003] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.690 [2024-06-07 16:19:45.365026] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.690 [2024-06-07 16:19:45.365059] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.690 [2024-06-07 16:19:45.365088] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.690 [2024-06-07 16:19:45.365115] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.690 [2024-06-07 16:19:45.365144] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.690 
[2024-06-07 16:19:45.365174] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.690 [2024-06-07 16:19:45.365201] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.690 [2024-06-07 16:19:45.365226] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.690 [2024-06-07 16:19:45.365627] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.690 [2024-06-07 16:19:45.365657] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.690 [2024-06-07 16:19:45.365686] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.690 [2024-06-07 16:19:45.365719] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.690 [2024-06-07 16:19:45.365748] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.690 [2024-06-07 16:19:45.365777] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.690 [2024-06-07 16:19:45.365806] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.691 [2024-06-07 16:19:45.365831] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.691 [2024-06-07 16:19:45.365853] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.691 [2024-06-07 16:19:45.365876] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.691 [2024-06-07 16:19:45.365899] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.691 [2024-06-07 16:19:45.365922] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.691 [2024-06-07 16:19:45.365945] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.691 [2024-06-07 16:19:45.365969] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.691 [2024-06-07 16:19:45.365992] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.691 [2024-06-07 16:19:45.366018] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.691 [2024-06-07 16:19:45.366047] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.691 [2024-06-07 16:19:45.366076] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.691 [2024-06-07 16:19:45.366104] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.691 [2024-06-07 16:19:45.366132] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.691 [2024-06-07 16:19:45.366162] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.691 [2024-06-07 16:19:45.366190] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.691 [2024-06-07 16:19:45.366213] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.691 [2024-06-07 16:19:45.366236] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.691 [2024-06-07 16:19:45.366259] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.691 [2024-06-07 16:19:45.366282] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:11:18.691 [2024-06-07 16:19:45.366305] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.691 [2024-06-07 16:19:45.366327] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.691 [2024-06-07 16:19:45.366351] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.691 [2024-06-07 16:19:45.366374] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.691 [2024-06-07 16:19:45.366398] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.691 [2024-06-07 16:19:45.366424] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.691 [2024-06-07 16:19:45.366448] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.691 [2024-06-07 16:19:45.366471] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.691 [2024-06-07 16:19:45.366495] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.691 [2024-06-07 16:19:45.366518] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.691 [2024-06-07 16:19:45.366541] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.691 [2024-06-07 16:19:45.366563] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.691 [2024-06-07 16:19:45.366587] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.691 [2024-06-07 16:19:45.366611] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.691 [2024-06-07 16:19:45.366635] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.691 [2024-06-07 16:19:45.366658] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.691 [2024-06-07 16:19:45.366683] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.691 [2024-06-07 16:19:45.366707] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.691 [2024-06-07 16:19:45.366730] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.691 [2024-06-07 16:19:45.366753] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.691 [2024-06-07 16:19:45.366776] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.691 [2024-06-07 16:19:45.366799] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.691 [2024-06-07 16:19:45.366822] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.691 [2024-06-07 16:19:45.366846] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.691 [2024-06-07 16:19:45.366869] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.691 [2024-06-07 16:19:45.366893] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.691 [2024-06-07 16:19:45.366916] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.691 [2024-06-07 16:19:45.366939] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.691 [2024-06-07 16:19:45.366963] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:11:18.691 [2024-06-07 16:19:45.366987] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.691 [2024-06-07 16:19:45.367009] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.691 [2024-06-07 16:19:45.367033] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.691 [2024-06-07 16:19:45.367056] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.691 [2024-06-07 16:19:45.367080] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.691 [2024-06-07 16:19:45.367103] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.691 [2024-06-07 16:19:45.367126] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.691 [2024-06-07 16:19:45.367149] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.691 [2024-06-07 16:19:45.367172] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.691 [2024-06-07 16:19:45.367533] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.691 [2024-06-07 16:19:45.367565] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.691 [2024-06-07 16:19:45.367593] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.691 [2024-06-07 16:19:45.367628] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.691 [2024-06-07 16:19:45.367655] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.691 [2024-06-07 
16:19:45.367690] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.691 [2024-06-07 16:19:45.367719] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.691 [2024-06-07 16:19:45.367748] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.691 [2024-06-07 16:19:45.367784] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.691 [2024-06-07 16:19:45.367813] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.691 [2024-06-07 16:19:45.367842] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.691 [2024-06-07 16:19:45.367870] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.691 [2024-06-07 16:19:45.367898] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.691 [2024-06-07 16:19:45.367927] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.691 [2024-06-07 16:19:45.367968] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.691 [2024-06-07 16:19:45.367998] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.691 [2024-06-07 16:19:45.368035] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.691 [2024-06-07 16:19:45.368066] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.691 [2024-06-07 16:19:45.368095] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.691 [2024-06-07 16:19:45.368123] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.691 [2024-06-07 16:19:45.368153] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.691 [2024-06-07 16:19:45.368179] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.691 [2024-06-07 16:19:45.368209] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.691 [2024-06-07 16:19:45.368238] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.691 [2024-06-07 16:19:45.368268] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.691 [2024-06-07 16:19:45.368299] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.691 [2024-06-07 16:19:45.368329] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.691 [2024-06-07 16:19:45.368365] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.691 [2024-06-07 16:19:45.368393] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.691 [2024-06-07 16:19:45.368425] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.691 [2024-06-07 16:19:45.368460] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.691 [2024-06-07 16:19:45.368487] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.691 [2024-06-07 16:19:45.368514] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.691 [2024-06-07 16:19:45.368544] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.691 
[2024-06-07 16:19:45.368572] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.691 [2024-06-07 16:19:45.368600] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.691 [2024-06-07 16:19:45.368629] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.691 [2024-06-07 16:19:45.368661] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.691 [2024-06-07 16:19:45.368695] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.691 [2024-06-07 16:19:45.368722] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.691 [2024-06-07 16:19:45.368750] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.692 [2024-06-07 16:19:45.368777] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.692 [2024-06-07 16:19:45.368807] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.692 [2024-06-07 16:19:45.368837] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.692 [2024-06-07 16:19:45.368875] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.692 [2024-06-07 16:19:45.368913] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.692 [2024-06-07 16:19:45.368947] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.692 [2024-06-07 16:19:45.369280] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.692 [2024-06-07 16:19:45.369308] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.692 [2024-06-07 16:19:45.369335] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.692 [2024-06-07 16:19:45.369364] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.692 [2024-06-07 16:19:45.369392] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.692 [2024-06-07 16:19:45.369422] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.692 [2024-06-07 16:19:45.369449] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.692 [2024-06-07 16:19:45.369476] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.692 [2024-06-07 16:19:45.369508] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.692 [2024-06-07 16:19:45.369540] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.692 [2024-06-07 16:19:45.369572] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.692 [2024-06-07 16:19:45.369601] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.692 [2024-06-07 16:19:45.369646] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.692 [2024-06-07 16:19:45.369676] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.692 [2024-06-07 16:19:45.369706] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.692 [2024-06-07 16:19:45.369733] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:11:18.692 [2024-06-07 16:19:45.369765] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.692 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:11:18.695 [2024-06-07 16:19:45.379620] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512
> SGL length 1 00:11:18.695 [2024-06-07 16:19:45.379649] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.695 [2024-06-07 16:19:45.379677] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.695 [2024-06-07 16:19:45.379735] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.695 [2024-06-07 16:19:45.379763] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.695 [2024-06-07 16:19:45.379803] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.695 [2024-06-07 16:19:45.379831] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.695 [2024-06-07 16:19:45.379856] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.695 [2024-06-07 16:19:45.379884] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.695 [2024-06-07 16:19:45.379911] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.695 [2024-06-07 16:19:45.379939] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.695 [2024-06-07 16:19:45.379968] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.695 [2024-06-07 16:19:45.379996] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.695 [2024-06-07 16:19:45.380028] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.695 [2024-06-07 16:19:45.380055] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.695 [2024-06-07 16:19:45.380078] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.695 [2024-06-07 16:19:45.380101] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.695 [2024-06-07 16:19:45.380125] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.695 [2024-06-07 16:19:45.380149] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.695 [2024-06-07 16:19:45.380173] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.695 [2024-06-07 16:19:45.380197] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.695 [2024-06-07 16:19:45.380221] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.695 [2024-06-07 16:19:45.380244] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.695 [2024-06-07 16:19:45.380266] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.695 [2024-06-07 16:19:45.380289] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.695 [2024-06-07 16:19:45.380312] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.695 [2024-06-07 16:19:45.380335] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.695 [2024-06-07 16:19:45.380358] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.695 [2024-06-07 16:19:45.380382] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.695 [2024-06-07 16:19:45.380408] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:11:18.695 [2024-06-07 16:19:45.380434] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.695 [2024-06-07 16:19:45.380463] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.695 [2024-06-07 16:19:45.380884] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.695 [2024-06-07 16:19:45.380922] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.695 [2024-06-07 16:19:45.380952] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.695 [2024-06-07 16:19:45.380982] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.695 [2024-06-07 16:19:45.381013] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.695 [2024-06-07 16:19:45.381040] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.695 [2024-06-07 16:19:45.381068] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.695 [2024-06-07 16:19:45.381093] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.695 [2024-06-07 16:19:45.381123] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.695 [2024-06-07 16:19:45.381173] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.695 [2024-06-07 16:19:45.381200] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.695 [2024-06-07 16:19:45.381242] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.695 [2024-06-07 
16:19:45.381270] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.695 [2024-06-07 16:19:45.381300] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.695 [2024-06-07 16:19:45.381330] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.695 [2024-06-07 16:19:45.381363] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.695 [2024-06-07 16:19:45.381390] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.695 [2024-06-07 16:19:45.381419] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.695 [2024-06-07 16:19:45.381447] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.695 [2024-06-07 16:19:45.381475] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.695 [2024-06-07 16:19:45.381503] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.695 [2024-06-07 16:19:45.381534] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.695 [2024-06-07 16:19:45.381564] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.695 [2024-06-07 16:19:45.381596] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.695 [2024-06-07 16:19:45.381627] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.695 [2024-06-07 16:19:45.381657] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.695 [2024-06-07 16:19:45.381685] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.695 [2024-06-07 16:19:45.381718] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.695 [2024-06-07 16:19:45.381747] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.695 [2024-06-07 16:19:45.381777] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.695 [2024-06-07 16:19:45.381805] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.695 [2024-06-07 16:19:45.381840] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.695 [2024-06-07 16:19:45.381870] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.695 [2024-06-07 16:19:45.381909] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.695 [2024-06-07 16:19:45.381940] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.695 [2024-06-07 16:19:45.381976] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.695 [2024-06-07 16:19:45.382004] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.695 [2024-06-07 16:19:45.382032] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.696 [2024-06-07 16:19:45.382066] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.696 [2024-06-07 16:19:45.382104] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.696 [2024-06-07 16:19:45.382137] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.696 
[2024-06-07 16:19:45.382161] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.696 [2024-06-07 16:19:45.382187] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.696 [2024-06-07 16:19:45.382214] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.696 [2024-06-07 16:19:45.382241] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.696 [2024-06-07 16:19:45.382271] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.696 [2024-06-07 16:19:45.382301] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.696 [2024-06-07 16:19:45.382329] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.696 [2024-06-07 16:19:45.382355] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.696 [2024-06-07 16:19:45.382386] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.696 [2024-06-07 16:19:45.382418] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.696 [2024-06-07 16:19:45.382448] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.696 [2024-06-07 16:19:45.382475] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.696 [2024-06-07 16:19:45.382501] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.696 [2024-06-07 16:19:45.382525] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.696 [2024-06-07 16:19:45.382555] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.696 [2024-06-07 16:19:45.382588] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.696 [2024-06-07 16:19:45.382617] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.696 [2024-06-07 16:19:45.382646] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.696 [2024-06-07 16:19:45.382676] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.696 [2024-06-07 16:19:45.382703] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.696 [2024-06-07 16:19:45.382733] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.696 [2024-06-07 16:19:45.382760] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.696 [2024-06-07 16:19:45.383120] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.696 [2024-06-07 16:19:45.383151] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.696 [2024-06-07 16:19:45.383203] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.696 [2024-06-07 16:19:45.383232] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.696 [2024-06-07 16:19:45.383277] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.696 [2024-06-07 16:19:45.383306] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.696 [2024-06-07 16:19:45.383334] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:11:18.696 [2024-06-07 16:19:45.383362] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.696 [2024-06-07 16:19:45.383395] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.696 [2024-06-07 16:19:45.383425] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.696 [2024-06-07 16:19:45.383460] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.696 [2024-06-07 16:19:45.383488] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.696 [2024-06-07 16:19:45.383520] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.696 [2024-06-07 16:19:45.383548] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.696 [2024-06-07 16:19:45.383580] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.696 [2024-06-07 16:19:45.383610] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.696 [2024-06-07 16:19:45.383639] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.696 [2024-06-07 16:19:45.383670] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.696 [2024-06-07 16:19:45.383700] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.696 [2024-06-07 16:19:45.383727] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.696 [2024-06-07 16:19:45.383757] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.696 [2024-06-07 16:19:45.383787] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.696 [2024-06-07 16:19:45.383816] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.696 [2024-06-07 16:19:45.383842] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.696 [2024-06-07 16:19:45.383869] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.696 [2024-06-07 16:19:45.383897] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.696 [2024-06-07 16:19:45.383927] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.696 [2024-06-07 16:19:45.383962] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.696 [2024-06-07 16:19:45.383989] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.696 [2024-06-07 16:19:45.384015] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.696 [2024-06-07 16:19:45.384041] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.696 [2024-06-07 16:19:45.384068] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.696 [2024-06-07 16:19:45.384096] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.696 [2024-06-07 16:19:45.384123] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.696 [2024-06-07 16:19:45.384151] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.696 [2024-06-07 16:19:45.384178] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:11:18.696 [2024-06-07 16:19:45.384210] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.696 [2024-06-07 16:19:45.384238] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.696 [2024-06-07 16:19:45.384268] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.696 [2024-06-07 16:19:45.384298] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.696 [2024-06-07 16:19:45.384325] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.696 [2024-06-07 16:19:45.384355] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.696 [2024-06-07 16:19:45.384384] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.696 [2024-06-07 16:19:45.384413] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.696 [2024-06-07 16:19:45.384444] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.696 [2024-06-07 16:19:45.384478] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.696 [2024-06-07 16:19:45.384510] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.696 [2024-06-07 16:19:45.384543] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.696 [2024-06-07 16:19:45.384577] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.696 [2024-06-07 16:19:45.384604] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.696 [2024-06-07 
16:19:45.384635] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.696 [2024-06-07 16:19:45.384666] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.696 [2024-06-07 16:19:45.384695] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.696 [2024-06-07 16:19:45.384724] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.696 [2024-06-07 16:19:45.384751] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.696 [2024-06-07 16:19:45.384781] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.696 [2024-06-07 16:19:45.384811] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.696 [2024-06-07 16:19:45.384841] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.696 [2024-06-07 16:19:45.384870] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.696 [2024-06-07 16:19:45.384897] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.696 [2024-06-07 16:19:45.384926] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.696 [2024-06-07 16:19:45.384956] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.696 [2024-06-07 16:19:45.384980] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.696 [2024-06-07 16:19:45.385005] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.696 [2024-06-07 16:19:45.385404] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.696 [2024-06-07 16:19:45.385429] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.696 [2024-06-07 16:19:45.385461] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.696 [2024-06-07 16:19:45.385484] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.696 [2024-06-07 16:19:45.385507] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.696 [2024-06-07 16:19:45.385531] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.696 [2024-06-07 16:19:45.385555] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.696 [2024-06-07 16:19:45.385577] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.696 [2024-06-07 16:19:45.385600] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.696 [2024-06-07 16:19:45.385622] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.697 [2024-06-07 16:19:45.385646] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.697 [2024-06-07 16:19:45.385669] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.697 [2024-06-07 16:19:45.385693] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.697 [2024-06-07 16:19:45.385716] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.697 [2024-06-07 16:19:45.385739] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.697 
[2024-06-07 16:19:45.385762] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.697 [2024-06-07 16:19:45.385785] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.697 [2024-06-07 16:19:45.385810] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.697 [2024-06-07 16:19:45.385840] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.697 [2024-06-07 16:19:45.385867] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.697 [2024-06-07 16:19:45.385898] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.697 [2024-06-07 16:19:45.385928] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.697 [2024-06-07 16:19:45.385956] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.697 [2024-06-07 16:19:45.385987] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.697 [2024-06-07 16:19:45.386020] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.697 [2024-06-07 16:19:45.386050] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.697 [2024-06-07 16:19:45.386078] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.697 [2024-06-07 16:19:45.386106] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.697 [2024-06-07 16:19:45.386136] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.697 [2024-06-07 16:19:45.386161] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.697 [2024-06-07 16:19:45.386186] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.697 [2024-06-07 16:19:45.386210] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.697 [2024-06-07 16:19:45.386234] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.697 [2024-06-07 16:19:45.386258] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.697 [2024-06-07 16:19:45.386282] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.697 [2024-06-07 16:19:45.386305] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.697 [2024-06-07 16:19:45.386327] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.697 [2024-06-07 16:19:45.386350] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.697 [2024-06-07 16:19:45.386374] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.697 [2024-06-07 16:19:45.386397] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.697 [2024-06-07 16:19:45.386423] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.697 [2024-06-07 16:19:45.386446] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.697 [2024-06-07 16:19:45.386469] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.697 [2024-06-07 16:19:45.386493] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:11:18.697 [2024-06-07 16:19:45.386517] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.697 [... identical *ERROR* line from ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd repeated continuously, timestamps 2024-06-07 16:19:45.386540 through 16:19:45.397335 ...] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512
> SGL length 1 00:11:18.700 [2024-06-07 16:19:45.397363] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.700 [2024-06-07 16:19:45.397391] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.700 [2024-06-07 16:19:45.397423] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.700 [2024-06-07 16:19:45.397450] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.700 [2024-06-07 16:19:45.397481] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.700 [2024-06-07 16:19:45.397511] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.700 [2024-06-07 16:19:45.397540] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.700 [2024-06-07 16:19:45.397567] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.700 [2024-06-07 16:19:45.397598] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.700 [2024-06-07 16:19:45.397625] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.700 [2024-06-07 16:19:45.397672] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.700 [2024-06-07 16:19:45.397701] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.700 [2024-06-07 16:19:45.397728] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.700 [2024-06-07 16:19:45.397756] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.700 [2024-06-07 16:19:45.397783] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.700 [2024-06-07 16:19:45.397812] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.700 [2024-06-07 16:19:45.397838] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.700 [2024-06-07 16:19:45.397866] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.700 [2024-06-07 16:19:45.397894] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.700 [2024-06-07 16:19:45.397921] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.700 [2024-06-07 16:19:45.397947] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.700 [2024-06-07 16:19:45.397974] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.700 [2024-06-07 16:19:45.398002] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.700 [2024-06-07 16:19:45.398025] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.700 [2024-06-07 16:19:45.398054] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.700 [2024-06-07 16:19:45.398081] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.700 [2024-06-07 16:19:45.398110] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.700 [2024-06-07 16:19:45.398139] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.700 [2024-06-07 16:19:45.398167] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:11:18.700 [2024-06-07 16:19:45.398198] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.700 [2024-06-07 16:19:45.398226] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.700 [2024-06-07 16:19:45.398256] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.700 [2024-06-07 16:19:45.398286] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.700 [2024-06-07 16:19:45.398313] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.700 [2024-06-07 16:19:45.398692] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.700 [2024-06-07 16:19:45.398717] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.700 [2024-06-07 16:19:45.398744] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.700 [2024-06-07 16:19:45.398766] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.700 [2024-06-07 16:19:45.398790] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.700 [2024-06-07 16:19:45.398812] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.700 [2024-06-07 16:19:45.398833] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.700 [2024-06-07 16:19:45.398855] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.700 [2024-06-07 16:19:45.398877] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.700 [2024-06-07 
16:19:45.398899] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.701 [2024-06-07 16:19:45.398922] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.701 [2024-06-07 16:19:45.398951] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.701 [2024-06-07 16:19:45.398978] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.701 [2024-06-07 16:19:45.399004] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.701 [2024-06-07 16:19:45.399034] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.701 [2024-06-07 16:19:45.399065] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.701 [2024-06-07 16:19:45.399098] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.701 [2024-06-07 16:19:45.399127] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.701 [2024-06-07 16:19:45.399184] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.701 [2024-06-07 16:19:45.399212] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.701 [2024-06-07 16:19:45.399270] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.701 [2024-06-07 16:19:45.399298] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.701 [2024-06-07 16:19:45.399334] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.701 [2024-06-07 16:19:45.399364] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.701 [2024-06-07 16:19:45.399391] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.701 [2024-06-07 16:19:45.399421] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.701 [2024-06-07 16:19:45.399450] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.701 [2024-06-07 16:19:45.399476] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.701 [2024-06-07 16:19:45.399503] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.701 [2024-06-07 16:19:45.399532] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.701 [2024-06-07 16:19:45.399560] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.701 [2024-06-07 16:19:45.399588] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.701 [2024-06-07 16:19:45.399617] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.701 [2024-06-07 16:19:45.399644] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.701 [2024-06-07 16:19:45.399671] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.701 [2024-06-07 16:19:45.399720] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.701 [2024-06-07 16:19:45.399749] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.701 [2024-06-07 16:19:45.399787] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.701 
[2024-06-07 16:19:45.399816] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.701 [2024-06-07 16:19:45.399847] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.701 [2024-06-07 16:19:45.399873] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.701 [2024-06-07 16:19:45.399900] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.701 [2024-06-07 16:19:45.399926] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.701 [2024-06-07 16:19:45.399951] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.701 [2024-06-07 16:19:45.399977] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.701 [2024-06-07 16:19:45.400010] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.701 [2024-06-07 16:19:45.400041] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.701 [2024-06-07 16:19:45.400066] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.701 [2024-06-07 16:19:45.400092] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.701 [2024-06-07 16:19:45.400116] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.701 [2024-06-07 16:19:45.400147] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.701 [2024-06-07 16:19:45.400546] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.701 [2024-06-07 16:19:45.400578] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.701 [2024-06-07 16:19:45.400608] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.701 [2024-06-07 16:19:45.400636] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.701 [2024-06-07 16:19:45.400686] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.701 [2024-06-07 16:19:45.400713] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.701 [2024-06-07 16:19:45.400743] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.701 [2024-06-07 16:19:45.400771] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.701 [2024-06-07 16:19:45.400799] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.701 [2024-06-07 16:19:45.400824] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.701 [2024-06-07 16:19:45.400851] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.701 [2024-06-07 16:19:45.400895] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.701 [2024-06-07 16:19:45.400922] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.701 [2024-06-07 16:19:45.400987] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.701 [2024-06-07 16:19:45.401015] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.701 [2024-06-07 16:19:45.401043] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:11:18.701 [2024-06-07 16:19:45.401077] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.701 [2024-06-07 16:19:45.401105] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.701 [2024-06-07 16:19:45.401136] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.701 [2024-06-07 16:19:45.401163] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.701 [2024-06-07 16:19:45.401196] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.701 [2024-06-07 16:19:45.401224] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.701 [2024-06-07 16:19:45.401253] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.701 [2024-06-07 16:19:45.401280] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.701 [2024-06-07 16:19:45.401312] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.701 [2024-06-07 16:19:45.401346] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.701 [2024-06-07 16:19:45.401381] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.701 [2024-06-07 16:19:45.401421] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.701 [2024-06-07 16:19:45.401452] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.701 [2024-06-07 16:19:45.401481] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.701 [2024-06-07 16:19:45.401507] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.701 [2024-06-07 16:19:45.401539] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.701 [2024-06-07 16:19:45.401570] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.701 [2024-06-07 16:19:45.401601] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.701 [2024-06-07 16:19:45.401627] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.701 [2024-06-07 16:19:45.401657] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.701 [2024-06-07 16:19:45.401685] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.701 [2024-06-07 16:19:45.401715] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.701 [2024-06-07 16:19:45.401742] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.701 [2024-06-07 16:19:45.401775] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.701 [2024-06-07 16:19:45.401806] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.701 [2024-06-07 16:19:45.401840] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.701 [2024-06-07 16:19:45.401868] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.701 [2024-06-07 16:19:45.401897] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.701 [2024-06-07 16:19:45.401925] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:11:18.701 [2024-06-07 16:19:45.401951] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.701 [2024-06-07 16:19:45.401980] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.701 [2024-06-07 16:19:45.402007] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.701 [2024-06-07 16:19:45.402034] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.701 [2024-06-07 16:19:45.402067] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.701 [2024-06-07 16:19:45.402096] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.701 [2024-06-07 16:19:45.402126] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.701 [2024-06-07 16:19:45.402155] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.702 [2024-06-07 16:19:45.402190] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.702 [2024-06-07 16:19:45.402219] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.702 [2024-06-07 16:19:45.402253] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.702 [2024-06-07 16:19:45.402279] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.702 [2024-06-07 16:19:45.402311] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.702 [2024-06-07 16:19:45.402339] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.702 [2024-06-07 
16:19:45.402364] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.702 [2024-06-07 16:19:45.402393] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.702 [2024-06-07 16:19:45.402424] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.702 [2024-06-07 16:19:45.402451] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.702 [2024-06-07 16:19:45.402482] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.702 [2024-06-07 16:19:45.402510] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.702 [2024-06-07 16:19:45.402537] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.702 [2024-06-07 16:19:45.402565] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.702 [2024-06-07 16:19:45.402595] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.702 [2024-06-07 16:19:45.402623] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.702 [2024-06-07 16:19:45.402654] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.702 [2024-06-07 16:19:45.402683] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.702 [2024-06-07 16:19:45.402711] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.702 [2024-06-07 16:19:45.402738] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.702 [2024-06-07 16:19:45.402764] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.702 [2024-06-07 16:19:45.402788] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.702 [2024-06-07 16:19:45.402819] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.702 [2024-06-07 16:19:45.403168] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.702 [2024-06-07 16:19:45.403194] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.702 [2024-06-07 16:19:45.403221] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.702 [2024-06-07 16:19:45.403251] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.702 [2024-06-07 16:19:45.403281] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.702 [2024-06-07 16:19:45.403309] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.702 [2024-06-07 16:19:45.403341] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.702 [2024-06-07 16:19:45.403369] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.702 [2024-06-07 16:19:45.403398] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.702 [2024-06-07 16:19:45.403426] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.702 [2024-06-07 16:19:45.403459] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.702 [2024-06-07 16:19:45.403487] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.702 
[2024-06-07 16:19:45.403515] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.702 [2024-06-07 16:19:45.403544] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.702 [2024-06-07 16:19:45.403576] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.702 [2024-06-07 16:19:45.403603] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.702 [2024-06-07 16:19:45.403634] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.702 [2024-06-07 16:19:45.403659] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.702 [2024-06-07 16:19:45.403698] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.702 [2024-06-07 16:19:45.403725] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.702 [2024-06-07 16:19:45.403750] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.702 [2024-06-07 16:19:45.403782] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.702 [2024-06-07 16:19:45.403813] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.702 [2024-06-07 16:19:45.403840] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.702 [2024-06-07 16:19:45.403868] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.702 [2024-06-07 16:19:45.403899] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.702 [2024-06-07 16:19:45.403927] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.703 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:11:18.705
[2024-06-07 16:19:45.414695] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.705 [2024-06-07 16:19:45.414723] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.705 [2024-06-07 16:19:45.414764] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.705 [2024-06-07 16:19:45.414793] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.705 [2024-06-07 16:19:45.414824] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.705 [2024-06-07 16:19:45.414852] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.705 [2024-06-07 16:19:45.414884] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.705 [2024-06-07 16:19:45.414911] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.705 [2024-06-07 16:19:45.414938] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.705 [2024-06-07 16:19:45.414964] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.705 [2024-06-07 16:19:45.414993] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.705 [2024-06-07 16:19:45.415019] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.705 [2024-06-07 16:19:45.415046] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.705 [2024-06-07 16:19:45.415073] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.705 [2024-06-07 16:19:45.415101] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.705 [2024-06-07 16:19:45.415129] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.705 [2024-06-07 16:19:45.415165] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.705 [2024-06-07 16:19:45.415197] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.705 [2024-06-07 16:19:45.415223] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.705 [2024-06-07 16:19:45.415253] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.705 [2024-06-07 16:19:45.415280] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.705 [2024-06-07 16:19:45.415311] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.705 [2024-06-07 16:19:45.415343] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.705 [2024-06-07 16:19:45.415373] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.705 [2024-06-07 16:19:45.415404] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.705 [2024-06-07 16:19:45.415431] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.705 [2024-06-07 16:19:45.415461] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.705 [2024-06-07 16:19:45.415490] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.705 [2024-06-07 16:19:45.415519] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:11:18.705 [2024-06-07 16:19:45.415548] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.705 [2024-06-07 16:19:45.415585] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.705 [2024-06-07 16:19:45.415612] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.705 [2024-06-07 16:19:45.415643] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.705 [2024-06-07 16:19:45.415670] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.706 [2024-06-07 16:19:45.415698] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.706 [2024-06-07 16:19:45.415725] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.706 [2024-06-07 16:19:45.415751] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.706 [2024-06-07 16:19:45.415780] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.706 [2024-06-07 16:19:45.415807] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.706 [2024-06-07 16:19:45.415837] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.706 [2024-06-07 16:19:45.416394] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.706 [2024-06-07 16:19:45.416426] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.706 [2024-06-07 16:19:45.416451] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.706 [2024-06-07 16:19:45.416485] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.706 [2024-06-07 16:19:45.416529] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.706 [2024-06-07 16:19:45.416557] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.706 [2024-06-07 16:19:45.416613] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.706 [2024-06-07 16:19:45.416638] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.706 [2024-06-07 16:19:45.416668] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.706 [2024-06-07 16:19:45.416695] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.706 [2024-06-07 16:19:45.416738] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.706 [2024-06-07 16:19:45.416764] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.706 [2024-06-07 16:19:45.416791] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.706 [2024-06-07 16:19:45.416819] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.706 [2024-06-07 16:19:45.416846] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.706 [2024-06-07 16:19:45.416874] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.706 [2024-06-07 16:19:45.416903] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.706 [2024-06-07 16:19:45.416932] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:11:18.706 [2024-06-07 16:19:45.416990] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.706 [2024-06-07 16:19:45.417019] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.706 [2024-06-07 16:19:45.417062] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.706 [2024-06-07 16:19:45.417092] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.706 [2024-06-07 16:19:45.417152] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.706 [2024-06-07 16:19:45.417181] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.706 [2024-06-07 16:19:45.417232] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.706 [2024-06-07 16:19:45.417261] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.706 [2024-06-07 16:19:45.417311] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.706 [2024-06-07 16:19:45.417339] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.706 [2024-06-07 16:19:45.417369] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.706 [2024-06-07 16:19:45.417398] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.706 [2024-06-07 16:19:45.417435] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.706 [2024-06-07 16:19:45.417467] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.706 [2024-06-07 
16:19:45.417499] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.706 [2024-06-07 16:19:45.417528] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.706 [2024-06-07 16:19:45.417833] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.706 [2024-06-07 16:19:45.417861] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.706 [2024-06-07 16:19:45.417896] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.706 [2024-06-07 16:19:45.417922] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.706 [2024-06-07 16:19:45.417956] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.706 [2024-06-07 16:19:45.417990] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.706 [2024-06-07 16:19:45.418019] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.706 [2024-06-07 16:19:45.418045] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.706 [2024-06-07 16:19:45.418072] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.706 [2024-06-07 16:19:45.418099] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.706 [2024-06-07 16:19:45.418129] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.706 [2024-06-07 16:19:45.418154] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.706 [2024-06-07 16:19:45.418181] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.706 [2024-06-07 16:19:45.418208] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.706 [2024-06-07 16:19:45.418233] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.706 [2024-06-07 16:19:45.418260] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.706 [2024-06-07 16:19:45.418288] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.706 [2024-06-07 16:19:45.418319] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.706 [2024-06-07 16:19:45.418347] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.706 [2024-06-07 16:19:45.418374] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.706 [2024-06-07 16:19:45.418403] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.706 [2024-06-07 16:19:45.418432] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.706 [2024-06-07 16:19:45.418464] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.706 [2024-06-07 16:19:45.418496] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.706 [2024-06-07 16:19:45.418530] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.706 [2024-06-07 16:19:45.418558] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.706 [2024-06-07 16:19:45.418588] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.706 
[2024-06-07 16:19:45.418613] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.706 [2024-06-07 16:19:45.418643] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.706 [2024-06-07 16:19:45.418671] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.706 [2024-06-07 16:19:45.418894] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.706 [2024-06-07 16:19:45.418923] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.706 [2024-06-07 16:19:45.418951] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.706 [2024-06-07 16:19:45.418979] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.706 [2024-06-07 16:19:45.419029] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.706 [2024-06-07 16:19:45.419056] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.706 [2024-06-07 16:19:45.419114] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.706 [2024-06-07 16:19:45.419143] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.706 [2024-06-07 16:19:45.419171] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.706 [2024-06-07 16:19:45.419200] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.706 [2024-06-07 16:19:45.419229] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.706 [2024-06-07 16:19:45.419263] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.706 [2024-06-07 16:19:45.419301] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.706 [2024-06-07 16:19:45.419332] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.706 [2024-06-07 16:19:45.419359] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.706 [2024-06-07 16:19:45.419387] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.706 [2024-06-07 16:19:45.419420] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.706 [2024-06-07 16:19:45.419451] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.707 [2024-06-07 16:19:45.419478] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.707 [2024-06-07 16:19:45.419508] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.707 [2024-06-07 16:19:45.419532] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.707 [2024-06-07 16:19:45.419561] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.707 [2024-06-07 16:19:45.419588] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.707 [2024-06-07 16:19:45.419618] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.707 [2024-06-07 16:19:45.419652] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.707 [2024-06-07 16:19:45.419680] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:11:18.707 [2024-06-07 16:19:45.419710] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.707 [2024-06-07 16:19:45.419742] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.707 [2024-06-07 16:19:45.419773] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.707 [2024-06-07 16:19:45.419802] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.707 [2024-06-07 16:19:45.419832] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.707 [2024-06-07 16:19:45.419863] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.707 [2024-06-07 16:19:45.419891] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.707 [2024-06-07 16:19:45.419918] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.707 [2024-06-07 16:19:45.419949] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.707 [2024-06-07 16:19:45.419978] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.707 [2024-06-07 16:19:45.420008] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.707 [2024-06-07 16:19:45.420038] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.707 [2024-06-07 16:19:45.420069] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.707 [2024-06-07 16:19:45.420098] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.707 [2024-06-07 16:19:45.420127] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.707 [2024-06-07 16:19:45.420155] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.707 [2024-06-07 16:19:45.420183] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.707 [2024-06-07 16:19:45.420214] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.707 [2024-06-07 16:19:45.420245] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.707 [2024-06-07 16:19:45.420274] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.707 [2024-06-07 16:19:45.420318] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.707 [2024-06-07 16:19:45.420348] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.707 [2024-06-07 16:19:45.420383] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.707 [2024-06-07 16:19:45.420418] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.707 [2024-06-07 16:19:45.420453] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.707 [2024-06-07 16:19:45.420481] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.707 [2024-06-07 16:19:45.420525] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.707 [2024-06-07 16:19:45.420552] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.707 [2024-06-07 16:19:45.420576] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:11:18.707 [2024-06-07 16:19:45.420606] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.707 [2024-06-07 16:19:45.420637] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.707 [2024-06-07 16:19:45.420667] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.707 [2024-06-07 16:19:45.420696] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.707 [2024-06-07 16:19:45.420722] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.707 [2024-06-07 16:19:45.420753] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.707 [2024-06-07 16:19:45.420779] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.707 [2024-06-07 16:19:45.420806] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.707 [2024-06-07 16:19:45.421039] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.707 [2024-06-07 16:19:45.421068] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.707 [2024-06-07 16:19:45.421095] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.707 [2024-06-07 16:19:45.421123] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.707 [2024-06-07 16:19:45.421147] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.707 [2024-06-07 16:19:45.421178] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.707 [2024-06-07 
16:19:45.421206] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.707 [2024-06-07 16:19:45.421234] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.707 [2024-06-07 16:19:45.421260] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.707 [2024-06-07 16:19:45.421288] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.707 [2024-06-07 16:19:45.421315] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.707 [2024-06-07 16:19:45.421342] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.707 [2024-06-07 16:19:45.421371] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.707 [2024-06-07 16:19:45.421405] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.707 [2024-06-07 16:19:45.421436] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.707 [2024-06-07 16:19:45.421465] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.707 [2024-06-07 16:19:45.421495] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.707 [2024-06-07 16:19:45.421522] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.707 [2024-06-07 16:19:45.421550] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.707 [2024-06-07 16:19:45.421581] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.707 [2024-06-07 16:19:45.421608] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.707 [2024-06-07 16:19:45.421635] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.707 [2024-06-07 16:19:45.432301] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.711 [2024-06-07 16:19:45.432326] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd:
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.711 [2024-06-07 16:19:45.432353] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.711 [2024-06-07 16:19:45.432383] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.711 [2024-06-07 16:19:45.432418] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.711 [2024-06-07 16:19:45.432445] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.711 [2024-06-07 16:19:45.432475] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.711 [2024-06-07 16:19:45.432506] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.711 [2024-06-07 16:19:45.432536] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.711 [2024-06-07 16:19:45.432567] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.711 [2024-06-07 16:19:45.432595] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.711 [2024-06-07 16:19:45.432625] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.711 [2024-06-07 16:19:45.432653] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.711 [2024-06-07 16:19:45.432682] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.711 [2024-06-07 16:19:45.432709] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.711 [2024-06-07 16:19:45.432732] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.711 
[2024-06-07 16:19:45.432764] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.711 [2024-06-07 16:19:45.432790] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.711 [2024-06-07 16:19:45.432818] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.711 [2024-06-07 16:19:45.432847] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.711 [2024-06-07 16:19:45.432879] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.711 [2024-06-07 16:19:45.432910] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.711 [2024-06-07 16:19:45.432937] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.711 [2024-06-07 16:19:45.432966] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.711 [2024-06-07 16:19:45.432997] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.711 [2024-06-07 16:19:45.433026] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.711 [2024-06-07 16:19:45.433055] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.711 [2024-06-07 16:19:45.433084] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.711 [2024-06-07 16:19:45.433111] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.711 [2024-06-07 16:19:45.433139] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.711 [2024-06-07 16:19:45.433167] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.711 [2024-06-07 16:19:45.433198] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.711 [2024-06-07 16:19:45.433229] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.711 [2024-06-07 16:19:45.433284] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.711 [2024-06-07 16:19:45.433313] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.711 [2024-06-07 16:19:45.433366] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.711 [2024-06-07 16:19:45.433396] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.711 [2024-06-07 16:19:45.433430] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.711 [2024-06-07 16:19:45.433459] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.711 [2024-06-07 16:19:45.433491] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.711 [2024-06-07 16:19:45.433523] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.711 [2024-06-07 16:19:45.433551] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.711 [2024-06-07 16:19:45.433608] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.711 [2024-06-07 16:19:45.433638] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.711 [2024-06-07 16:19:45.433672] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:11:18.711 [2024-06-07 16:19:45.433701] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.711 [2024-06-07 16:19:45.433734] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.711 [2024-06-07 16:19:45.433761] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.711 [2024-06-07 16:19:45.433789] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.711 [2024-06-07 16:19:45.433824] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.711 [2024-06-07 16:19:45.433852] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.711 [2024-06-07 16:19:45.433879] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.711 [2024-06-07 16:19:45.433907] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.711 [2024-06-07 16:19:45.433935] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.711 [2024-06-07 16:19:45.433968] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.711 [2024-06-07 16:19:45.434001] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.711 [2024-06-07 16:19:45.434348] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.711 [2024-06-07 16:19:45.434380] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.711 [2024-06-07 16:19:45.434411] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.711 [2024-06-07 16:19:45.434437] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.711 [2024-06-07 16:19:45.434465] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.711 [2024-06-07 16:19:45.434498] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.711 [2024-06-07 16:19:45.434532] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.711 [2024-06-07 16:19:45.434565] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.711 [2024-06-07 16:19:45.434600] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.711 [2024-06-07 16:19:45.434629] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.711 [2024-06-07 16:19:45.434656] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.711 [2024-06-07 16:19:45.434693] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.711 [2024-06-07 16:19:45.434720] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.712 [2024-06-07 16:19:45.434748] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.712 [2024-06-07 16:19:45.434776] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.712 [2024-06-07 16:19:45.434801] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.712 [2024-06-07 16:19:45.434832] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.712 [2024-06-07 16:19:45.434862] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:11:18.712 [2024-06-07 16:19:45.434892] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.712 [2024-06-07 16:19:45.434923] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.712 [2024-06-07 16:19:45.434949] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.712 [2024-06-07 16:19:45.434977] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.712 [2024-06-07 16:19:45.435009] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.712 [2024-06-07 16:19:45.435035] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.712 [2024-06-07 16:19:45.435062] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.712 [2024-06-07 16:19:45.435089] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.712 [2024-06-07 16:19:45.435117] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.712 [2024-06-07 16:19:45.435146] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.712 [2024-06-07 16:19:45.435183] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.712 [2024-06-07 16:19:45.435211] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.712 [2024-06-07 16:19:45.435241] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.712 [2024-06-07 16:19:45.435273] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.712 [2024-06-07 
16:19:45.435296] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.712 [2024-06-07 16:19:45.435326] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.712 [2024-06-07 16:19:45.435356] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.712 [2024-06-07 16:19:45.435383] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.712 [2024-06-07 16:19:45.435412] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.712 [2024-06-07 16:19:45.435436] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.712 [2024-06-07 16:19:45.435461] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.712 [2024-06-07 16:19:45.435485] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.712 [2024-06-07 16:19:45.435508] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.712 [2024-06-07 16:19:45.435532] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.712 [2024-06-07 16:19:45.435556] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.712 [2024-06-07 16:19:45.435580] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.712 [2024-06-07 16:19:45.435604] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.712 [2024-06-07 16:19:45.435628] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.712 [2024-06-07 16:19:45.435651] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.712 [2024-06-07 16:19:45.435675] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.712 [2024-06-07 16:19:45.435701] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.712 [2024-06-07 16:19:45.435733] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.712 [2024-06-07 16:19:45.435763] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.712 [2024-06-07 16:19:45.435792] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.712 [2024-06-07 16:19:45.435826] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.712 [2024-06-07 16:19:45.435854] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.712 [2024-06-07 16:19:45.435878] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.712 [2024-06-07 16:19:45.435902] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.712 [2024-06-07 16:19:45.435927] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.712 [2024-06-07 16:19:45.435950] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.712 [2024-06-07 16:19:45.435974] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.712 [2024-06-07 16:19:45.435998] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.712 [2024-06-07 16:19:45.436021] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.712 
[2024-06-07 16:19:45.436044] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.712 [2024-06-07 16:19:45.436069] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.712 [2024-06-07 16:19:45.436091] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.712 [2024-06-07 16:19:45.436195] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.712 [2024-06-07 16:19:45.436219] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.712 [2024-06-07 16:19:45.436242] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.712 [2024-06-07 16:19:45.436445] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.712 [2024-06-07 16:19:45.436472] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.712 [2024-06-07 16:19:45.436497] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.712 [2024-06-07 16:19:45.436521] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.712 [2024-06-07 16:19:45.436545] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.712 [2024-06-07 16:19:45.436568] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.712 [2024-06-07 16:19:45.436597] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.712 [2024-06-07 16:19:45.436629] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.712 [2024-06-07 16:19:45.436660] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.712 [2024-06-07 16:19:45.436689] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.712 [2024-06-07 16:19:45.436717] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.712 [2024-06-07 16:19:45.436748] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.712 [2024-06-07 16:19:45.436780] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.712 [2024-06-07 16:19:45.436810] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.712 [2024-06-07 16:19:45.436841] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.712 [2024-06-07 16:19:45.436873] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.712 [2024-06-07 16:19:45.436903] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.712 [2024-06-07 16:19:45.436935] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.712 [2024-06-07 16:19:45.436964] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.712 [2024-06-07 16:19:45.436994] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.712 [2024-06-07 16:19:45.437023] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.712 [2024-06-07 16:19:45.437055] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.712 [2024-06-07 16:19:45.437084] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:11:18.712 [2024-06-07 16:19:45.437113] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.712 [2024-06-07 16:19:45.437147] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.712 [2024-06-07 16:19:45.437175] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.712 [2024-06-07 16:19:45.437210] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.712 [2024-06-07 16:19:45.437240] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.712 [2024-06-07 16:19:45.437271] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.712 [2024-06-07 16:19:45.437299] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.712 [2024-06-07 16:19:45.437329] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.712 [2024-06-07 16:19:45.437358] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.712 [2024-06-07 16:19:45.437420] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.712 [2024-06-07 16:19:45.437451] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.712 [2024-06-07 16:19:45.437480] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.712 [2024-06-07 16:19:45.437507] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.712 [2024-06-07 16:19:45.437539] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.712 [2024-06-07 16:19:45.437570] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.712 [2024-06-07 16:19:45.437598] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.712 [2024-06-07 16:19:45.437629] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.712 [2024-06-07 16:19:45.437655] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.712 [2024-06-07 16:19:45.437689] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.712 [2024-06-07 16:19:45.437721] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.712 [2024-06-07 16:19:45.437748] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.713 [2024-06-07 16:19:45.437778] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.713 [2024-06-07 16:19:45.437809] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.713 [2024-06-07 16:19:45.437838] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.713 [2024-06-07 16:19:45.437866] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.713 [2024-06-07 16:19:45.437894] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.713 [2024-06-07 16:19:45.437924] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.713 [2024-06-07 16:19:45.437957] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.713 [2024-06-07 16:19:45.437985] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:11:18.713 [2024-06-07 16:19:45.438016] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.713 [2024-06-07 16:19:45.438048] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.713 [2024-06-07 16:19:45.438076] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.713 [2024-06-07 16:19:45.438103] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.713 [2024-06-07 16:19:45.438134] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.713 [2024-06-07 16:19:45.438161] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.713 [2024-06-07 16:19:45.438188] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.713 [2024-06-07 16:19:45.438217] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.713 [2024-06-07 16:19:45.438590] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.713 [2024-06-07 16:19:45.438623] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.713 [2024-06-07 16:19:45.438656] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.713 [2024-06-07 16:19:45.438692] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.713 [2024-06-07 16:19:45.438727] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.713 [2024-06-07 16:19:45.438760] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.713 [2024-06-07 
16:19:45.438793] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:11:18.714 Message suppressed 999 times: Read completed with error (sct=0, sc=15)
00:11:18.715 [2024-06-07 
16:19:45.445361] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:18.715 [2024-06-07 16:19:45.445550] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:19.657 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:19.657 16:19:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:19.657 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:19.918 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:19.918 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:19.918 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:19.918 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:19.918 16:19:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:11:19.918 16:19:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:11:20.178 true 00:11:20.178 16:19:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2968093 00:11:20.178 16:19:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:21.121 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:21.121 16:19:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:21.121 16:19:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # 
null_size=1027 00:11:21.121 16:19:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:11:21.381 true 00:11:21.381 16:19:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2968093 00:11:21.382 16:19:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:21.382 16:19:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:21.641 16:19:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:11:21.641 16:19:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:11:21.641 true 00:11:21.900 16:19:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2968093 00:11:21.900 16:19:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:21.900 16:19:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:22.164 16:19:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:11:22.164 16:19:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:11:22.164 true 00:11:22.164 16:19:48 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@44 -- # kill -0 2968093 00:11:22.164 16:19:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:22.424 16:19:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:22.684 16:19:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:11:22.684 16:19:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:11:22.684 true 00:11:22.684 16:19:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2968093 00:11:22.684 16:19:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:22.946 16:19:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:23.207 16:19:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:11:23.207 16:19:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:11:23.207 true 00:11:23.207 16:19:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2968093 00:11:23.207 16:19:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:23.468 16:19:50 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:23.728 16:19:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032 00:11:23.728 16:19:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:11:23.728 true 00:11:23.728 16:19:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2968093 00:11:23.728 16:19:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:23.988 16:19:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:23.988 16:19:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1033 00:11:24.249 16:19:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:11:24.249 true 00:11:24.249 16:19:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2968093 00:11:24.249 16:19:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:24.510 16:19:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:24.510 16:19:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1034 
00:11:24.510 16:19:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034
00:11:24.770 true
00:11:24.770 16:19:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2968093
00:11:24.770 16:19:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:11:25.031 16:19:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:11:25.031 16:19:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1035
00:11:25.031 16:19:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1035
00:11:25.292 true
00:11:25.292 16:19:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2968093
00:11:25.292 16:19:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:11:26.235 16:19:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:11:26.235 16:19:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1036
00:11:26.235 16:19:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1036
00:11:26.495 true
00:11:26.495 16:19:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2968093
00:11:26.495 16:19:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:11:26.756 16:19:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:11:26.756 16:19:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1037
00:11:26.756 16:19:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1037
00:11:27.016 true
00:11:27.016 16:19:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2968093
00:11:27.016 16:19:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:11:27.276 16:19:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:11:27.276 16:19:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1038
00:11:27.276 16:19:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1038
00:11:27.537 true
00:11:27.537 16:19:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2968093
00:11:27.537 16:19:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:11:27.797 16:19:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:11:27.797 16:19:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1039
00:11:27.797 16:19:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1039
00:11:28.058 true
00:11:28.058 16:19:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2968093
00:11:28.058 16:19:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:11:28.058 16:19:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:11:28.319 16:19:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1040
00:11:28.319 16:19:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1040
00:11:28.613 true
00:11:28.613 16:19:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2968093
00:11:28.613 16:19:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:11:28.613 16:19:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:11:28.874 16:19:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1041
00:11:28.874 16:19:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1041
00:11:28.874 true
00:11:28.874 16:19:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2968093
00:11:28.874 16:19:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:11:29.134 16:19:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:11:29.394 16:19:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1042
00:11:29.394 16:19:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1042
00:11:29.394 true
00:11:29.394 16:19:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2968093
00:11:29.394 16:19:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:11:30.336 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:11:30.336 16:19:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:11:30.336 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:11:30.597 16:19:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1043
00:11:30.597 16:19:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1043
00:11:30.597 true
00:11:30.597 16:19:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2968093
00:11:30.597 16:19:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:11:30.857 16:19:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:11:31.119 16:19:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1044
00:11:31.119 16:19:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1044
00:11:31.119 true
00:11:31.119 16:19:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2968093
00:11:31.119 16:19:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:11:31.380 16:19:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:11:31.641 16:19:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1045
00:11:31.641 16:19:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1045
00:11:31.641 true
00:11:31.641 16:19:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2968093
00:11:31.641 16:19:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:11:31.901 16:19:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:11:31.901 16:19:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1046
00:11:31.901 16:19:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1046
00:11:32.162 true
00:11:32.162 16:19:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2968093
00:11:32.162 16:19:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:11:32.423 16:19:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:11:32.423 16:19:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1047
00:11:32.423 16:19:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1047
00:11:32.683 true
00:11:32.683 16:19:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2968093
00:11:32.683 16:19:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:11:32.944 16:19:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:11:32.944 16:19:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1048
00:11:32.944 16:19:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1048
00:11:33.204 true
00:11:33.204 16:19:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2968093
00:11:33.204 16:19:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:11:33.464 16:20:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:11:33.464 16:20:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1049
00:11:33.464 16:20:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1049
00:11:33.724 true
00:11:33.724 16:20:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2968093
00:11:33.724 16:20:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:11:33.984 16:20:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:11:33.984 16:20:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1050
00:11:33.984 16:20:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1050
00:11:34.244 true
00:11:34.244 16:20:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2968093
00:11:34.244 16:20:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:11:34.244 Initializing NVMe Controllers
00:11:34.244 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:11:34.244 Controller IO queue size 128, less than required.
00:11:34.244 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:11:34.244 Controller IO queue size 128, less than required.
00:11:34.244 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:11:34.244 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:11:34.244 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:11:34.244 Initialization complete. Launching workers.
00:11:34.244 ========================================================
00:11:34.244 Latency(us)
00:11:34.244 Device Information : IOPS MiB/s Average min max
00:11:34.244 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1609.36 0.79 20974.83 1851.23 1114476.85
00:11:34.244 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 7789.50 3.80 16379.58 2446.83 494066.44
00:11:34.244 ========================================================
00:11:34.244 Total : 9398.86 4.59 17166.43 1851.23 1114476.85
00:11:34.244
00:11:34.504 16:20:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:11:34.504 16:20:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1051
00:11:34.504 16:20:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1051
00:11:34.764 true
00:11:34.764 16:20:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2968093
00:11:34.764 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (2968093) - No such process
00:11:34.764 16:20:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 2968093
00:11:34.764 16:20:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:11:34.764 16:20:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:11:35.024 16:20:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:11:35.024 16:20:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:11:35.024 16:20:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:11:35.024 16:20:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:11:35.024 16:20:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096
00:11:35.024 null0
00:11:35.024 16:20:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:11:35.024 16:20:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:11:35.024 16:20:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096
00:11:35.285 null1
00:11:35.285 16:20:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:11:35.285 16:20:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:11:35.285 16:20:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096
00:11:35.545 null2
00:11:35.545 16:20:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:11:35.545 16:20:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:11:35.545 16:20:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096
00:11:35.545 null3
00:11:35.545 16:20:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:11:35.545 16:20:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:11:35.545 16:20:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096
00:11:35.806 null4
00:11:35.806 16:20:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:11:35.806 16:20:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:11:35.806 16:20:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096
00:11:35.806 null5
00:11:35.806 16:20:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:11:35.806 16:20:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:11:35.806 16:20:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096
00:11:36.067 null6
00:11:36.067 16:20:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:11:36.067 16:20:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:11:36.067 16:20:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096
00:11:36.327 null7
00:11:36.327 16:20:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:11:36.327 16:20:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:11:36.327 16:20:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 ))
00:11:36.327 16:20:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:11:36.327 16:20:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:11:36.327 16:20:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:11:36.327 16:20:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:11:36.327 16:20:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0
00:11:36.327 16:20:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0
00:11:36.327 16:20:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:11:36.327 16:20:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:11:36.327 16:20:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:11:36.327 16:20:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:11:36.327 16:20:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:11:36.327 16:20:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:11:36.327 16:20:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1
00:11:36.327 16:20:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1
00:11:36.327 16:20:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:11:36.327 16:20:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:11:36.327 16:20:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:11:36.327 16:20:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:11:36.327 16:20:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:11:36.327 16:20:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:11:36.327 16:20:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2
00:11:36.328 16:20:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2
00:11:36.328 16:20:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:11:36.328 16:20:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:11:36.328 16:20:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:11:36.328 16:20:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:11:36.328 16:20:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:11:36.328 16:20:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:11:36.328 16:20:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3
00:11:36.328 16:20:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3
00:11:36.328 16:20:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:11:36.328 16:20:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:11:36.328 16:20:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:11:36.328 16:20:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:11:36.328 16:20:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:11:36.328 16:20:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:11:36.328 16:20:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4
00:11:36.328 16:20:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4
00:11:36.328 16:20:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:11:36.328 16:20:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:11:36.328 16:20:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:11:36.328 16:20:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:11:36.328 16:20:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:11:36.328 16:20:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:11:36.328 16:20:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5
00:11:36.328 16:20:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5
00:11:36.328 16:20:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:11:36.328 16:20:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6
00:11:36.328 16:20:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:11:36.328 16:20:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:11:36.328 16:20:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:11:36.328 16:20:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6
00:11:36.328 16:20:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:11:36.328 16:20:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:11:36.328 16:20:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:11:36.328 16:20:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:11:36.328 16:20:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:11:36.328 16:20:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:11:36.328 16:20:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:11:36.328 16:20:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:11:36.328 16:20:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 2974663 2974665 2974669 2974671 2974676 2974680 2974681 2974684
00:11:36.328 16:20:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7
00:11:36.328 16:20:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7
00:11:36.328 16:20:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:11:36.328 16:20:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:11:36.328 16:20:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:11:36.328 16:20:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:11:36.589 16:20:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:11:36.589 16:20:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:11:36.589 16:20:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:11:36.589 16:20:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:11:36.589 16:20:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:11:36.589 16:20:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:11:36.589 16:20:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:11:36.589 16:20:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:11:36.589 16:20:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:11:36.589 16:20:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:11:36.589 16:20:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:11:36.589 16:20:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:11:36.589 16:20:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:11:36.589 16:20:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:11:36.589 16:20:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:11:36.589 16:20:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:11:36.589 16:20:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:11:36.589 16:20:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:11:36.589 16:20:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:11:36.589 16:20:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:11:36.590 16:20:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:11:36.590 16:20:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:11:36.590 16:20:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:11:36.590 16:20:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:11:36.590 16:20:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:11:36.590 16:20:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:11:36.590 16:20:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:11:36.590 16:20:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:11:36.590 16:20:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:11:36.590 16:20:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:11:36.590 16:20:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:11:36.850 16:20:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:11:36.850 16:20:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:11:36.850 16:20:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:11:36.850 16:20:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:11:36.850 16:20:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:11:36.850 16:20:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:11:36.850 16:20:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:11:36.850 16:20:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:11:36.850 16:20:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:11:36.850 16:20:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:11:36.850 16:20:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:11:37.110 16:20:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:11:37.110 16:20:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:11:37.110 16:20:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:11:37.110 16:20:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:11:37.110 16:20:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:11:37.110 16:20:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:11:37.110 16:20:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:11:37.110 16:20:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:11:37.110 16:20:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:11:37.110 16:20:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:11:37.110 16:20:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:11:37.110 16:20:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:11:37.110 16:20:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:11:37.110 16:20:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:11:37.110 16:20:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:11:37.110 16:20:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:11:37.110 16:20:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:11:37.110 16:20:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:11:37.110 16:20:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:11:37.110 16:20:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:11:37.110 16:20:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:11:37.110 16:20:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:11:37.110 16:20:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:11:37.110 16:20:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:11:37.110 16:20:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:11:37.110 16:20:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:11:37.110 16:20:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:11:37.110 16:20:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:11:37.110 16:20:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:11:37.371 16:20:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:11:37.371 16:20:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:11:37.371 16:20:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:11:37.371 16:20:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:11:37.371 16:20:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:11:37.371 16:20:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:11:37.371 16:20:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:11:37.371 16:20:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:11:37.371 16:20:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:11:37.371 16:20:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:11:37.371 16:20:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:11:37.371 16:20:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:11:37.371 16:20:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:11:37.371 16:20:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:11:37.371 16:20:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:11:37.371 16:20:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:11:37.371 16:20:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:11:37.371 16:20:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:11:37.371 16:20:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:11:37.371 16:20:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:11:37.371 16:20:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:11:37.371 16:20:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:11:37.371 16:20:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:11:37.371 16:20:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:11:37.371 16:20:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:11:37.371 16:20:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:11:37.371 16:20:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:11:37.632 16:20:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:11:37.632 16:20:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:11:37.632 16:20:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:11:37.632 16:20:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:11:37.632 16:20:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:11:37.632 16:20:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:11:37.632 16:20:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:11:37.632 16:20:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:11:37.632 16:20:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:11:37.632 16:20:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:11:37.632 16:20:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:11:37.632 16:20:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:11:37.632 16:20:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:11:37.632 16:20:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:11:37.632 16:20:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:11:37.632 16:20:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:11:37.632 16:20:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:11:37.632 16:20:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:11:37.632 16:20:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:11:37.632 16:20:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:11:37.632 16:20:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:11:37.632 16:20:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:11:37.632 16:20:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:11:37.632 16:20:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:11:37.632 16:20:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:11:37.893 16:20:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:11:37.893 16:20:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:11:37.893 16:20:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:11:37.893 16:20:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:11:37.893 16:20:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:11:37.893 16:20:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:11:37.893 16:20:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:11:37.893 16:20:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:11:37.893 16:20:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:11:37.893 16:20:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:11:37.893 16:20:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:11:37.893 16:20:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:11:37.893 16:20:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:11:37.893 16:20:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:11:37.893 16:20:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:11:37.893 16:20:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:11:37.893 16:20:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:11:37.893 16:20:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:11:38.154 16:20:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:11:38.154 16:20:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:11:38.154 16:20:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:11:38.154 16:20:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:11:38.154 16:20:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:11:38.154 16:20:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:11:38.154 16:20:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:11:38.154 16:20:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:11:38.154 16:20:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:11:38.154 16:20:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:11:38.154 16:20:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:11:38.154 16:20:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:11:38.154 16:20:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:11:38.154 16:20:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:11:38.154 16:20:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:11:38.154 16:20:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:11:38.154 16:20:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:11:38.154 16:20:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:11:38.154 16:20:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:11:38.154 16:20:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:11:38.154 16:20:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:11:38.154 16:20:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:11:38.154 16:20:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:11:38.154 16:20:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:11:38.154 16:20:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:11:38.154 16:20:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:11:38.154 16:20:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:11:38.154 16:20:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:11:38.154 16:20:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:11:38.415 16:20:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:11:38.415 16:20:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:11:38.415 16:20:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:11:38.415 16:20:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:11:38.415 16:20:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:11:38.415 16:20:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:11:38.415 16:20:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:11:38.415 16:20:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:11:38.415 16:20:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:11:38.415 16:20:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:11:38.415 16:20:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:11:38.415 16:20:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:11:38.415 16:20:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:11:38.415 16:20:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:11:38.415 16:20:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:11:38.415 16:20:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:11:38.415 16:20:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:11:38.415 16:20:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:11:38.415 16:20:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:11:38.415 16:20:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:11:38.415 16:20:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:11:38.415 16:20:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:11:38.415 16:20:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:11:38.415 16:20:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:11:38.676 16:20:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:11:38.676 16:20:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:11:38.676 16:20:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:11:38.676 16:20:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:11:38.676 16:20:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:11:38.676 16:20:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:11:38.676 16:20:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:11:38.676 16:20:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:11:38.676 16:20:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:11:38.676 16:20:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:11:38.676 16:20:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:11:38.676 16:20:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:11:38.676 16:20:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:11:38.676 16:20:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:11:38.676 16:20:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:11:38.676 16:20:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:11:38.676 16:20:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:11:38.676 16:20:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:11:38.676 16:20:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:11:38.676 16:20:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:11:38.676 16:20:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:11:38.676 16:20:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:11:38.936 16:20:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:11:38.936 16:20:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:11:38.936 16:20:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:11:38.936 16:20:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:11:38.936 16:20:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:11:38.936 16:20:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:11:38.936 16:20:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:11:38.936 16:20:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:11:38.936 16:20:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:11:38.936 16:20:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:11:38.936 16:20:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:11:38.936 16:20:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:11:38.936 16:20:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:11:38.936 16:20:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:11:38.936 16:20:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:11:38.936 16:20:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:11:38.936 16:20:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:11:38.936 16:20:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:11:38.936 16:20:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:11:38.936 16:20:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:11:38.936 16:20:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:11:38.936 16:20:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:11:38.936 16:20:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:11:38.936 16:20:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:11:39.196 16:20:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:11:39.196 16:20:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:11:39.196 16:20:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:11:39.196 16:20:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:11:39.196 16:20:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:11:39.196 16:20:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:11:39.196 16:20:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:11:39.196 16:20:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:11:39.196 16:20:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:11:39.196 16:20:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:11:39.196 16:20:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:11:39.196 16:20:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:11:39.196 16:20:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:11:39.196 16:20:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:11:39.196 16:20:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:11:39.196 16:20:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:11:39.196 16:20:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:11:39.196 16:20:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:11:39.196 16:20:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:11:39.196 16:20:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:11:39.196 16:20:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:11:39.196 16:20:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:11:39.196 16:20:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:11:39.196 16:20:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:11:39.196 16:20:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:11:39.196 16:20:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:11:39.196 16:20:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:11:39.196 16:20:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:11:39.196 16:20:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:11:39.455 16:20:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:11:39.455 16:20:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:11:39.455 16:20:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:11:39.455 16:20:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:11:39.455 16:20:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:11:39.455 16:20:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:11:39.455 16:20:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:11:39.455 16:20:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5
nqn.2016-06.io.spdk:cnode1 null4 00:11:39.455 16:20:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:39.455 16:20:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:39.455 16:20:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:11:39.455 16:20:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:39.455 16:20:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:39.455 16:20:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:11:39.455 16:20:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:39.455 16:20:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:39.455 16:20:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:11:39.455 16:20:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:11:39.455 16:20:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:39.455 16:20:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:39.455 16:20:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:11:39.455 16:20:06 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:39.455 16:20:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:39.716 16:20:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:39.716 16:20:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:39.716 16:20:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:39.716 16:20:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:11:39.716 16:20:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:11:39.716 16:20:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:39.716 16:20:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:39.716 16:20:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:39.716 16:20:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:11:39.716 16:20:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:39.716 16:20:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:39.716 16:20:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:39.716 16:20:06 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:39.716 16:20:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:39.716 16:20:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:39.977 16:20:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:39.977 16:20:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:39.977 16:20:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:39.977 16:20:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:39.977 16:20:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:11:39.977 16:20:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:11:39.977 16:20:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:39.977 16:20:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # sync 00:11:39.977 16:20:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:39.977 16:20:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@120 -- # set +e 00:11:39.977 16:20:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:39.977 16:20:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:39.977 rmmod nvme_tcp 00:11:39.977 rmmod nvme_fabrics 00:11:39.977 rmmod nvme_keyring 00:11:39.977 16:20:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:39.977 16:20:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set -e 00:11:39.977 16:20:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # return 0 00:11:39.977 16:20:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # '[' -n 2967672 ']' 00:11:39.977 
16:20:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # killprocess 2967672 00:11:39.977 16:20:06 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@949 -- # '[' -z 2967672 ']' 00:11:39.977 16:20:06 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # kill -0 2967672 00:11:39.977 16:20:06 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # uname 00:11:39.977 16:20:06 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:11:39.977 16:20:06 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2967672 00:11:39.977 16:20:06 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:11:39.977 16:20:06 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:11:39.977 16:20:06 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2967672' 00:11:39.977 killing process with pid 2967672 00:11:39.977 16:20:06 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@968 -- # kill 2967672 00:11:39.977 16:20:06 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # wait 2967672 00:11:40.238 16:20:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:40.238 16:20:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:40.238 16:20:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:40.238 16:20:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:40.238 16:20:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:40.238 16:20:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:40.238 16:20:06 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 14> /dev/null' 00:11:40.238 16:20:06 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:42.161 16:20:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:42.161 00:11:42.161 real 0m48.021s 00:11:42.161 user 3m12.512s 00:11:42.161 sys 0m15.565s 00:11:42.161 16:20:08 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1125 -- # xtrace_disable 00:11:42.161 16:20:08 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:11:42.161 ************************************ 00:11:42.161 END TEST nvmf_ns_hotplug_stress 00:11:42.161 ************************************ 00:11:42.161 16:20:08 nvmf_tcp -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:11:42.161 16:20:08 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:11:42.161 16:20:08 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:11:42.161 16:20:08 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:42.161 ************************************ 00:11:42.161 START TEST nvmf_connect_stress 00:11:42.161 ************************************ 00:11:42.161 16:20:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:11:42.422 * Looking for test storage... 
00:11:42.422 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:42.422 16:20:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:42.422 16:20:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:11:42.422 16:20:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:42.422 16:20:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:42.422 16:20:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:42.422 16:20:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:42.422 16:20:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:42.422 16:20:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:42.422 16:20:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:42.422 16:20:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:42.422 16:20:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:42.422 16:20:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:42.422 16:20:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:42.422 16:20:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:42.422 16:20:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:42.422 16:20:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:42.422 16:20:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:42.422 16:20:09 
nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:42.422 16:20:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:42.422 16:20:09 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:42.422 16:20:09 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:42.422 16:20:09 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:42.422 16:20:09 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:42.422 16:20:09 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:42.422 16:20:09 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:42.422 16:20:09 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:11:42.422 16:20:09 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:42.422 16:20:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@47 -- # : 0 00:11:42.422 16:20:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:42.422 16:20:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:42.422 16:20:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:42.422 16:20:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:42.422 16:20:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:42.422 16:20:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:42.422 16:20:09 nvmf_tcp.nvmf_connect_stress -- 
nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:42.422 16:20:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:42.422 16:20:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:11:42.422 16:20:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:42.422 16:20:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:42.422 16:20:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:42.422 16:20:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:42.422 16:20:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:42.422 16:20:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:42.422 16:20:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:42.422 16:20:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:42.422 16:20:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:42.422 16:20:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:42.422 16:20:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:11:42.422 16:20:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:49.019 16:20:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:49.019 16:20:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:11:49.019 16:20:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:49.019 16:20:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:49.019 16:20:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:49.019 16:20:15 
nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:49.019 16:20:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:49.019 16:20:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # net_devs=() 00:11:49.019 16:20:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:49.019 16:20:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # e810=() 00:11:49.019 16:20:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # local -ga e810 00:11:49.019 16:20:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # x722=() 00:11:49.019 16:20:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # local -ga x722 00:11:49.019 16:20:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # mlx=() 00:11:49.019 16:20:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:11:49.019 16:20:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:49.019 16:20:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:49.020 16:20:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:49.020 16:20:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:49.020 16:20:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:49.020 16:20:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:49.020 16:20:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:49.020 16:20:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:49.020 16:20:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:49.020 
16:20:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:49.020 16:20:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:49.020 16:20:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:49.020 16:20:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:49.020 16:20:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:49.020 16:20:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:49.020 16:20:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:49.020 16:20:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:49.020 16:20:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:49.020 16:20:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:11:49.020 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:11:49.020 16:20:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:49.020 16:20:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:49.020 16:20:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:49.020 16:20:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:49.020 16:20:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:49.020 16:20:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:49.020 16:20:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:11:49.020 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:11:49.020 16:20:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:49.020 
16:20:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:49.020 16:20:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:49.020 16:20:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:49.020 16:20:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:49.020 16:20:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:49.020 16:20:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:49.020 16:20:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:49.020 16:20:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:49.020 16:20:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:49.020 16:20:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:49.020 16:20:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:49.020 16:20:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:49.020 16:20:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:49.020 16:20:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:49.020 16:20:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:11:49.020 Found net devices under 0000:4b:00.0: cvl_0_0 00:11:49.020 16:20:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:49.020 16:20:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:49.020 16:20:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:49.020 16:20:15 
nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:49.020 16:20:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:49.020 16:20:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:49.020 16:20:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:49.020 16:20:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:49.020 16:20:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:11:49.020 Found net devices under 0000:4b:00.1: cvl_0_1 00:11:49.020 16:20:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:49.020 16:20:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:49.020 16:20:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:11:49.020 16:20:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:49.020 16:20:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:49.020 16:20:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:49.020 16:20:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:49.020 16:20:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:49.020 16:20:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:49.020 16:20:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:49.020 16:20:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:49.020 16:20:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:49.020 16:20:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 
00:11:49.020 16:20:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:49.020 16:20:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:49.020 16:20:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:49.020 16:20:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:49.020 16:20:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:49.020 16:20:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:49.020 16:20:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:49.020 16:20:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:49.020 16:20:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:49.020 16:20:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:49.281 16:20:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:49.281 16:20:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:49.281 16:20:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:49.281 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:49.281 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.558 ms 00:11:49.281 00:11:49.281 --- 10.0.0.2 ping statistics --- 00:11:49.281 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:49.281 rtt min/avg/max/mdev = 0.558/0.558/0.558/0.000 ms 00:11:49.281 16:20:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:49.281 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:49.281 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.236 ms 00:11:49.281 00:11:49.281 --- 10.0.0.1 ping statistics --- 00:11:49.281 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:49.281 rtt min/avg/max/mdev = 0.236/0.236/0.236/0.000 ms 00:11:49.281 16:20:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:49.281 16:20:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@422 -- # return 0 00:11:49.281 16:20:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:49.281 16:20:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:49.281 16:20:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:49.281 16:20:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:49.281 16:20:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:49.281 16:20:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:49.281 16:20:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:49.281 16:20:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:11:49.281 16:20:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:49.281 16:20:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@723 -- # xtrace_disable 00:11:49.281 16:20:15 
nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:49.281 16:20:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@481 -- # nvmfpid=2979712 00:11:49.281 16:20:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@482 -- # waitforlisten 2979712 00:11:49.281 16:20:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:11:49.281 16:20:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@830 -- # '[' -z 2979712 ']' 00:11:49.281 16:20:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:49.281 16:20:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@835 -- # local max_retries=100 00:11:49.281 16:20:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:49.281 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:49.281 16:20:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@839 -- # xtrace_disable 00:11:49.281 16:20:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:49.281 [2024-06-07 16:20:16.040955] Starting SPDK v24.09-pre git sha1 5a57befde / DPDK 24.03.0 initialization... 00:11:49.281 [2024-06-07 16:20:16.041006] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:49.281 EAL: No free 2048 kB hugepages reported on node 1 00:11:49.281 [2024-06-07 16:20:16.125687] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:49.542 [2024-06-07 16:20:16.218451] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:11:49.542 [2024-06-07 16:20:16.218510] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:49.542 [2024-06-07 16:20:16.218518] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:49.542 [2024-06-07 16:20:16.218525] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:49.542 [2024-06-07 16:20:16.218531] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:49.542 [2024-06-07 16:20:16.218661] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 2 00:11:49.542 [2024-06-07 16:20:16.218967] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 3 00:11:49.542 [2024-06-07 16:20:16.218968] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:11:50.114 16:20:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:11:50.114 16:20:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@863 -- # return 0 00:11:50.114 16:20:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:50.114 16:20:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@729 -- # xtrace_disable 00:11:50.114 16:20:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:50.114 16:20:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:50.114 16:20:16 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:50.114 16:20:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:50.114 16:20:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:50.114 [2024-06-07 16:20:16.868036] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:50.114 16:20:16 
nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:50.114 16:20:16 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:50.114 16:20:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:50.114 16:20:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:50.114 16:20:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:50.114 16:20:16 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:50.114 16:20:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:50.114 16:20:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:50.114 [2024-06-07 16:20:16.892398] tcp.c: 982:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:50.114 16:20:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:50.114 16:20:16 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:11:50.114 16:20:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:50.114 16:20:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:50.114 NULL1 00:11:50.114 16:20:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:50.114 16:20:16 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=2979900 00:11:50.114 16:20:16 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:11:50.114 16:20:16 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:11:50.114 16:20:16 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:11:50.114 16:20:16 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:11:50.114 16:20:16 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:50.114 16:20:16 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:50.114 16:20:16 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:50.114 16:20:16 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:50.114 16:20:16 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:50.114 16:20:16 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:50.114 16:20:16 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:50.114 16:20:16 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:50.114 16:20:16 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:50.114 16:20:16 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:50.115 16:20:16 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:50.115 16:20:16 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:50.115 EAL: No free 2048 kB hugepages reported on node 1 00:11:50.115 16:20:16 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:50.115 16:20:16 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:50.115 16:20:16 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for 
i in $(seq 1 20) 00:11:50.115 16:20:16 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:50.115 16:20:16 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:50.115 16:20:16 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:50.115 16:20:16 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:50.115 16:20:16 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:50.115 16:20:16 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:50.115 16:20:16 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:50.375 16:20:16 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:50.375 16:20:16 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:50.375 16:20:16 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:50.375 16:20:16 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:50.375 16:20:16 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:50.375 16:20:16 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:50.375 16:20:16 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:50.375 16:20:16 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:50.375 16:20:16 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:50.375 16:20:16 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:50.375 16:20:16 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:50.375 16:20:16 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:50.375 16:20:16 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in 
$(seq 1 20) 00:11:50.375 16:20:16 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:50.375 16:20:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:50.375 16:20:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:50.375 16:20:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:50.375 16:20:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:50.375 16:20:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2979900 00:11:50.375 16:20:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:50.375 16:20:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:50.375 16:20:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:50.636 16:20:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:50.636 16:20:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2979900 00:11:50.636 16:20:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:50.636 16:20:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:50.636 16:20:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:50.896 16:20:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:50.897 16:20:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2979900 00:11:50.897 16:20:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:50.897 16:20:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:50.897 16:20:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:51.158 16:20:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 
00:11:51.158 16:20:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2979900 00:11:51.158 16:20:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:51.158 16:20:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:51.158 16:20:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:51.728 16:20:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:51.728 16:20:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2979900 00:11:51.728 16:20:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:51.728 16:20:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:51.728 16:20:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:51.989 16:20:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:51.989 16:20:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2979900 00:11:51.989 16:20:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:51.989 16:20:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:51.989 16:20:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:52.288 16:20:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:52.288 16:20:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2979900 00:11:52.288 16:20:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:52.288 16:20:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:52.288 16:20:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:52.571 16:20:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 
00:11:52.571 16:20:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2979900 00:11:52.571 16:20:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:52.571 16:20:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:52.571 16:20:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:52.831 16:20:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:52.831 16:20:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2979900 00:11:52.831 16:20:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:52.831 16:20:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:52.831 16:20:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:53.091 16:20:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:53.091 16:20:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2979900 00:11:53.091 16:20:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:53.091 16:20:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:53.091 16:20:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:53.661 16:20:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:53.661 16:20:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2979900 00:11:53.661 16:20:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:53.661 16:20:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:53.662 16:20:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:53.922 16:20:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 
00:11:53.922 16:20:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2979900 00:11:53.922 16:20:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:53.922 16:20:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:53.922 16:20:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:54.183 16:20:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:54.183 16:20:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2979900 00:11:54.183 16:20:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:54.183 16:20:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:54.183 16:20:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:54.443 16:20:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:54.443 16:20:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2979900 00:11:54.443 16:20:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:54.443 16:20:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:54.443 16:20:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:55.014 16:20:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:55.014 16:20:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2979900 00:11:55.014 16:20:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:55.014 16:20:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:55.014 16:20:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:55.276 16:20:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 
00:11:55.276 16:20:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2979900 00:11:55.276 16:20:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:55.276 16:20:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:55.276 16:20:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:55.538 16:20:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:55.538 16:20:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2979900 00:11:55.538 16:20:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:55.538 16:20:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:55.538 16:20:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:55.799 16:20:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:55.799 16:20:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2979900 00:11:55.799 16:20:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:55.799 16:20:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:55.799 16:20:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:56.059 16:20:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:56.059 16:20:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2979900 00:11:56.059 16:20:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:56.059 16:20:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:56.059 16:20:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:56.350 16:20:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 
00:11:56.350 16:20:23 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2979900 00:11:56.350 16:20:23 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:56.350 16:20:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:56.350 16:20:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:56.924 16:20:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:56.924 16:20:23 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2979900 00:11:56.924 16:20:23 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:56.925 16:20:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:56.925 16:20:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:57.185 16:20:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:57.185 16:20:23 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2979900 00:11:57.185 16:20:23 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:57.185 16:20:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:57.185 16:20:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:57.446 16:20:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:57.446 16:20:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2979900 00:11:57.446 16:20:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:57.446 16:20:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:57.446 16:20:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:57.706 16:20:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 
00:11:57.706 16:20:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2979900 00:11:57.706 16:20:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:57.706 16:20:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:57.706 16:20:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:57.966 16:20:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:57.966 16:20:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2979900 00:11:57.966 16:20:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:57.966 16:20:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:57.966 16:20:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:58.537 16:20:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:58.537 16:20:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2979900 00:11:58.537 16:20:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:58.537 16:20:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:58.537 16:20:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:58.797 16:20:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:58.798 16:20:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2979900 00:11:58.798 16:20:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:58.798 16:20:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:58.798 16:20:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:59.060 16:20:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 
00:11:59.060 16:20:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2979900 00:11:59.060 16:20:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:59.060 16:20:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:59.060 16:20:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:59.320 16:20:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:59.320 16:20:26 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2979900 00:11:59.320 16:20:26 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:59.320 16:20:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:59.320 16:20:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:59.890 16:20:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:59.890 16:20:26 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2979900 00:11:59.890 16:20:26 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:59.890 16:20:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:59.890 16:20:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:00.151 16:20:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:00.151 16:20:26 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2979900 00:12:00.151 16:20:26 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:00.151 16:20:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:00.151 16:20:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:00.412 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 
00:12:00.412 16:20:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:00.412 16:20:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2979900 00:12:00.412 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (2979900) - No such process 00:12:00.412 16:20:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 2979900 00:12:00.412 16:20:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:12:00.412 16:20:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:12:00.412 16:20:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:12:00.412 16:20:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:00.412 16:20:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@117 -- # sync 00:12:00.412 16:20:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:00.412 16:20:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@120 -- # set +e 00:12:00.412 16:20:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:00.412 16:20:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:00.412 rmmod nvme_tcp 00:12:00.412 rmmod nvme_fabrics 00:12:00.412 rmmod nvme_keyring 00:12:00.412 16:20:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:00.412 16:20:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@124 -- # set -e 00:12:00.412 16:20:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@125 -- # return 0 00:12:00.412 16:20:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@489 -- # '[' -n 2979712 ']' 00:12:00.412 16:20:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@490 -- # killprocess 2979712 00:12:00.412 16:20:27 nvmf_tcp.nvmf_connect_stress -- 
common/autotest_common.sh@949 -- # '[' -z 2979712 ']' 00:12:00.412 16:20:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@953 -- # kill -0 2979712 00:12:00.412 16:20:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@954 -- # uname 00:12:00.412 16:20:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:12:00.412 16:20:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2979712 00:12:00.412 16:20:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:12:00.412 16:20:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:12:00.412 16:20:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2979712' 00:12:00.412 killing process with pid 2979712 00:12:00.412 16:20:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@968 -- # kill 2979712 00:12:00.412 16:20:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@973 -- # wait 2979712 00:12:00.673 16:20:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:00.673 16:20:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:00.673 16:20:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:00.673 16:20:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:00.673 16:20:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:00.673 16:20:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:00.673 16:20:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:00.673 16:20:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:02.586 16:20:29 nvmf_tcp.nvmf_connect_stress -- 
nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:02.586 00:12:02.586 real 0m20.418s 00:12:02.586 user 0m42.103s 00:12:02.586 sys 0m8.315s 00:12:02.586 16:20:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1125 -- # xtrace_disable 00:12:02.586 16:20:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:02.586 ************************************ 00:12:02.586 END TEST nvmf_connect_stress 00:12:02.586 ************************************ 00:12:02.848 16:20:29 nvmf_tcp -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:12:02.848 16:20:29 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:12:02.848 16:20:29 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:12:02.848 16:20:29 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:02.848 ************************************ 00:12:02.848 START TEST nvmf_fused_ordering 00:12:02.848 ************************************ 00:12:02.848 16:20:29 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:12:02.848 * Looking for test storage... 
00:12:02.848 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:02.848 16:20:29 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:02.848 16:20:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:12:02.848 16:20:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:02.848 16:20:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:02.848 16:20:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:02.848 16:20:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:02.848 16:20:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:02.848 16:20:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:02.848 16:20:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:02.848 16:20:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:02.848 16:20:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:02.848 16:20:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:02.848 16:20:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:02.848 16:20:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:02.848 16:20:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:02.848 16:20:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:02.848 16:20:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:02.848 16:20:29 
nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:02.848 16:20:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:02.848 16:20:29 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:02.848 16:20:29 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:02.848 16:20:29 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:02.848 16:20:29 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:02.848 16:20:29 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:02.848 16:20:29 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:02.848 16:20:29 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:12:02.848 16:20:29 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:02.848 16:20:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@47 -- # : 0 00:12:02.848 16:20:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:02.848 16:20:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:02.848 16:20:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:02.848 16:20:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:02.848 16:20:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:02.848 16:20:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:02.848 16:20:29 nvmf_tcp.nvmf_fused_ordering -- 
nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:02.848 16:20:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:02.848 16:20:29 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:12:02.848 16:20:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:02.848 16:20:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:02.848 16:20:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:02.848 16:20:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:02.848 16:20:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:02.848 16:20:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:02.848 16:20:29 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:02.848 16:20:29 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:02.848 16:20:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:02.848 16:20:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:02.848 16:20:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@285 -- # xtrace_disable 00:12:02.848 16:20:29 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:10.992 16:20:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:10.992 16:20:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # pci_devs=() 00:12:10.992 16:20:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:10.992 16:20:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:10.992 16:20:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:10.992 16:20:36 
nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:10.992 16:20:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:10.992 16:20:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # net_devs=() 00:12:10.992 16:20:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:10.992 16:20:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # e810=() 00:12:10.992 16:20:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # local -ga e810 00:12:10.992 16:20:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # x722=() 00:12:10.992 16:20:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # local -ga x722 00:12:10.992 16:20:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # mlx=() 00:12:10.992 16:20:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # local -ga mlx 00:12:10.992 16:20:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:10.992 16:20:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:10.992 16:20:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:10.992 16:20:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:10.992 16:20:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:10.992 16:20:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:10.992 16:20:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:10.992 16:20:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:10.992 16:20:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:10.992 
16:20:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:10.992 16:20:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:10.992 16:20:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:10.992 16:20:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:10.992 16:20:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:10.992 16:20:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:10.992 16:20:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:10.992 16:20:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:10.992 16:20:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:10.992 16:20:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:12:10.992 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:12:10.992 16:20:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:10.992 16:20:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:10.992 16:20:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:10.992 16:20:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:10.992 16:20:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:10.992 16:20:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:10.992 16:20:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:12:10.992 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:12:10.992 16:20:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:10.992 
16:20:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:10.992 16:20:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:10.992 16:20:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:10.992 16:20:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:10.992 16:20:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:10.992 16:20:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:10.992 16:20:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:10.992 16:20:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:10.992 16:20:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:10.992 16:20:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:10.992 16:20:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:10.992 16:20:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:10.992 16:20:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:10.992 16:20:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:10.992 16:20:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:12:10.992 Found net devices under 0000:4b:00.0: cvl_0_0 00:12:10.992 16:20:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:10.992 16:20:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:10.992 16:20:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:10.992 16:20:36 
nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:10.992 16:20:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:10.992 16:20:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:10.992 16:20:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:10.992 16:20:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:10.992 16:20:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:12:10.992 Found net devices under 0000:4b:00.1: cvl_0_1 00:12:10.992 16:20:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:10.992 16:20:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:10.992 16:20:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # is_hw=yes 00:12:10.992 16:20:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:10.992 16:20:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:10.992 16:20:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:10.992 16:20:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:10.992 16:20:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:10.992 16:20:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:10.992 16:20:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:10.992 16:20:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:10.992 16:20:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:10.992 16:20:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 
00:12:10.992 16:20:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:10.992 16:20:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:10.992 16:20:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:10.992 16:20:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:10.992 16:20:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:10.992 16:20:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:10.992 16:20:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:10.992 16:20:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:10.992 16:20:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:10.992 16:20:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:10.992 16:20:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:10.993 16:20:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:10.993 16:20:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:10.993 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:10.993 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.657 ms 00:12:10.993 00:12:10.993 --- 10.0.0.2 ping statistics --- 00:12:10.993 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:10.993 rtt min/avg/max/mdev = 0.657/0.657/0.657/0.000 ms 00:12:10.993 16:20:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:10.993 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:10.993 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.342 ms 00:12:10.993 00:12:10.993 --- 10.0.0.1 ping statistics --- 00:12:10.993 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:10.993 rtt min/avg/max/mdev = 0.342/0.342/0.342/0.000 ms 00:12:10.993 16:20:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:10.993 16:20:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@422 -- # return 0 00:12:10.993 16:20:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:10.993 16:20:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:10.993 16:20:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:10.993 16:20:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:10.993 16:20:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:10.993 16:20:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:10.993 16:20:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:10.993 16:20:36 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:12:10.993 16:20:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:10.993 16:20:36 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@723 -- # xtrace_disable 00:12:10.993 16:20:36 
nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:10.993 16:20:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:12:10.993 16:20:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@481 -- # nvmfpid=2986091 00:12:10.993 16:20:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@482 -- # waitforlisten 2986091 00:12:10.993 16:20:36 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@830 -- # '[' -z 2986091 ']' 00:12:10.993 16:20:36 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:10.993 16:20:36 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # local max_retries=100 00:12:10.993 16:20:36 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:10.993 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:10.993 16:20:36 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@839 -- # xtrace_disable 00:12:10.993 16:20:36 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:10.993 [2024-06-07 16:20:36.720351] Starting SPDK v24.09-pre git sha1 5a57befde / DPDK 24.03.0 initialization... 00:12:10.993 [2024-06-07 16:20:36.720396] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:10.993 EAL: No free 2048 kB hugepages reported on node 1 00:12:10.993 [2024-06-07 16:20:36.797680] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:10.993 [2024-06-07 16:20:36.890796] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:12:10.993 [2024-06-07 16:20:36.890849] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:10.993 [2024-06-07 16:20:36.890857] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:10.993 [2024-06-07 16:20:36.890864] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:10.993 [2024-06-07 16:20:36.890870] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:10.993 [2024-06-07 16:20:36.890895] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:12:10.993 16:20:37 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:12:10.993 16:20:37 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@863 -- # return 0 00:12:10.993 16:20:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:10.993 16:20:37 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@729 -- # xtrace_disable 00:12:10.993 16:20:37 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:10.993 16:20:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:10.993 16:20:37 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:10.993 16:20:37 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:10.993 16:20:37 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:10.993 [2024-06-07 16:20:37.569961] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:10.993 16:20:37 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:10.993 16:20:37 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:12:10.993 16:20:37 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:10.993 16:20:37 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:10.993 16:20:37 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:10.993 16:20:37 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:10.993 16:20:37 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:10.993 16:20:37 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:10.993 [2024-06-07 16:20:37.594189] tcp.c: 982:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:10.993 16:20:37 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:10.993 16:20:37 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:12:10.993 16:20:37 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:10.993 16:20:37 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:10.993 NULL1 00:12:10.993 16:20:37 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:10.993 16:20:37 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:12:10.993 16:20:37 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:10.993 16:20:37 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:10.993 16:20:37 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:10.993 16:20:37 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 
00:12:10.993 16:20:37 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:10.993 16:20:37 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:10.993 16:20:37 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:10.993 16:20:37 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:12:10.993 [2024-06-07 16:20:37.663169] Starting SPDK v24.09-pre git sha1 5a57befde / DPDK 24.03.0 initialization... 00:12:10.993 [2024-06-07 16:20:37.663211] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2986279 ] 00:12:10.993 EAL: No free 2048 kB hugepages reported on node 1 00:12:11.564 Attached to nqn.2016-06.io.spdk:cnode1 00:12:11.564 Namespace ID: 1 size: 1GB 00:12:11.564 fused_ordering(0) 00:12:11.564 fused_ordering(1) 00:12:11.564 fused_ordering(2) 00:12:11.564 fused_ordering(3) 00:12:11.564 fused_ordering(4) 00:12:11.564 fused_ordering(5) 00:12:11.564 fused_ordering(6) 00:12:11.564 fused_ordering(7) 00:12:11.564 fused_ordering(8) 00:12:11.564 fused_ordering(9) 00:12:11.564 fused_ordering(10) 00:12:11.564 fused_ordering(11) 00:12:11.564 fused_ordering(12) 00:12:11.564 fused_ordering(13) 00:12:11.564 fused_ordering(14) 00:12:11.564 fused_ordering(15) 00:12:11.564 fused_ordering(16) 00:12:11.564 fused_ordering(17) 00:12:11.564 fused_ordering(18) 00:12:11.564 fused_ordering(19) 00:12:11.564 fused_ordering(20) 00:12:11.564 fused_ordering(21) 00:12:11.564 fused_ordering(22) 00:12:11.564 fused_ordering(23) 00:12:11.564 fused_ordering(24) 00:12:11.564 fused_ordering(25) 00:12:11.564 fused_ordering(26) 00:12:11.564 
fused_ordering(27) 00:12:11.564 fused_ordering(28) 00:12:11.564 fused_ordering(29) 00:12:11.564 fused_ordering(30) 00:12:11.564 fused_ordering(31) 00:12:11.564 fused_ordering(32) 00:12:11.564 fused_ordering(33) 00:12:11.564 fused_ordering(34) 00:12:11.564 fused_ordering(35) 00:12:11.564 fused_ordering(36) 00:12:11.564 fused_ordering(37) 00:12:11.564 fused_ordering(38) 00:12:11.564 fused_ordering(39) 00:12:11.564 fused_ordering(40) 00:12:11.564 fused_ordering(41) 00:12:11.564 fused_ordering(42) 00:12:11.564 fused_ordering(43) 00:12:11.564 fused_ordering(44) 00:12:11.564 fused_ordering(45) 00:12:11.564 fused_ordering(46) 00:12:11.564 fused_ordering(47) 00:12:11.564 fused_ordering(48) 00:12:11.564 fused_ordering(49) 00:12:11.564 fused_ordering(50) 00:12:11.564 fused_ordering(51) 00:12:11.564 fused_ordering(52) 00:12:11.564 fused_ordering(53) 00:12:11.564 fused_ordering(54) 00:12:11.564 fused_ordering(55) 00:12:11.564 fused_ordering(56) 00:12:11.564 fused_ordering(57) 00:12:11.564 fused_ordering(58) 00:12:11.564 fused_ordering(59) 00:12:11.564 fused_ordering(60) 00:12:11.564 fused_ordering(61) 00:12:11.564 fused_ordering(62) 00:12:11.564 fused_ordering(63) 00:12:11.564 fused_ordering(64) 00:12:11.564 fused_ordering(65) 00:12:11.564 fused_ordering(66) 00:12:11.564 fused_ordering(67) 00:12:11.564 fused_ordering(68) 00:12:11.564 fused_ordering(69) 00:12:11.564 fused_ordering(70) 00:12:11.564 fused_ordering(71) 00:12:11.564 fused_ordering(72) 00:12:11.564 fused_ordering(73) 00:12:11.564 fused_ordering(74) 00:12:11.564 fused_ordering(75) 00:12:11.564 fused_ordering(76) 00:12:11.564 fused_ordering(77) 00:12:11.564 fused_ordering(78) 00:12:11.564 fused_ordering(79) 00:12:11.564 fused_ordering(80) 00:12:11.564 fused_ordering(81) 00:12:11.564 fused_ordering(82) 00:12:11.564 fused_ordering(83) 00:12:11.564 fused_ordering(84) 00:12:11.564 fused_ordering(85) 00:12:11.564 fused_ordering(86) 00:12:11.564 fused_ordering(87) 00:12:11.564 fused_ordering(88) 00:12:11.564 
fused_ordering(89) 00:12:11.564 fused_ordering(90) 00:12:11.564 fused_ordering(91) 00:12:11.564 fused_ordering(92) 00:12:11.564 fused_ordering(93) 00:12:11.564 fused_ordering(94) 00:12:11.564 fused_ordering(95) 00:12:11.564 fused_ordering(96) 00:12:11.564 fused_ordering(97) 00:12:11.564 fused_ordering(98) 00:12:11.564 fused_ordering(99) 00:12:11.564 fused_ordering(100) 00:12:11.564 fused_ordering(101) 00:12:11.564 fused_ordering(102) 00:12:11.564 fused_ordering(103) 00:12:11.564 fused_ordering(104) 00:12:11.564 fused_ordering(105) 00:12:11.564 fused_ordering(106) 00:12:11.564 fused_ordering(107) 00:12:11.564 fused_ordering(108) 00:12:11.564 fused_ordering(109) 00:12:11.564 fused_ordering(110) 00:12:11.564 fused_ordering(111) 00:12:11.564 fused_ordering(112) 00:12:11.564 fused_ordering(113) 00:12:11.564 fused_ordering(114) 00:12:11.564 fused_ordering(115) 00:12:11.564 fused_ordering(116) 00:12:11.564 fused_ordering(117) 00:12:11.564 fused_ordering(118) 00:12:11.564 fused_ordering(119) 00:12:11.564 fused_ordering(120) 00:12:11.564 fused_ordering(121) 00:12:11.564 fused_ordering(122) 00:12:11.564 fused_ordering(123) 00:12:11.564 fused_ordering(124) 00:12:11.564 fused_ordering(125) 00:12:11.564 fused_ordering(126) 00:12:11.564 fused_ordering(127) 00:12:11.564 fused_ordering(128) 00:12:11.564 fused_ordering(129) 00:12:11.564 fused_ordering(130) 00:12:11.564 fused_ordering(131) 00:12:11.564 fused_ordering(132) 00:12:11.564 fused_ordering(133) 00:12:11.564 fused_ordering(134) 00:12:11.564 fused_ordering(135) 00:12:11.564 fused_ordering(136) 00:12:11.564 fused_ordering(137) 00:12:11.564 fused_ordering(138) 00:12:11.564 fused_ordering(139) 00:12:11.564 fused_ordering(140) 00:12:11.564 fused_ordering(141) 00:12:11.564 fused_ordering(142) 00:12:11.564 fused_ordering(143) 00:12:11.564 fused_ordering(144) 00:12:11.564 fused_ordering(145) 00:12:11.564 fused_ordering(146) 00:12:11.564 fused_ordering(147) 00:12:11.564 fused_ordering(148) 00:12:11.564 fused_ordering(149) 
00:12:11.564 fused_ordering(150) 00:12:11.564 fused_ordering(151) 00:12:11.564 fused_ordering(152) 00:12:11.564 fused_ordering(153) 00:12:11.564 fused_ordering(154) 00:12:11.564 fused_ordering(155) 00:12:11.564 fused_ordering(156) 00:12:11.564 fused_ordering(157) 00:12:11.564 fused_ordering(158) 00:12:11.564 fused_ordering(159) 00:12:11.564 fused_ordering(160) 00:12:11.564 fused_ordering(161) 00:12:11.564 fused_ordering(162) 00:12:11.564 fused_ordering(163) 00:12:11.564 fused_ordering(164) 00:12:11.564 fused_ordering(165) 00:12:11.564 fused_ordering(166) 00:12:11.564 fused_ordering(167) 00:12:11.564 fused_ordering(168) 00:12:11.564 fused_ordering(169) 00:12:11.564 fused_ordering(170) 00:12:11.564 fused_ordering(171) 00:12:11.564 fused_ordering(172) 00:12:11.564 fused_ordering(173) 00:12:11.564 fused_ordering(174) 00:12:11.564 fused_ordering(175) 00:12:11.564 fused_ordering(176) 00:12:11.564 fused_ordering(177) 00:12:11.564 fused_ordering(178) 00:12:11.564 fused_ordering(179) 00:12:11.564 fused_ordering(180) 00:12:11.564 fused_ordering(181) 00:12:11.564 fused_ordering(182) 00:12:11.565 fused_ordering(183) 00:12:11.565 fused_ordering(184) 00:12:11.565 fused_ordering(185) 00:12:11.565 fused_ordering(186) 00:12:11.565 fused_ordering(187) 00:12:11.565 fused_ordering(188) 00:12:11.565 fused_ordering(189) 00:12:11.565 fused_ordering(190) 00:12:11.565 fused_ordering(191) 00:12:11.565 fused_ordering(192) 00:12:11.565 fused_ordering(193) 00:12:11.565 fused_ordering(194) 00:12:11.565 fused_ordering(195) 00:12:11.565 fused_ordering(196) 00:12:11.565 fused_ordering(197) 00:12:11.565 fused_ordering(198) 00:12:11.565 fused_ordering(199) 00:12:11.565 fused_ordering(200) 00:12:11.565 fused_ordering(201) 00:12:11.565 fused_ordering(202) 00:12:11.565 fused_ordering(203) 00:12:11.565 fused_ordering(204) 00:12:11.565 fused_ordering(205) 00:12:11.826 fused_ordering(206) 00:12:11.826 fused_ordering(207) 00:12:11.826 fused_ordering(208) 00:12:11.826 fused_ordering(209) 00:12:11.826 
fused_ordering(210) 00:12:11.826 fused_ordering(211) 00:12:11.826 fused_ordering(212) 00:12:11.826 fused_ordering(213) 00:12:11.826 fused_ordering(214) 00:12:11.826 fused_ordering(215) 00:12:11.826 fused_ordering(216) 00:12:11.826 fused_ordering(217) 00:12:11.826 fused_ordering(218) 00:12:11.826 fused_ordering(219) 00:12:11.826 fused_ordering(220) 00:12:11.826 fused_ordering(221) 00:12:11.826 fused_ordering(222) 00:12:11.826 fused_ordering(223) 00:12:11.826 fused_ordering(224) 00:12:11.826 fused_ordering(225) 00:12:11.826 fused_ordering(226) 00:12:11.826 fused_ordering(227) 00:12:11.826 fused_ordering(228) 00:12:11.826 fused_ordering(229) 00:12:11.826 fused_ordering(230) 00:12:11.826 fused_ordering(231) 00:12:11.826 fused_ordering(232) 00:12:11.826 fused_ordering(233) 00:12:11.826 fused_ordering(234) 00:12:11.826 fused_ordering(235) 00:12:11.826 fused_ordering(236) 00:12:11.826 fused_ordering(237) 00:12:11.826 fused_ordering(238) 00:12:11.826 fused_ordering(239) 00:12:11.826 fused_ordering(240) 00:12:11.826 fused_ordering(241) 00:12:11.826 fused_ordering(242) 00:12:11.826 fused_ordering(243) 00:12:11.826 fused_ordering(244) 00:12:11.826 fused_ordering(245) 00:12:11.826 fused_ordering(246) 00:12:11.826 fused_ordering(247) 00:12:11.826 fused_ordering(248) 00:12:11.826 fused_ordering(249) 00:12:11.826 fused_ordering(250) 00:12:11.826 fused_ordering(251) 00:12:11.826 fused_ordering(252) 00:12:11.826 fused_ordering(253) 00:12:11.826 fused_ordering(254) 00:12:11.826 fused_ordering(255) 00:12:11.826 fused_ordering(256) 00:12:11.826 fused_ordering(257) 00:12:11.826 fused_ordering(258) 00:12:11.826 fused_ordering(259) 00:12:11.826 fused_ordering(260) 00:12:11.826 fused_ordering(261) 00:12:11.826 fused_ordering(262) 00:12:11.826 fused_ordering(263) 00:12:11.826 fused_ordering(264) 00:12:11.826 fused_ordering(265) 00:12:11.826 fused_ordering(266) 00:12:11.826 fused_ordering(267) 00:12:11.826 fused_ordering(268) 00:12:11.826 fused_ordering(269) 00:12:11.826 fused_ordering(270) 
00:12:11.826 fused_ordering(271) 00:12:11.826 fused_ordering(272) 00:12:11.826 fused_ordering(273) 00:12:11.826 fused_ordering(274) 00:12:11.826 fused_ordering(275) 00:12:11.826 fused_ordering(276) 00:12:11.826 fused_ordering(277) 00:12:11.826 fused_ordering(278) 00:12:11.826 fused_ordering(279) 00:12:11.826 fused_ordering(280) 00:12:11.826 fused_ordering(281) 00:12:11.826 fused_ordering(282) 00:12:11.826 fused_ordering(283) 00:12:11.826 fused_ordering(284) 00:12:11.826 fused_ordering(285) 00:12:11.826 fused_ordering(286) 00:12:11.826 fused_ordering(287) 00:12:11.826 fused_ordering(288) 00:12:11.826 fused_ordering(289) 00:12:11.826 fused_ordering(290) 00:12:11.826 fused_ordering(291) 00:12:11.826 fused_ordering(292) 00:12:11.826 fused_ordering(293) 00:12:11.826 fused_ordering(294) 00:12:11.826 fused_ordering(295) 00:12:11.826 fused_ordering(296) 00:12:11.826 fused_ordering(297) 00:12:11.826 fused_ordering(298) 00:12:11.826 fused_ordering(299) 00:12:11.826 fused_ordering(300) 00:12:11.826 fused_ordering(301) 00:12:11.826 fused_ordering(302) 00:12:11.826 fused_ordering(303) 00:12:11.826 fused_ordering(304) 00:12:11.826 fused_ordering(305) 00:12:11.826 fused_ordering(306) 00:12:11.826 fused_ordering(307) 00:12:11.826 fused_ordering(308) 00:12:11.826 fused_ordering(309) 00:12:11.826 fused_ordering(310) 00:12:11.826 fused_ordering(311) 00:12:11.826 fused_ordering(312) 00:12:11.826 fused_ordering(313) 00:12:11.826 fused_ordering(314) 00:12:11.826 fused_ordering(315) 00:12:11.826 fused_ordering(316) 00:12:11.826 fused_ordering(317) 00:12:11.826 fused_ordering(318) 00:12:11.826 fused_ordering(319) 00:12:11.826 fused_ordering(320) 00:12:11.826 fused_ordering(321) 00:12:11.826 fused_ordering(322) 00:12:11.826 fused_ordering(323) 00:12:11.826 fused_ordering(324) 00:12:11.826 fused_ordering(325) 00:12:11.826 fused_ordering(326) 00:12:11.826 fused_ordering(327) 00:12:11.826 fused_ordering(328) 00:12:11.826 fused_ordering(329) 00:12:11.826 fused_ordering(330) 00:12:11.826 
fused_ordering(331) 00:12:11.826 fused_ordering(332) 00:12:11.826 fused_ordering(333) 00:12:11.826 fused_ordering(334) 00:12:11.826 fused_ordering(335) 00:12:11.826 fused_ordering(336) 00:12:11.826 fused_ordering(337) 00:12:11.826 fused_ordering(338) 00:12:11.826 fused_ordering(339) 00:12:11.826 fused_ordering(340) 00:12:11.826 fused_ordering(341) 00:12:11.826 fused_ordering(342) 00:12:11.826 fused_ordering(343) 00:12:11.826 fused_ordering(344) 00:12:11.826 fused_ordering(345) 00:12:11.826 fused_ordering(346) 00:12:11.826 fused_ordering(347) 00:12:11.826 fused_ordering(348) 00:12:11.826 fused_ordering(349) 00:12:11.826 fused_ordering(350) 00:12:11.826 fused_ordering(351) 00:12:11.826 fused_ordering(352) 00:12:11.826 fused_ordering(353) 00:12:11.826 fused_ordering(354) 00:12:11.826 fused_ordering(355) 00:12:11.826 fused_ordering(356) 00:12:11.826 fused_ordering(357) 00:12:11.826 fused_ordering(358) 00:12:11.826 fused_ordering(359) 00:12:11.826 fused_ordering(360) 00:12:11.826 fused_ordering(361) 00:12:11.826 fused_ordering(362) 00:12:11.826 fused_ordering(363) 00:12:11.826 fused_ordering(364) 00:12:11.826 fused_ordering(365) 00:12:11.826 fused_ordering(366) 00:12:11.826 fused_ordering(367) 00:12:11.826 fused_ordering(368) 00:12:11.826 fused_ordering(369) 00:12:11.826 fused_ordering(370) 00:12:11.826 fused_ordering(371) 00:12:11.826 fused_ordering(372) 00:12:11.826 fused_ordering(373) 00:12:11.826 fused_ordering(374) 00:12:11.826 fused_ordering(375) 00:12:11.826 fused_ordering(376) 00:12:11.826 fused_ordering(377) 00:12:11.826 fused_ordering(378) 00:12:11.826 fused_ordering(379) 00:12:11.826 fused_ordering(380) 00:12:11.826 fused_ordering(381) 00:12:11.826 fused_ordering(382) 00:12:11.826 fused_ordering(383) 00:12:11.827 fused_ordering(384) 00:12:11.827 fused_ordering(385) 00:12:11.827 fused_ordering(386) 00:12:11.827 fused_ordering(387) 00:12:11.827 fused_ordering(388) 00:12:11.827 fused_ordering(389) 00:12:11.827 fused_ordering(390) 00:12:11.827 fused_ordering(391) 
00:12:11.827 fused_ordering(392) 00:12:11.827 fused_ordering(393) 00:12:11.827 fused_ordering(394) 00:12:11.827 fused_ordering(395) 00:12:11.827 fused_ordering(396) 00:12:11.827 fused_ordering(397) 00:12:11.827 fused_ordering(398) 00:12:11.827 fused_ordering(399) 00:12:11.827 fused_ordering(400) 00:12:11.827 fused_ordering(401) 00:12:11.827 fused_ordering(402) 00:12:11.827 fused_ordering(403) 00:12:11.827 fused_ordering(404) 00:12:11.827 fused_ordering(405) 00:12:11.827 fused_ordering(406) 00:12:11.827 fused_ordering(407) 00:12:11.827 fused_ordering(408) 00:12:11.827 fused_ordering(409) 00:12:11.827 fused_ordering(410) 00:12:12.398 fused_ordering(411) 00:12:12.398 fused_ordering(412) 00:12:12.398 fused_ordering(413) 00:12:12.398 fused_ordering(414) 00:12:12.398 fused_ordering(415) 00:12:12.398 fused_ordering(416) 00:12:12.398 fused_ordering(417) 00:12:12.398 fused_ordering(418) 00:12:12.398 fused_ordering(419) 00:12:12.398 fused_ordering(420) 00:12:12.398 fused_ordering(421) 00:12:12.398 fused_ordering(422) 00:12:12.398 fused_ordering(423) 00:12:12.398 fused_ordering(424) 00:12:12.398 fused_ordering(425) 00:12:12.398 fused_ordering(426) 00:12:12.398 fused_ordering(427) 00:12:12.398 fused_ordering(428) 00:12:12.398 fused_ordering(429) 00:12:12.398 fused_ordering(430) 00:12:12.398 fused_ordering(431) 00:12:12.398 fused_ordering(432) 00:12:12.398 fused_ordering(433) 00:12:12.398 fused_ordering(434) 00:12:12.398 fused_ordering(435) 00:12:12.398 fused_ordering(436) 00:12:12.398 fused_ordering(437) 00:12:12.398 fused_ordering(438) 00:12:12.398 fused_ordering(439) 00:12:12.398 fused_ordering(440) 00:12:12.398 fused_ordering(441) 00:12:12.398 fused_ordering(442) 00:12:12.398 fused_ordering(443) 00:12:12.398 fused_ordering(444) 00:12:12.398 fused_ordering(445) 00:12:12.398 fused_ordering(446) 00:12:12.398 fused_ordering(447) 00:12:12.398 fused_ordering(448) 00:12:12.398 fused_ordering(449) 00:12:12.398 fused_ordering(450) 00:12:12.398 fused_ordering(451) 00:12:12.398 
fused_ordering(452) 00:12:12.398 fused_ordering(453) 00:12:12.398 fused_ordering(454) 00:12:12.398 fused_ordering(455) 00:12:12.398 fused_ordering(456) 00:12:12.398 fused_ordering(457) 00:12:12.398 fused_ordering(458) 00:12:12.398 fused_ordering(459) 00:12:12.398 fused_ordering(460) 00:12:12.398 fused_ordering(461) 00:12:12.398 fused_ordering(462) 00:12:12.398 fused_ordering(463) 00:12:12.398 fused_ordering(464) 00:12:12.398 fused_ordering(465) 00:12:12.398 fused_ordering(466) 00:12:12.398 fused_ordering(467) 00:12:12.398 fused_ordering(468) 00:12:12.398 fused_ordering(469) 00:12:12.398 fused_ordering(470) 00:12:12.398 fused_ordering(471) 00:12:12.398 fused_ordering(472) 00:12:12.398 fused_ordering(473) 00:12:12.398 fused_ordering(474) 00:12:12.398 fused_ordering(475) 00:12:12.398 fused_ordering(476) 00:12:12.398 fused_ordering(477) 00:12:12.398 fused_ordering(478) 00:12:12.398 fused_ordering(479) 00:12:12.398 fused_ordering(480) 00:12:12.398 fused_ordering(481) 00:12:12.398 fused_ordering(482) 00:12:12.398 fused_ordering(483) 00:12:12.398 fused_ordering(484) 00:12:12.398 fused_ordering(485) 00:12:12.398 fused_ordering(486) 00:12:12.398 fused_ordering(487) 00:12:12.398 fused_ordering(488) 00:12:12.398 fused_ordering(489) 00:12:12.398 fused_ordering(490) 00:12:12.398 fused_ordering(491) 00:12:12.398 fused_ordering(492) 00:12:12.398 fused_ordering(493) 00:12:12.398 fused_ordering(494) 00:12:12.398 fused_ordering(495) 00:12:12.398 fused_ordering(496) 00:12:12.398 fused_ordering(497) 00:12:12.398 fused_ordering(498) 00:12:12.398 fused_ordering(499) 00:12:12.398 fused_ordering(500) 00:12:12.398 fused_ordering(501) 00:12:12.398 fused_ordering(502) 00:12:12.398 fused_ordering(503) 00:12:12.398 fused_ordering(504) 00:12:12.398 fused_ordering(505) 00:12:12.398 fused_ordering(506) 00:12:12.398 fused_ordering(507) 00:12:12.398 fused_ordering(508) 00:12:12.398 fused_ordering(509) 00:12:12.398 fused_ordering(510) 00:12:12.398 fused_ordering(511) 00:12:12.398 fused_ordering(512) 
00:12:12.398 fused_ordering(513) 00:12:12.398 fused_ordering(514) 00:12:12.398 fused_ordering(515) 00:12:12.398 fused_ordering(516) 00:12:12.398 fused_ordering(517) 00:12:12.398 fused_ordering(518) 00:12:12.398 fused_ordering(519) 00:12:12.398 fused_ordering(520) 00:12:12.398 fused_ordering(521) 00:12:12.398 fused_ordering(522) 00:12:12.398 fused_ordering(523) 00:12:12.398 fused_ordering(524) 00:12:12.398 fused_ordering(525) 00:12:12.398 fused_ordering(526) 00:12:12.398 fused_ordering(527) 00:12:12.398 fused_ordering(528) 00:12:12.398 fused_ordering(529) 00:12:12.398 fused_ordering(530) 00:12:12.398 fused_ordering(531) 00:12:12.398 fused_ordering(532) 00:12:12.398 fused_ordering(533) 00:12:12.398 fused_ordering(534) 00:12:12.398 fused_ordering(535) 00:12:12.398 fused_ordering(536) 00:12:12.398 fused_ordering(537) 00:12:12.398 fused_ordering(538) 00:12:12.398 fused_ordering(539) 00:12:12.398 fused_ordering(540) 00:12:12.398 fused_ordering(541) 00:12:12.398 fused_ordering(542) 00:12:12.398 fused_ordering(543) 00:12:12.398 fused_ordering(544) 00:12:12.398 fused_ordering(545) 00:12:12.398 fused_ordering(546) 00:12:12.398 fused_ordering(547) 00:12:12.398 fused_ordering(548) 00:12:12.398 fused_ordering(549) 00:12:12.398 fused_ordering(550) 00:12:12.399 fused_ordering(551) 00:12:12.399 fused_ordering(552) 00:12:12.399 fused_ordering(553) 00:12:12.399 fused_ordering(554) 00:12:12.399 fused_ordering(555) 00:12:12.399 fused_ordering(556) 00:12:12.399 fused_ordering(557) 00:12:12.399 fused_ordering(558) 00:12:12.399 fused_ordering(559) 00:12:12.399 fused_ordering(560) 00:12:12.399 fused_ordering(561) 00:12:12.399 fused_ordering(562) 00:12:12.399 fused_ordering(563) 00:12:12.399 fused_ordering(564) 00:12:12.399 fused_ordering(565) 00:12:12.399 fused_ordering(566) 00:12:12.399 fused_ordering(567) 00:12:12.399 fused_ordering(568) 00:12:12.399 fused_ordering(569) 00:12:12.399 fused_ordering(570) 00:12:12.399 fused_ordering(571) 00:12:12.399 fused_ordering(572) 00:12:12.399 
fused_ordering(573) 00:12:12.399 fused_ordering(574) 00:12:12.399 fused_ordering(575) 00:12:12.399 fused_ordering(576) 00:12:12.399 fused_ordering(577) 00:12:12.399 fused_ordering(578) 00:12:12.399 fused_ordering(579) 00:12:12.399 fused_ordering(580) 00:12:12.399 fused_ordering(581) 00:12:12.399 fused_ordering(582) 00:12:12.399 fused_ordering(583) 00:12:12.399 fused_ordering(584) 00:12:12.399 fused_ordering(585) 00:12:12.399 fused_ordering(586) 00:12:12.399 fused_ordering(587) 00:12:12.399 fused_ordering(588) 00:12:12.399 fused_ordering(589) 00:12:12.399 fused_ordering(590) 00:12:12.399 fused_ordering(591) 00:12:12.399 fused_ordering(592) 00:12:12.399 fused_ordering(593) 00:12:12.399 fused_ordering(594) 00:12:12.399 fused_ordering(595) 00:12:12.399 fused_ordering(596) 00:12:12.399 fused_ordering(597) 00:12:12.399 fused_ordering(598) 00:12:12.399 fused_ordering(599) 00:12:12.399 fused_ordering(600) 00:12:12.399 fused_ordering(601) 00:12:12.399 fused_ordering(602) 00:12:12.399 fused_ordering(603) 00:12:12.399 fused_ordering(604) 00:12:12.399 fused_ordering(605) 00:12:12.399 fused_ordering(606) 00:12:12.399 fused_ordering(607) 00:12:12.399 fused_ordering(608) 00:12:12.399 fused_ordering(609) 00:12:12.399 fused_ordering(610) 00:12:12.399 fused_ordering(611) 00:12:12.399 fused_ordering(612) 00:12:12.399 fused_ordering(613) 00:12:12.399 fused_ordering(614) 00:12:12.399 fused_ordering(615) 00:12:12.969 fused_ordering(616) 00:12:12.969 fused_ordering(617) 00:12:12.969 fused_ordering(618) 00:12:12.969 fused_ordering(619) 00:12:12.969 fused_ordering(620) 00:12:12.969 fused_ordering(621) 00:12:12.969 fused_ordering(622) 00:12:12.969 fused_ordering(623) 00:12:12.969 fused_ordering(624) 00:12:12.969 fused_ordering(625) 00:12:12.969 fused_ordering(626) 00:12:12.969 fused_ordering(627) 00:12:12.969 fused_ordering(628) 00:12:12.969 fused_ordering(629) 00:12:12.969 fused_ordering(630) 00:12:12.969 fused_ordering(631) 00:12:12.969 fused_ordering(632) 00:12:12.969 fused_ordering(633) 
00:12:12.969 fused_ordering(634) 00:12:12.969 fused_ordering(635) 00:12:12.969 fused_ordering(636) 00:12:12.969 fused_ordering(637) 00:12:12.969 fused_ordering(638) 00:12:12.969 fused_ordering(639) 00:12:12.969 fused_ordering(640) 00:12:12.969 fused_ordering(641) 00:12:12.969 fused_ordering(642) 00:12:12.969 fused_ordering(643) 00:12:12.969 fused_ordering(644) 00:12:12.969 fused_ordering(645) 00:12:12.969 fused_ordering(646) 00:12:12.969 fused_ordering(647) 00:12:12.969 fused_ordering(648) 00:12:12.969 fused_ordering(649) 00:12:12.969 fused_ordering(650) 00:12:12.969 fused_ordering(651) 00:12:12.969 fused_ordering(652) 00:12:12.969 fused_ordering(653) 00:12:12.969 fused_ordering(654) 00:12:12.969 fused_ordering(655) 00:12:12.969 fused_ordering(656) 00:12:12.969 fused_ordering(657) 00:12:12.969 fused_ordering(658) 00:12:12.969 fused_ordering(659) 00:12:12.969 fused_ordering(660) 00:12:12.969 fused_ordering(661) 00:12:12.969 fused_ordering(662) 00:12:12.969 fused_ordering(663) 00:12:12.969 fused_ordering(664) 00:12:12.970 fused_ordering(665) 00:12:12.970 fused_ordering(666) 00:12:12.970 fused_ordering(667) 00:12:12.970 fused_ordering(668) 00:12:12.970 fused_ordering(669) 00:12:12.970 fused_ordering(670) 00:12:12.970 fused_ordering(671) 00:12:12.970 fused_ordering(672) 00:12:12.970 fused_ordering(673) 00:12:12.970 fused_ordering(674) 00:12:12.970 fused_ordering(675) 00:12:12.970 fused_ordering(676) 00:12:12.970 fused_ordering(677) 00:12:12.970 fused_ordering(678) 00:12:12.970 fused_ordering(679) 00:12:12.970 fused_ordering(680) 00:12:12.970 fused_ordering(681) 00:12:12.970 fused_ordering(682) 00:12:12.970 fused_ordering(683) 00:12:12.970 fused_ordering(684) 00:12:12.970 fused_ordering(685) 00:12:12.970 fused_ordering(686) 00:12:12.970 fused_ordering(687) 00:12:12.970 fused_ordering(688) 00:12:12.970 fused_ordering(689) 00:12:12.970 fused_ordering(690) 00:12:12.970 fused_ordering(691) 00:12:12.970 fused_ordering(692) 00:12:12.970 fused_ordering(693) 00:12:12.970 
fused_ordering(694) 00:12:12.970 fused_ordering(695) 00:12:12.970 fused_ordering(696) 00:12:12.970 fused_ordering(697) 00:12:12.970 fused_ordering(698) 00:12:12.970 fused_ordering(699) 00:12:12.970 fused_ordering(700) 00:12:12.970 fused_ordering(701) 00:12:12.970 fused_ordering(702) 00:12:12.970 fused_ordering(703) 00:12:12.970 fused_ordering(704) 00:12:12.970 fused_ordering(705) 00:12:12.970 fused_ordering(706) 00:12:12.970 fused_ordering(707) 00:12:12.970 fused_ordering(708) 00:12:12.970 fused_ordering(709) 00:12:12.970 fused_ordering(710) 00:12:12.970 fused_ordering(711) 00:12:12.970 fused_ordering(712) 00:12:12.970 fused_ordering(713) 00:12:12.970 fused_ordering(714) 00:12:12.970 fused_ordering(715) 00:12:12.970 fused_ordering(716) 00:12:12.970 fused_ordering(717) 00:12:12.970 fused_ordering(718) 00:12:12.970 fused_ordering(719) 00:12:12.970 fused_ordering(720) 00:12:12.970 fused_ordering(721) 00:12:12.970 fused_ordering(722) 00:12:12.970 fused_ordering(723) 00:12:12.970 fused_ordering(724) 00:12:12.970 fused_ordering(725) 00:12:12.970 fused_ordering(726) 00:12:12.970 fused_ordering(727) 00:12:12.970 fused_ordering(728) 00:12:12.970 fused_ordering(729) 00:12:12.970 fused_ordering(730) 00:12:12.970 fused_ordering(731) 00:12:12.970 fused_ordering(732) 00:12:12.970 fused_ordering(733) 00:12:12.970 fused_ordering(734) 00:12:12.970 fused_ordering(735) 00:12:12.970 fused_ordering(736) 00:12:12.970 fused_ordering(737) 00:12:12.970 fused_ordering(738) 00:12:12.970 fused_ordering(739) 00:12:12.970 fused_ordering(740) 00:12:12.970 fused_ordering(741) 00:12:12.970 fused_ordering(742) 00:12:12.970 fused_ordering(743) 00:12:12.970 fused_ordering(744) 00:12:12.970 fused_ordering(745) 00:12:12.970 fused_ordering(746) 00:12:12.970 fused_ordering(747) 00:12:12.970 fused_ordering(748) 00:12:12.970 fused_ordering(749) 00:12:12.970 fused_ordering(750) 00:12:12.970 fused_ordering(751) 00:12:12.970 fused_ordering(752) 00:12:12.970 fused_ordering(753) 00:12:12.970 fused_ordering(754) 
00:12:12.970 fused_ordering(755) 00:12:12.970 fused_ordering(756) 00:12:12.970 fused_ordering(757) 00:12:12.970 fused_ordering(758) 00:12:12.970 fused_ordering(759) 00:12:12.970 fused_ordering(760) 00:12:12.970 fused_ordering(761) 00:12:12.970 fused_ordering(762) 00:12:12.970 fused_ordering(763) 00:12:12.970 fused_ordering(764) 00:12:12.970 fused_ordering(765) 00:12:12.970 fused_ordering(766) 00:12:12.970 fused_ordering(767) 00:12:12.970 fused_ordering(768) 00:12:12.970 fused_ordering(769) 00:12:12.970 fused_ordering(770) 00:12:12.970 fused_ordering(771) 00:12:12.970 fused_ordering(772) 00:12:12.970 fused_ordering(773) 00:12:12.970 fused_ordering(774) 00:12:12.970 fused_ordering(775) 00:12:12.970 fused_ordering(776) 00:12:12.970 fused_ordering(777) 00:12:12.970 fused_ordering(778) 00:12:12.970 fused_ordering(779) 00:12:12.970 fused_ordering(780) 00:12:12.970 fused_ordering(781) 00:12:12.970 fused_ordering(782) 00:12:12.970 fused_ordering(783) 00:12:12.970 fused_ordering(784) 00:12:12.970 fused_ordering(785) 00:12:12.970 fused_ordering(786) 00:12:12.970 fused_ordering(787) 00:12:12.970 fused_ordering(788) 00:12:12.970 fused_ordering(789) 00:12:12.970 fused_ordering(790) 00:12:12.970 fused_ordering(791) 00:12:12.970 fused_ordering(792) 00:12:12.970 fused_ordering(793) 00:12:12.970 fused_ordering(794) 00:12:12.970 fused_ordering(795) 00:12:12.970 fused_ordering(796) 00:12:12.970 fused_ordering(797) 00:12:12.970 fused_ordering(798) 00:12:12.970 fused_ordering(799) 00:12:12.970 fused_ordering(800) 00:12:12.970 fused_ordering(801) 00:12:12.970 fused_ordering(802) 00:12:12.970 fused_ordering(803) 00:12:12.970 fused_ordering(804) 00:12:12.970 fused_ordering(805) 00:12:12.970 fused_ordering(806) 00:12:12.970 fused_ordering(807) 00:12:12.970 fused_ordering(808) 00:12:12.970 fused_ordering(809) 00:12:12.970 fused_ordering(810) 00:12:12.970 fused_ordering(811) 00:12:12.970 fused_ordering(812) 00:12:12.970 fused_ordering(813) 00:12:12.970 fused_ordering(814) 00:12:12.970 
fused_ordering(815) 00:12:12.970 fused_ordering(816) 00:12:12.970 fused_ordering(817) 00:12:12.970 fused_ordering(818) 00:12:12.970 fused_ordering(819) 00:12:12.970 fused_ordering(820) 00:12:13.541 fused_ordering(821) 00:12:13.541 fused_ordering(822) 00:12:13.541 fused_ordering(823) 00:12:13.541 fused_ordering(824) 00:12:13.541 fused_ordering(825) 00:12:13.541 fused_ordering(826) 00:12:13.542 fused_ordering(827) 00:12:13.542 fused_ordering(828) 00:12:13.542 fused_ordering(829) 00:12:13.542 fused_ordering(830) 00:12:13.542 fused_ordering(831) 00:12:13.542 fused_ordering(832) 00:12:13.542 fused_ordering(833) 00:12:13.542 fused_ordering(834) 00:12:13.542 fused_ordering(835) 00:12:13.542 fused_ordering(836) 00:12:13.542 fused_ordering(837) 00:12:13.542 fused_ordering(838) 00:12:13.542 fused_ordering(839) 00:12:13.542 fused_ordering(840) 00:12:13.542 fused_ordering(841) 00:12:13.542 fused_ordering(842) 00:12:13.542 fused_ordering(843) 00:12:13.542 fused_ordering(844) 00:12:13.542 fused_ordering(845) 00:12:13.542 fused_ordering(846) 00:12:13.542 fused_ordering(847) 00:12:13.542 fused_ordering(848) 00:12:13.542 fused_ordering(849) 00:12:13.542 fused_ordering(850) 00:12:13.542 fused_ordering(851) 00:12:13.542 fused_ordering(852) 00:12:13.542 fused_ordering(853) 00:12:13.542 fused_ordering(854) 00:12:13.542 fused_ordering(855) 00:12:13.542 fused_ordering(856) 00:12:13.542 fused_ordering(857) 00:12:13.542 fused_ordering(858) 00:12:13.542 fused_ordering(859) 00:12:13.542 fused_ordering(860) 00:12:13.542 fused_ordering(861) 00:12:13.542 fused_ordering(862) 00:12:13.542 fused_ordering(863) 00:12:13.542 fused_ordering(864) 00:12:13.542 fused_ordering(865) 00:12:13.542 fused_ordering(866) 00:12:13.542 fused_ordering(867) 00:12:13.542 fused_ordering(868) 00:12:13.542 fused_ordering(869) 00:12:13.542 fused_ordering(870) 00:12:13.542 fused_ordering(871) 00:12:13.542 fused_ordering(872) 00:12:13.542 fused_ordering(873) 00:12:13.542 fused_ordering(874) 00:12:13.542 fused_ordering(875) 
00:12:13.542 fused_ordering(876) 00:12:13.542 fused_ordering(877) 00:12:13.542 fused_ordering(878) 00:12:13.542 fused_ordering(879) 00:12:13.542 fused_ordering(880) 00:12:13.542 fused_ordering(881) 00:12:13.542 fused_ordering(882) 00:12:13.542 fused_ordering(883) 00:12:13.542 fused_ordering(884) 00:12:13.542 fused_ordering(885) 00:12:13.542 fused_ordering(886) 00:12:13.542 fused_ordering(887) 00:12:13.542 fused_ordering(888) 00:12:13.542 fused_ordering(889) 00:12:13.542 fused_ordering(890) 00:12:13.542 fused_ordering(891) 00:12:13.542 fused_ordering(892) 00:12:13.542 fused_ordering(893) 00:12:13.542 fused_ordering(894) 00:12:13.542 fused_ordering(895) 00:12:13.542 fused_ordering(896) 00:12:13.542 fused_ordering(897) 00:12:13.542 fused_ordering(898) 00:12:13.542 fused_ordering(899) 00:12:13.542 fused_ordering(900) 00:12:13.542 fused_ordering(901) 00:12:13.542 fused_ordering(902) 00:12:13.542 fused_ordering(903) 00:12:13.542 fused_ordering(904) 00:12:13.542 fused_ordering(905) 00:12:13.542 fused_ordering(906) 00:12:13.542 fused_ordering(907) 00:12:13.542 fused_ordering(908) 00:12:13.542 fused_ordering(909) 00:12:13.542 fused_ordering(910) 00:12:13.542 fused_ordering(911) 00:12:13.542 fused_ordering(912) 00:12:13.542 fused_ordering(913) 00:12:13.542 fused_ordering(914) 00:12:13.542 fused_ordering(915) 00:12:13.542 fused_ordering(916) 00:12:13.542 fused_ordering(917) 00:12:13.542 fused_ordering(918) 00:12:13.542 fused_ordering(919) 00:12:13.542 fused_ordering(920) 00:12:13.542 fused_ordering(921) 00:12:13.542 fused_ordering(922) 00:12:13.542 fused_ordering(923) 00:12:13.542 fused_ordering(924) 00:12:13.542 fused_ordering(925) 00:12:13.542 fused_ordering(926) 00:12:13.542 fused_ordering(927) 00:12:13.542 fused_ordering(928) 00:12:13.542 fused_ordering(929) 00:12:13.542 fused_ordering(930) 00:12:13.542 fused_ordering(931) 00:12:13.542 fused_ordering(932) 00:12:13.542 fused_ordering(933) 00:12:13.542 fused_ordering(934) 00:12:13.542 fused_ordering(935) 00:12:13.542 
fused_ordering(936) 00:12:13.542 fused_ordering(937) 00:12:13.542 fused_ordering(938) 00:12:13.542 fused_ordering(939) 00:12:13.542 fused_ordering(940) 00:12:13.542 fused_ordering(941) 00:12:13.542 fused_ordering(942) 00:12:13.542 fused_ordering(943) 00:12:13.542 fused_ordering(944) 00:12:13.542 fused_ordering(945) 00:12:13.542 fused_ordering(946) 00:12:13.542 fused_ordering(947) 00:12:13.542 fused_ordering(948) 00:12:13.542 fused_ordering(949) 00:12:13.542 fused_ordering(950) 00:12:13.542 fused_ordering(951) 00:12:13.542 fused_ordering(952) 00:12:13.542 fused_ordering(953) 00:12:13.542 fused_ordering(954) 00:12:13.542 fused_ordering(955) 00:12:13.542 fused_ordering(956) 00:12:13.542 fused_ordering(957) 00:12:13.542 fused_ordering(958) 00:12:13.542 fused_ordering(959) 00:12:13.542 fused_ordering(960) 00:12:13.542 fused_ordering(961) 00:12:13.542 fused_ordering(962) 00:12:13.542 fused_ordering(963) 00:12:13.542 fused_ordering(964) 00:12:13.542 fused_ordering(965) 00:12:13.542 fused_ordering(966) 00:12:13.542 fused_ordering(967) 00:12:13.542 fused_ordering(968) 00:12:13.542 fused_ordering(969) 00:12:13.542 fused_ordering(970) 00:12:13.542 fused_ordering(971) 00:12:13.542 fused_ordering(972) 00:12:13.542 fused_ordering(973) 00:12:13.542 fused_ordering(974) 00:12:13.542 fused_ordering(975) 00:12:13.542 fused_ordering(976) 00:12:13.542 fused_ordering(977) 00:12:13.542 fused_ordering(978) 00:12:13.542 fused_ordering(979) 00:12:13.542 fused_ordering(980) 00:12:13.542 fused_ordering(981) 00:12:13.542 fused_ordering(982) 00:12:13.542 fused_ordering(983) 00:12:13.542 fused_ordering(984) 00:12:13.542 fused_ordering(985) 00:12:13.542 fused_ordering(986) 00:12:13.542 fused_ordering(987) 00:12:13.542 fused_ordering(988) 00:12:13.542 fused_ordering(989) 00:12:13.542 fused_ordering(990) 00:12:13.542 fused_ordering(991) 00:12:13.542 fused_ordering(992) 00:12:13.542 fused_ordering(993) 00:12:13.542 fused_ordering(994) 00:12:13.542 fused_ordering(995) 00:12:13.542 fused_ordering(996) 
00:12:13.542 fused_ordering(997) 00:12:13.542 fused_ordering(998) 00:12:13.542 fused_ordering(999) 00:12:13.542 fused_ordering(1000) 00:12:13.542 fused_ordering(1001) 00:12:13.542 fused_ordering(1002) 00:12:13.542 fused_ordering(1003) 00:12:13.542 fused_ordering(1004) 00:12:13.542 fused_ordering(1005) 00:12:13.542 fused_ordering(1006) 00:12:13.542 fused_ordering(1007) 00:12:13.542 fused_ordering(1008) 00:12:13.542 fused_ordering(1009) 00:12:13.542 fused_ordering(1010) 00:12:13.542 fused_ordering(1011) 00:12:13.542 fused_ordering(1012) 00:12:13.542 fused_ordering(1013) 00:12:13.542 fused_ordering(1014) 00:12:13.542 fused_ordering(1015) 00:12:13.542 fused_ordering(1016) 00:12:13.542 fused_ordering(1017) 00:12:13.542 fused_ordering(1018) 00:12:13.542 fused_ordering(1019) 00:12:13.542 fused_ordering(1020) 00:12:13.542 fused_ordering(1021) 00:12:13.542 fused_ordering(1022) 00:12:13.542 fused_ordering(1023) 00:12:13.542 16:20:40 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:12:13.542 16:20:40 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:12:13.542 16:20:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:13.542 16:20:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@117 -- # sync 00:12:13.542 16:20:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:13.542 16:20:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@120 -- # set +e 00:12:13.542 16:20:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:13.542 16:20:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:13.542 rmmod nvme_tcp 00:12:13.542 rmmod nvme_fabrics 00:12:13.542 rmmod nvme_keyring 00:12:13.542 16:20:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:13.542 16:20:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set -e 00:12:13.542 16:20:40 
nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@125 -- # return 0 00:12:13.542 16:20:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@489 -- # '[' -n 2986091 ']' 00:12:13.542 16:20:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@490 -- # killprocess 2986091 00:12:13.542 16:20:40 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@949 -- # '[' -z 2986091 ']' 00:12:13.542 16:20:40 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # kill -0 2986091 00:12:13.542 16:20:40 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # uname 00:12:13.542 16:20:40 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:12:13.542 16:20:40 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2986091 00:12:13.542 16:20:40 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:12:13.542 16:20:40 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:12:13.542 16:20:40 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2986091' 00:12:13.542 killing process with pid 2986091 00:12:13.542 16:20:40 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@968 -- # kill 2986091 00:12:13.542 16:20:40 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@973 -- # wait 2986091 00:12:13.816 16:20:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:13.816 16:20:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:13.816 16:20:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:13.816 16:20:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:13.816 16:20:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:13.816 16:20:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:12:13.816 16:20:40 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:13.816 16:20:40 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:15.760 16:20:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:15.760 00:12:15.760 real 0m13.019s 00:12:15.760 user 0m7.108s 00:12:15.760 sys 0m6.820s 00:12:15.760 16:20:42 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1125 -- # xtrace_disable 00:12:15.760 16:20:42 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:15.760 ************************************ 00:12:15.760 END TEST nvmf_fused_ordering 00:12:15.760 ************************************ 00:12:15.760 16:20:42 nvmf_tcp -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:12:15.760 16:20:42 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:12:15.760 16:20:42 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:12:15.760 16:20:42 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:15.760 ************************************ 00:12:15.760 START TEST nvmf_delete_subsystem 00:12:15.760 ************************************ 00:12:15.760 16:20:42 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:12:16.021 * Looking for test storage... 
00:12:16.021 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:16.021 16:20:42 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:16.021 16:20:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:12:16.021 16:20:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:16.021 16:20:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:16.021 16:20:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:16.021 16:20:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:16.021 16:20:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:16.021 16:20:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:16.021 16:20:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:16.021 16:20:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:16.021 16:20:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:16.021 16:20:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:16.021 16:20:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:16.021 16:20:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:16.021 16:20:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:16.021 16:20:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:16.021 16:20:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # 
NET_TYPE=phy 00:12:16.021 16:20:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:16.021 16:20:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:16.021 16:20:42 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:16.021 16:20:42 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:16.021 16:20:42 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:16.021 16:20:42 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:16.021 16:20:42 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:16.021 16:20:42 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:16.021 16:20:42 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:12:16.021 16:20:42 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:16.021 16:20:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@47 -- # : 0 00:12:16.021 16:20:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:16.021 16:20:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:16.021 16:20:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:16.021 16:20:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:16.021 16:20:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:16.021 16:20:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:16.021 16:20:42 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:16.021 16:20:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:16.022 16:20:42 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:12:16.022 16:20:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:16.022 16:20:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:16.022 16:20:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:16.022 16:20:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:16.022 16:20:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:16.022 16:20:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:16.022 16:20:42 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:16.022 16:20:42 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:16.022 16:20:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:16.022 16:20:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:16.022 16:20:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@285 -- # xtrace_disable 00:12:16.022 16:20:42 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:24.166 16:20:49 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:24.166 16:20:49 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # pci_devs=() 00:12:24.166 16:20:49 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:24.166 16:20:49 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:24.166 16:20:49 nvmf_tcp.nvmf_delete_subsystem -- 
nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:24.166 16:20:49 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:24.166 16:20:49 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:24.166 16:20:49 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # net_devs=() 00:12:24.166 16:20:49 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:24.166 16:20:49 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # e810=() 00:12:24.166 16:20:49 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # local -ga e810 00:12:24.166 16:20:49 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # x722=() 00:12:24.166 16:20:49 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # local -ga x722 00:12:24.166 16:20:49 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # mlx=() 00:12:24.166 16:20:49 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # local -ga mlx 00:12:24.166 16:20:49 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:24.166 16:20:49 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:24.166 16:20:49 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:24.166 16:20:49 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:24.166 16:20:49 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:24.166 16:20:49 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:24.166 16:20:49 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:24.166 16:20:49 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:24.166 16:20:49 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:24.166 16:20:49 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:24.166 16:20:49 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:24.166 16:20:49 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:24.166 16:20:49 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:24.166 16:20:49 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:24.166 16:20:49 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:24.166 16:20:49 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:24.166 16:20:49 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:24.166 16:20:49 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:24.166 16:20:49 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:12:24.166 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:12:24.166 16:20:49 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:24.166 16:20:49 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:24.166 16:20:49 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:24.166 16:20:49 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:24.166 16:20:49 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:24.166 16:20:49 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:24.166 16:20:49 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:12:24.166 Found 
0000:4b:00.1 (0x8086 - 0x159b) 00:12:24.166 16:20:49 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:24.166 16:20:49 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:24.166 16:20:49 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:24.166 16:20:49 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:24.166 16:20:49 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:24.166 16:20:49 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:24.166 16:20:49 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:24.166 16:20:49 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:24.166 16:20:49 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:24.166 16:20:49 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:24.166 16:20:49 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:24.166 16:20:49 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:24.166 16:20:49 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:24.166 16:20:49 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:24.166 16:20:49 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:24.166 16:20:49 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:12:24.166 Found net devices under 0000:4b:00.0: cvl_0_0 00:12:24.166 16:20:49 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:24.166 16:20:49 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in 
"${pci_devs[@]}" 00:12:24.166 16:20:49 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:24.166 16:20:49 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:24.166 16:20:49 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:24.166 16:20:49 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:24.166 16:20:49 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:24.166 16:20:49 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:24.166 16:20:49 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:12:24.166 Found net devices under 0000:4b:00.1: cvl_0_1 00:12:24.166 16:20:49 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:24.166 16:20:49 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:24.167 16:20:49 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # is_hw=yes 00:12:24.167 16:20:49 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:24.167 16:20:49 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:24.167 16:20:49 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:24.167 16:20:49 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:24.167 16:20:49 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:24.167 16:20:49 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:24.167 16:20:49 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:24.167 16:20:49 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:24.167 
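The discovery loop traced above maps each supported NIC's PCI address to its kernel net devices by globbing sysfs (`pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)`), which is how `0000:4b:00.0` resolves to `cvl_0_0`. A minimal sketch of that lookup, using a throwaway directory in place of the real sysfs tree so it runs without the E810 hardware; the path layout and the `cvl_0_0` name come from the log, while the `pci_to_netdevs` helper name is ours:

```shell
# Sketch of the sysfs lookup nvmf/common.sh performs; $root stands in for
# /sys/bus/pci/devices so this can run on a machine without the NIC.
root=$(mktemp -d)
mkdir -p "$root/0000:4b:00.0/net/cvl_0_0"   # fake the device seen in the log

pci_to_netdevs() {   # hypothetical helper: print net devices under one PCI addr
    local d
    for d in "$root/$1/net/"*; do
        [ -e "$d" ] && echo "${d##*/}"
    done
}

pci_to_netdevs 0000:4b:00.0   # prints: cvl_0_0
```

An address with no `net/` children simply yields no output, which is what the `(( 1 == 0 ))` guard in the trace checks for.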
16:20:49 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:24.167 16:20:49 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:24.167 16:20:49 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:24.167 16:20:49 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:24.167 16:20:49 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:24.167 16:20:49 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:24.167 16:20:49 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:24.167 16:20:49 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:24.167 16:20:49 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:24.167 16:20:49 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:24.167 16:20:49 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:24.167 16:20:49 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:24.167 16:20:49 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:24.167 16:20:49 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:24.167 16:20:49 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:24.167 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:24.167 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.625 ms 00:12:24.167 00:12:24.167 --- 10.0.0.2 ping statistics --- 00:12:24.167 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:24.167 rtt min/avg/max/mdev = 0.625/0.625/0.625/0.000 ms 00:12:24.167 16:20:49 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:24.167 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:24.167 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.334 ms 00:12:24.167 00:12:24.167 --- 10.0.0.1 ping statistics --- 00:12:24.167 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:24.167 rtt min/avg/max/mdev = 0.334/0.334/0.334/0.000 ms 00:12:24.167 16:20:49 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:24.167 16:20:49 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # return 0 00:12:24.167 16:20:49 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:24.167 16:20:49 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:24.167 16:20:49 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:24.167 16:20:49 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:24.167 16:20:49 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:24.167 16:20:49 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:24.167 16:20:49 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:24.167 16:20:49 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:12:24.167 16:20:49 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:24.167 16:20:49 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@723 -- # xtrace_disable 00:12:24.167 
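The `nvmf_tcp_init` steps traced above move one NIC port (`cvl_0_0`) into a private network namespace and leave its sibling (`cvl_0_1`) in the root namespace, so target and initiator get separate network stacks on a single host and the two pings verify reachability in both directions. A dry-run sketch of that topology, with interface names, addresses, and port taken from the log; `RUN=echo` only prints the commands, so drop it and run as root to actually apply them:

```shell
RUN=echo                       # dry run: print each command instead of executing
NS=cvl_0_0_ns_spdk             # target namespace name from the log
TGT_IF=cvl_0_0 INI_IF=cvl_0_1  # the two ports of the NIC

$RUN ip netns add "$NS"
$RUN ip link set "$TGT_IF" netns "$NS"                          # target side
$RUN ip addr add 10.0.0.1/24 dev "$INI_IF"                      # initiator IP
$RUN ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"  # target IP
$RUN ip link set "$INI_IF" up
$RUN ip netns exec "$NS" ip link set "$TGT_IF" up
$RUN ip netns exec "$NS" ip link set lo up
$RUN iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
```

With the target interface inside the namespace, every target-side command in the rest of the log is prefixed with `ip netns exec cvl_0_0_ns_spdk`, which is exactly what `NVMF_TARGET_NS_CMD` holds.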
16:20:49 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:24.167 16:20:49 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # nvmfpid=2990918 00:12:24.167 16:20:49 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # waitforlisten 2990918 00:12:24.167 16:20:49 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:12:24.167 16:20:49 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@830 -- # '[' -z 2990918 ']' 00:12:24.167 16:20:49 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:24.167 16:20:49 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # local max_retries=100 00:12:24.167 16:20:49 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:24.167 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:24.167 16:20:49 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # xtrace_disable 00:12:24.167 16:20:49 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:24.167 [2024-06-07 16:20:49.878336] Starting SPDK v24.09-pre git sha1 5a57befde / DPDK 24.03.0 initialization... 00:12:24.167 [2024-06-07 16:20:49.878400] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:24.167 EAL: No free 2048 kB hugepages reported on node 1 00:12:24.167 [2024-06-07 16:20:49.950824] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:12:24.167 [2024-06-07 16:20:50.027375] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:12:24.167 [2024-06-07 16:20:50.027420] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:24.167 [2024-06-07 16:20:50.027429] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:24.167 [2024-06-07 16:20:50.027436] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:24.167 [2024-06-07 16:20:50.027442] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:24.167 [2024-06-07 16:20:50.027621] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:12:24.167 [2024-06-07 16:20:50.027724] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:12:24.167 16:20:50 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:12:24.167 16:20:50 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@863 -- # return 0 00:12:24.167 16:20:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:24.167 16:20:50 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@729 -- # xtrace_disable 00:12:24.167 16:20:50 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:24.167 16:20:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:24.167 16:20:50 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:24.167 16:20:50 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:24.167 16:20:50 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:24.167 [2024-06-07 16:20:50.691992] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:24.167 16:20:50 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 
00:12:24.167 16:20:50 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:12:24.167 16:20:50 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:24.167 16:20:50 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:24.167 16:20:50 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:24.167 16:20:50 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:24.167 16:20:50 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:24.167 16:20:50 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:24.167 [2024-06-07 16:20:50.716184] tcp.c: 982:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:24.167 16:20:50 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:24.167 16:20:50 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:12:24.167 16:20:50 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:24.167 16:20:50 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:24.167 NULL1 00:12:24.167 16:20:50 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:24.167 16:20:50 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:12:24.167 16:20:50 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:24.167 16:20:50 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:24.167 Delay0 00:12:24.167 16:20:50 
nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:24.167 16:20:50 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:24.167 16:20:50 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:24.167 16:20:50 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:24.167 16:20:50 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:24.167 16:20:50 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=2991129 00:12:24.167 16:20:50 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:12:24.167 16:20:50 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:12:24.167 EAL: No free 2048 kB hugepages reported on node 1 00:12:24.167 [2024-06-07 16:20:50.812829] subsystem.c:1570:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
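The test body traced above is a short RPC sequence: create a TCP transport, a subsystem capped at 10 namespaces, a listener on 10.0.0.2:4420, and a delay bdev layered over a null bdev so that I/O submitted by `spdk_nvme_perf` is still in flight when the subsystem is deleted mid-run. A dry-run sketch of the same sequence against a standalone target (`RPC="echo …"` just prints each call; point it at `scripts/rpc.py` with a running `nvmf_tgt` to execute — in the log, the `rpc_cmd` wrapper plays this role):

```shell
RPC="echo scripts/rpc.py"     # dry run; drop the echo against a live nvmf_tgt
NQN=nqn.2016-06.io.spdk:cnode1

$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC nvmf_create_subsystem "$NQN" -a -s SPDK00000000000001 -m 10
$RPC nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
$RPC bdev_null_create NULL1 1000 512                 # 1000 MiB, 512 B blocks
$RPC bdev_delay_create -b NULL1 -d Delay0 \
     -r 1000000 -t 1000000 -w 1000000 -n 1000000     # latencies in microseconds
$RPC nvmf_subsystem_add_ns "$NQN" Delay0

# ... start spdk_nvme_perf against $NQN, then tear it down under load:
$RPC nvmf_delete_subsystem "$NQN"
```

The delay bdev's one-second latencies guarantee outstanding commands at deletion time, which is why the trace that follows is full of `Read/Write completed with error (sct=0, sc=8)` completions: the queued I/O is aborted rather than completed when the subsystem goes away.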
00:12:26.082 16:20:52 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:26.082 16:20:52 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:26.082 16:20:52 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:26.343 Write completed with error (sct=0, sc=8) 00:12:26.343 Read completed with error (sct=0, sc=8) 00:12:26.343 Read completed with error (sct=0, sc=8) 00:12:26.343 starting I/O failed: -6 00:12:26.343 Read completed with error (sct=0, sc=8) 00:12:26.343 Read completed with error (sct=0, sc=8) 00:12:26.343 Read completed with error (sct=0, sc=8) 00:12:26.343 Read completed with error (sct=0, sc=8) 00:12:26.343 starting I/O failed: -6 00:12:26.343 Write completed with error (sct=0, sc=8) 00:12:26.343 Write completed with error (sct=0, sc=8) 00:12:26.343 Read completed with error (sct=0, sc=8) 00:12:26.343 Write completed with error (sct=0, sc=8) 00:12:26.343 starting I/O failed: -6 00:12:26.343 Read completed with error (sct=0, sc=8) 00:12:26.343 Read completed with error (sct=0, sc=8) 00:12:26.343 Read completed with error (sct=0, sc=8) 00:12:26.343 Read completed with error (sct=0, sc=8) 00:12:26.343 starting I/O failed: -6 00:12:26.343 Read completed with error (sct=0, sc=8) 00:12:26.343 Read completed with error (sct=0, sc=8) 00:12:26.343 Write completed with error (sct=0, sc=8) 00:12:26.343 Write completed with error (sct=0, sc=8) 00:12:26.343 starting I/O failed: -6 00:12:26.343 Write completed with error (sct=0, sc=8) 00:12:26.343 Read completed with error (sct=0, sc=8) 00:12:26.343 Write completed with error (sct=0, sc=8) 00:12:26.343 Read completed with error (sct=0, sc=8) 00:12:26.343 starting I/O failed: -6 00:12:26.343 Write completed with error (sct=0, sc=8) 00:12:26.343 Read completed with error (sct=0, sc=8) 00:12:26.343 Read completed with error (sct=0, sc=8) 00:12:26.343 Write completed with error 
(sct=0, sc=8) 00:12:26.343 starting I/O failed: -6 00:12:26.343 Read completed with error (sct=0, sc=8) 00:12:26.343 Read completed with error (sct=0, sc=8) 00:12:26.343 Read completed with error (sct=0, sc=8) 00:12:26.343 Write completed with error (sct=0, sc=8) 00:12:26.343 starting I/O failed: -6 00:12:26.343 Read completed with error (sct=0, sc=8) 00:12:26.343 Read completed with error (sct=0, sc=8) 00:12:26.343 Read completed with error (sct=0, sc=8) 00:12:26.343 Read completed with error (sct=0, sc=8) 00:12:26.343 starting I/O failed: -6 00:12:26.343 Read completed with error (sct=0, sc=8) 00:12:26.343 Read completed with error (sct=0, sc=8) 00:12:26.343 Write completed with error (sct=0, sc=8) 00:12:26.343 Read completed with error (sct=0, sc=8) 00:12:26.343 starting I/O failed: -6 00:12:26.343 Read completed with error (sct=0, sc=8) 00:12:26.343 Read completed with error (sct=0, sc=8) 00:12:26.343 Write completed with error (sct=0, sc=8) 00:12:26.343 Write completed with error (sct=0, sc=8) 00:12:26.343 starting I/O failed: -6 00:12:26.343 Read completed with error (sct=0, sc=8) 00:12:26.343 Read completed with error (sct=0, sc=8) 00:12:26.343 Read completed with error (sct=0, sc=8) 00:12:26.343 Read completed with error (sct=0, sc=8) 00:12:26.343 starting I/O failed: -6 00:12:26.343 Write completed with error (sct=0, sc=8) 00:12:26.343 Read completed with error (sct=0, sc=8) 00:12:26.343 Write completed with error (sct=0, sc=8) 00:12:26.343 Read completed with error (sct=0, sc=8) 00:12:26.343 Write completed with error (sct=0, sc=8) 00:12:26.343 Read completed with error (sct=0, sc=8) 00:12:26.343 Read completed with error (sct=0, sc=8) 00:12:26.343 Read completed with error (sct=0, sc=8) 00:12:26.343 Read completed with error (sct=0, sc=8) 00:12:26.343 Write completed with error (sct=0, sc=8) 00:12:26.343 Read completed with error (sct=0, sc=8) 00:12:26.343 Write completed with error (sct=0, sc=8) 00:12:26.343 Write completed with error (sct=0, sc=8) 
00:12:26.343 Read completed with error (sct=0, sc=8) 00:12:26.343 Read completed with error (sct=0, sc=8) 00:12:26.343 Read completed with error (sct=0, sc=8) 00:12:26.344 Read completed with error (sct=0, sc=8) 00:12:26.344 Read completed with error (sct=0, sc=8) 00:12:26.344 Read completed with error (sct=0, sc=8) 00:12:26.344 Read completed with error (sct=0, sc=8) 00:12:26.344 Read completed with error (sct=0, sc=8) 00:12:26.344 Write completed with error (sct=0, sc=8) 00:12:26.344 Read completed with error (sct=0, sc=8) 00:12:26.344 Read completed with error (sct=0, sc=8) 00:12:26.344 Read completed with error (sct=0, sc=8) 00:12:26.344 Write completed with error (sct=0, sc=8) 00:12:26.344 Read completed with error (sct=0, sc=8) 00:12:26.344 Read completed with error (sct=0, sc=8) 00:12:26.344 Write completed with error (sct=0, sc=8) 00:12:26.344 Write completed with error (sct=0, sc=8) 00:12:26.344 Read completed with error (sct=0, sc=8) 00:12:26.344 Write completed with error (sct=0, sc=8) 00:12:26.344 Write completed with error (sct=0, sc=8) 00:12:26.344 Read completed with error (sct=0, sc=8) 00:12:26.344 Read completed with error (sct=0, sc=8) 00:12:26.344 Read completed with error (sct=0, sc=8) 00:12:26.344 Read completed with error (sct=0, sc=8) 00:12:26.344 Read completed with error (sct=0, sc=8) 00:12:26.344 Read completed with error (sct=0, sc=8) 00:12:26.344 Write completed with error (sct=0, sc=8) 00:12:26.344 Read completed with error (sct=0, sc=8) 00:12:26.344 Write completed with error (sct=0, sc=8) 00:12:26.344 Read completed with error (sct=0, sc=8) 00:12:26.344 Read completed with error (sct=0, sc=8) 00:12:26.344 Write completed with error (sct=0, sc=8) 00:12:26.344 Write completed with error (sct=0, sc=8) 00:12:26.344 Write completed with error (sct=0, sc=8) 00:12:26.344 Read completed with error (sct=0, sc=8) 00:12:26.344 Write completed with error (sct=0, sc=8) 00:12:26.344 Write completed with error (sct=0, sc=8) 00:12:26.344 Read 
completed with error (sct=0, sc=8) 00:12:26.344 Read completed with error (sct=0, sc=8) 00:12:26.344 Write completed with error (sct=0, sc=8) 00:12:26.344 Write completed with error (sct=0, sc=8) 00:12:26.344 Write completed with error (sct=0, sc=8) 00:12:26.344 Read completed with error (sct=0, sc=8) 00:12:26.344 Write completed with error (sct=0, sc=8) 00:12:26.344 Read completed with error (sct=0, sc=8) 00:12:26.344 Read completed with error (sct=0, sc=8) 00:12:26.344 Read completed with error (sct=0, sc=8) 00:12:26.344 [2024-06-07 16:20:53.099052] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcec040 is same with the state(5) to be set 00:12:26.344 Write completed with error (sct=0, sc=8) 00:12:26.344 Read completed with error (sct=0, sc=8) 00:12:26.344 Read completed with error (sct=0, sc=8) 00:12:26.344 starting I/O failed: -6 00:12:26.344 Read completed with error (sct=0, sc=8) 00:12:26.344 Read completed with error (sct=0, sc=8) 00:12:26.344 Write completed with error (sct=0, sc=8) 00:12:26.344 Read completed with error (sct=0, sc=8) 00:12:26.344 starting I/O failed: -6 00:12:26.344 Read completed with error (sct=0, sc=8) 00:12:26.344 Read completed with error (sct=0, sc=8) 00:12:26.344 Read completed with error (sct=0, sc=8) 00:12:26.344 Write completed with error (sct=0, sc=8) 00:12:26.344 starting I/O failed: -6 00:12:26.344 Read completed with error (sct=0, sc=8) 00:12:26.344 Read completed with error (sct=0, sc=8) 00:12:26.344 Read completed with error (sct=0, sc=8) 00:12:26.344 Read completed with error (sct=0, sc=8) 00:12:26.344 starting I/O failed: -6 00:12:26.344 Write completed with error (sct=0, sc=8) 00:12:26.344 Read completed with error (sct=0, sc=8) 00:12:26.344 Read completed with error (sct=0, sc=8) 00:12:26.344 Read completed with error (sct=0, sc=8) 00:12:26.344 starting I/O failed: -6 00:12:26.344 Read completed with error (sct=0, sc=8) 00:12:26.344 Read completed with error (sct=0, sc=8) 00:12:26.344 
Write completed with error (sct=0, sc=8) 00:12:26.344 Read completed with error (sct=0, sc=8) 00:12:26.344 starting I/O failed: -6 00:12:26.344 Read completed with error (sct=0, sc=8) 00:12:26.344 Read completed with error (sct=0, sc=8) 00:12:26.344 Read completed with error (sct=0, sc=8) 00:12:26.344 Read completed with error (sct=0, sc=8) 00:12:26.344 starting I/O failed: -6 00:12:26.344 Read completed with error (sct=0, sc=8) 00:12:26.344 Read completed with error (sct=0, sc=8) 00:12:26.344 Read completed with error (sct=0, sc=8) 00:12:26.344 Read completed with error (sct=0, sc=8) 00:12:26.344 starting I/O failed: -6 00:12:26.344 Read completed with error (sct=0, sc=8) 00:12:26.344 Read completed with error (sct=0, sc=8) 00:12:26.344 [2024-06-07 16:20:53.103446] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f15f000c470 is same with the state(5) to be set 00:12:26.344 Read completed with error (sct=0, sc=8) 00:12:26.344 Read completed with error (sct=0, sc=8) 00:12:26.344 Read completed with error (sct=0, sc=8) 00:12:26.344 Read completed with error (sct=0, sc=8) 00:12:26.344 Read completed with error (sct=0, sc=8) 00:12:26.344 Read completed with error (sct=0, sc=8) 00:12:26.344 Write completed with error (sct=0, sc=8) 00:12:26.344 Write completed with error (sct=0, sc=8) 00:12:26.344 Write completed with error (sct=0, sc=8) 00:12:26.344 Read completed with error (sct=0, sc=8) 00:12:26.344 Read completed with error (sct=0, sc=8) 00:12:26.344 Read completed with error (sct=0, sc=8) 00:12:26.344 Read completed with error (sct=0, sc=8) 00:12:26.344 Read completed with error (sct=0, sc=8) 00:12:26.344 Read completed with error (sct=0, sc=8) 00:12:26.344 Read completed with error (sct=0, sc=8) 00:12:26.344 Read completed with error (sct=0, sc=8) 00:12:26.344 Read completed with error (sct=0, sc=8) 00:12:26.344 Write completed with error (sct=0, sc=8) 00:12:26.344 Read completed with error (sct=0, sc=8) 00:12:26.344 Read 
completed with error (sct=0, sc=8) 00:12:26.344 Read completed with error (sct=0, sc=8) 00:12:26.344 Read completed with error (sct=0, sc=8) 00:12:26.344 Read completed with error (sct=0, sc=8) 00:12:26.344 Read completed with error (sct=0, sc=8) 00:12:26.344 Read completed with error (sct=0, sc=8) 00:12:26.344 Write completed with error (sct=0, sc=8) 00:12:26.344 Write completed with error (sct=0, sc=8) 00:12:26.344 Read completed with error (sct=0, sc=8) 00:12:26.344 Write completed with error (sct=0, sc=8) 00:12:26.344 Read completed with error (sct=0, sc=8) 00:12:26.344 Read completed with error (sct=0, sc=8) 00:12:26.344 Read completed with error (sct=0, sc=8) 00:12:26.344 Read completed with error (sct=0, sc=8) 00:12:26.344 Read completed with error (sct=0, sc=8) 00:12:26.344 Read completed with error (sct=0, sc=8) 00:12:26.344 Read completed with error (sct=0, sc=8) 00:12:26.344 Read completed with error (sct=0, sc=8) 00:12:26.344 Write completed with error (sct=0, sc=8) 00:12:26.344 Read completed with error (sct=0, sc=8) 00:12:26.344 Write completed with error (sct=0, sc=8) 00:12:26.344 Read completed with error (sct=0, sc=8) 00:12:27.291 [2024-06-07 16:20:54.075477] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xccb550 is same with the state(5) to be set 00:12:27.292 Read completed with error (sct=0, sc=8) 00:12:27.292 Write completed with error (sct=0, sc=8) 00:12:27.292 Read completed with error (sct=0, sc=8) 00:12:27.292 Write completed with error (sct=0, sc=8) 00:12:27.292 Read completed with error (sct=0, sc=8) 00:12:27.292 Write completed with error (sct=0, sc=8) 00:12:27.292 Read completed with error (sct=0, sc=8) 00:12:27.292 Read completed with error (sct=0, sc=8) 00:12:27.292 Write completed with error (sct=0, sc=8) 00:12:27.292 Read completed with error (sct=0, sc=8) 00:12:27.292 Read completed with error (sct=0, sc=8) 00:12:27.292 Write completed with error (sct=0, sc=8) 00:12:27.292 Write completed with 
error (sct=0, sc=8) 00:12:27.292 Read completed with error (sct=0, sc=8) 00:12:27.292 Write completed with error (sct=0, sc=8) 00:12:27.292 Read completed with error (sct=0, sc=8) 00:12:27.292 Read completed with error (sct=0, sc=8) 00:12:27.292 Read completed with error (sct=0, sc=8) 00:12:27.292 Read completed with error (sct=0, sc=8) 00:12:27.292 Read completed with error (sct=0, sc=8) 00:12:27.292 Read completed with error (sct=0, sc=8) 00:12:27.292 Write completed with error (sct=0, sc=8) 00:12:27.292 Read completed with error (sct=0, sc=8) 00:12:27.292 Read completed with error (sct=0, sc=8) 00:12:27.292 Read completed with error (sct=0, sc=8) 00:12:27.292 Read completed with error (sct=0, sc=8) 00:12:27.292 Read completed with error (sct=0, sc=8) 00:12:27.292 Read completed with error (sct=0, sc=8) 00:12:27.292 [2024-06-07 16:20:54.101189] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcebe60 is same with the state(5) to be set 00:12:27.292 Read completed with error (sct=0, sc=8) 00:12:27.292 Read completed with error (sct=0, sc=8) 00:12:27.292 Write completed with error (sct=0, sc=8) 00:12:27.292 Write completed with error (sct=0, sc=8) 00:12:27.292 Write completed with error (sct=0, sc=8) 00:12:27.292 Read completed with error (sct=0, sc=8) 00:12:27.292 Read completed with error (sct=0, sc=8) 00:12:27.292 Read completed with error (sct=0, sc=8) 00:12:27.292 Write completed with error (sct=0, sc=8) 00:12:27.292 Write completed with error (sct=0, sc=8) 00:12:27.292 Write completed with error (sct=0, sc=8) 00:12:27.292 Read completed with error (sct=0, sc=8) 00:12:27.292 Read completed with error (sct=0, sc=8) 00:12:27.292 Read completed with error (sct=0, sc=8) 00:12:27.292 Write completed with error (sct=0, sc=8) 00:12:27.292 Read completed with error (sct=0, sc=8) 00:12:27.292 Read completed with error (sct=0, sc=8) 00:12:27.292 Write completed with error (sct=0, sc=8) 00:12:27.292 Read completed with error (sct=0, sc=8) 
00:12:27.292 Read completed with error (sct=0, sc=8) 00:12:27.292 Write completed with error (sct=0, sc=8) 00:12:27.292 Read completed with error (sct=0, sc=8) 00:12:27.292 Read completed with error (sct=0, sc=8) 00:12:27.292 Read completed with error (sct=0, sc=8) 00:12:27.292 Write completed with error (sct=0, sc=8) 00:12:27.292 Read completed with error (sct=0, sc=8) 00:12:27.292 Read completed with error (sct=0, sc=8) 00:12:27.292 [2024-06-07 16:20:54.101480] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcec220 is same with the state(5) to be set 00:12:27.292 Read completed with error (sct=0, sc=8) 00:12:27.292 Read completed with error (sct=0, sc=8) 00:12:27.292 Read completed with error (sct=0, sc=8) 00:12:27.292 Read completed with error (sct=0, sc=8) 00:12:27.292 Read completed with error (sct=0, sc=8) 00:12:27.292 Read completed with error (sct=0, sc=8) 00:12:27.292 Read completed with error (sct=0, sc=8) 00:12:27.292 Write completed with error (sct=0, sc=8) 00:12:27.292 Read completed with error (sct=0, sc=8) 00:12:27.292 [2024-06-07 16:20:54.105933] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f15f000c780 is same with the state(5) to be set 00:12:27.292 Read completed with error (sct=0, sc=8) 00:12:27.292 Read completed with error (sct=0, sc=8) 00:12:27.292 Read completed with error (sct=0, sc=8) 00:12:27.292 Write completed with error (sct=0, sc=8) 00:12:27.292 Write completed with error (sct=0, sc=8) 00:12:27.292 Read completed with error (sct=0, sc=8) 00:12:27.292 Read completed with error (sct=0, sc=8) 00:12:27.292 Read completed with error (sct=0, sc=8) 00:12:27.292 Read completed with error (sct=0, sc=8) 00:12:27.292 Write completed with error (sct=0, sc=8) 00:12:27.292 Read completed with error (sct=0, sc=8) 00:12:27.292 Read completed with error (sct=0, sc=8) 00:12:27.292 Read completed with error (sct=0, sc=8) 00:12:27.292 Read completed with error (sct=0, sc=8) 
00:12:27.292 Read completed with error (sct=0, sc=8) 00:12:27.292 Write completed with error (sct=0, sc=8) 00:12:27.292 Read completed with error (sct=0, sc=8) 00:12:27.292 Read completed with error (sct=0, sc=8) 00:12:27.292 [2024-06-07 16:20:54.106007] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f15f000bfe0 is same with the state(5) to be set 00:12:27.292 Initializing NVMe Controllers 00:12:27.292 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:12:27.292 Controller IO queue size 128, less than required. 00:12:27.292 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:12:27.292 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:12:27.292 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:12:27.292 Initialization complete. Launching workers. 00:12:27.292 ======================================================== 00:12:27.292 Latency(us) 00:12:27.292 Device Information : IOPS MiB/s Average min max 00:12:27.292 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 176.36 0.09 881458.90 299.91 1006387.74 00:12:27.292 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 141.98 0.07 998392.17 218.45 2003265.12 00:12:27.292 ======================================================== 00:12:27.292 Total : 318.34 0.16 933612.24 218.45 2003265.12 00:12:27.292 00:12:27.292 [2024-06-07 16:20:54.106667] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xccb550 (9): Bad file descriptor 00:12:27.292 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:12:27.292 16:20:54 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:27.292 16:20:54 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 
00:12:27.292 16:20:54 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2991129 00:12:27.292 16:20:54 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:12:27.863 16:20:54 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:12:27.863 16:20:54 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2991129 00:12:27.863 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (2991129) - No such process 00:12:27.863 16:20:54 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 2991129 00:12:27.863 16:20:54 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@649 -- # local es=0 00:12:27.863 16:20:54 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # valid_exec_arg wait 2991129 00:12:27.863 16:20:54 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@637 -- # local arg=wait 00:12:27.863 16:20:54 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:12:27.863 16:20:54 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@641 -- # type -t wait 00:12:27.863 16:20:54 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:12:27.863 16:20:54 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # wait 2991129 00:12:27.863 16:20:54 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # es=1 00:12:27.863 16:20:54 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:12:27.863 16:20:54 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:12:27.863 16:20:54 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:12:27.863 16:20:54 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:12:27.863 16:20:54 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:27.863 16:20:54 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:27.863 16:20:54 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:27.863 16:20:54 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:27.863 16:20:54 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:27.863 16:20:54 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:27.863 [2024-06-07 16:20:54.638959] tcp.c: 982:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:27.863 16:20:54 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:27.863 16:20:54 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:27.864 16:20:54 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:27.864 16:20:54 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:27.864 16:20:54 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:27.864 16:20:54 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=2991814 00:12:27.864 16:20:54 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:12:27.864 16:20:54 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2991814 00:12:27.864 16:20:54 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 
-q 128 -w randrw -M 70 -o 512 -P 4 00:12:27.864 16:20:54 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:12:27.864 EAL: No free 2048 kB hugepages reported on node 1 00:12:27.864 [2024-06-07 16:20:54.703957] subsystem.c:1570:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:12:28.435 16:20:55 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:12:28.435 16:20:55 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2991814 00:12:28.435 16:20:55 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:12:29.006 16:20:55 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:12:29.006 16:20:55 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2991814 00:12:29.006 16:20:55 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:12:29.577 16:20:56 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:12:29.577 16:20:56 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2991814 00:12:29.577 16:20:56 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:12:29.837 16:20:56 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:12:29.837 16:20:56 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2991814 00:12:29.837 16:20:56 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:12:30.409 16:20:57 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:12:30.409 16:20:57 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2991814 
00:12:30.409 16:20:57 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:12:30.980 16:20:57 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:12:30.980 16:20:57 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2991814 00:12:30.980 16:20:57 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:12:31.240 Initializing NVMe Controllers 00:12:31.240 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:12:31.240 Controller IO queue size 128, less than required. 00:12:31.240 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:12:31.240 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:12:31.240 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:12:31.240 Initialization complete. Launching workers. 00:12:31.240 ======================================================== 00:12:31.240 Latency(us) 00:12:31.240 Device Information : IOPS MiB/s Average min max 00:12:31.240 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1002054.21 1000142.05 1007537.98 00:12:31.240 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1003050.11 1000201.01 1041015.28 00:12:31.240 ======================================================== 00:12:31.240 Total : 256.00 0.12 1002552.16 1000142.05 1041015.28 00:12:31.240 00:12:31.501 16:20:58 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:12:31.501 16:20:58 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2991814 00:12:31.501 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (2991814) - No such process 00:12:31.501 16:20:58 nvmf_tcp.nvmf_delete_subsystem -- 
target/delete_subsystem.sh@67 -- # wait 2991814 00:12:31.501 16:20:58 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:12:31.501 16:20:58 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:12:31.501 16:20:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:31.501 16:20:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # sync 00:12:31.501 16:20:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:31.501 16:20:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@120 -- # set +e 00:12:31.501 16:20:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:31.501 16:20:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:31.501 rmmod nvme_tcp 00:12:31.501 rmmod nvme_fabrics 00:12:31.501 rmmod nvme_keyring 00:12:31.501 16:20:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:31.501 16:20:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set -e 00:12:31.501 16:20:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # return 0 00:12:31.501 16:20:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # '[' -n 2990918 ']' 00:12:31.502 16:20:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # killprocess 2990918 00:12:31.502 16:20:58 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@949 -- # '[' -z 2990918 ']' 00:12:31.502 16:20:58 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # kill -0 2990918 00:12:31.502 16:20:58 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # uname 00:12:31.502 16:20:58 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:12:31.502 16:20:58 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2990918 00:12:31.502 16:20:58 
nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:12:31.502 16:20:58 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:12:31.502 16:20:58 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2990918' 00:12:31.502 killing process with pid 2990918 00:12:31.502 16:20:58 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@968 -- # kill 2990918 00:12:31.502 16:20:58 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # wait 2990918 00:12:31.763 16:20:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:31.763 16:20:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:31.763 16:20:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:31.763 16:20:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:31.763 16:20:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:31.763 16:20:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:31.763 16:20:58 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:31.763 16:20:58 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:33.676 16:21:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:33.676 00:12:33.676 real 0m17.926s 00:12:33.676 user 0m31.098s 00:12:33.676 sys 0m6.192s 00:12:33.676 16:21:00 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1125 -- # xtrace_disable 00:12:33.676 16:21:00 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:33.676 ************************************ 00:12:33.676 END TEST nvmf_delete_subsystem 00:12:33.676 
************************************ 00:12:33.937 16:21:00 nvmf_tcp -- nvmf/nvmf.sh@36 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:12:33.937 16:21:00 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:12:33.937 16:21:00 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:12:33.937 16:21:00 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:33.937 ************************************ 00:12:33.937 START TEST nvmf_ns_masking 00:12:33.937 ************************************ 00:12:33.937 16:21:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1124 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:12:33.938 * Looking for test storage... 00:12:33.938 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:33.938 16:21:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:33.938 16:21:00 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:12:33.938 16:21:00 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:33.938 16:21:00 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:33.938 16:21:00 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:33.938 16:21:00 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:33.938 16:21:00 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:33.938 16:21:00 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:33.938 16:21:00 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:33.938 16:21:00 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:33.938 16:21:00 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:33.938 16:21:00 nvmf_tcp.nvmf_ns_masking -- 
nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:33.938 16:21:00 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:33.938 16:21:00 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:33.938 16:21:00 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:33.938 16:21:00 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:33.938 16:21:00 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:33.938 16:21:00 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:33.938 16:21:00 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:33.938 16:21:00 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:33.938 16:21:00 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:33.938 16:21:00 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:33.938 16:21:00 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:33.938 16:21:00 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:33.938 16:21:00 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:33.938 16:21:00 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:12:33.938 16:21:00 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:33.938 16:21:00 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@47 -- # : 0 00:12:33.938 16:21:00 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@48 -- # export 
NVMF_APP_SHM_ID 00:12:33.938 16:21:00 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:33.938 16:21:00 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:33.938 16:21:00 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:33.938 16:21:00 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:33.938 16:21:00 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:33.938 16:21:00 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:33.938 16:21:00 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:33.938 16:21:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:33.938 16:21:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@11 -- # loops=5 00:12:33.938 16:21:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:12:33.938 16:21:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # HOSTNQN=nqn.2016-06.io.spdk:host1 00:12:33.938 16:21:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@15 -- # uuidgen 00:12:33.938 16:21:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@15 -- # HOSTID=68cc3941-b780-40b3-93d0-bdcdaca6519e 00:12:33.938 16:21:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvmftestinit 00:12:33.938 16:21:00 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:33.938 16:21:00 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:33.938 16:21:00 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:33.938 16:21:00 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:33.938 16:21:00 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:33.938 16:21:00 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:12:33.938 16:21:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:33.938 16:21:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:33.938 16:21:00 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:33.938 16:21:00 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:33.938 16:21:00 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@285 -- # xtrace_disable 00:12:33.938 16:21:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:42.084 16:21:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:42.084 16:21:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # pci_devs=() 00:12:42.084 16:21:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:42.084 16:21:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:42.084 16:21:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:42.084 16:21:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:42.084 16:21:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:42.084 16:21:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # net_devs=() 00:12:42.084 16:21:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:42.084 16:21:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # e810=() 00:12:42.084 16:21:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # local -ga e810 00:12:42.084 16:21:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # x722=() 00:12:42.084 16:21:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # local -ga x722 00:12:42.084 16:21:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # mlx=() 00:12:42.084 16:21:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # local -ga mlx 
00:12:42.084 16:21:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:42.084 16:21:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:42.084 16:21:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:42.084 16:21:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:42.084 16:21:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:42.084 16:21:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:42.084 16:21:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:42.084 16:21:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:42.084 16:21:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:42.084 16:21:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:42.084 16:21:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:42.084 16:21:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:42.084 16:21:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:42.084 16:21:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:42.084 16:21:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:42.084 16:21:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:42.084 16:21:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:42.084 16:21:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:42.084 16:21:07 
nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:12:42.084 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:12:42.084 16:21:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:42.084 16:21:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:42.084 16:21:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:42.084 16:21:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:42.084 16:21:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:42.084 16:21:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:42.084 16:21:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:12:42.084 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:12:42.084 16:21:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:42.084 16:21:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:42.084 16:21:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:42.084 16:21:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:42.084 16:21:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:42.084 16:21:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:42.084 16:21:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:42.084 16:21:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:42.084 16:21:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:42.084 16:21:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:42.084 16:21:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:42.084 
16:21:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:42.084 16:21:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:42.084 16:21:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:42.084 16:21:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:42.084 16:21:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:12:42.084 Found net devices under 0000:4b:00.0: cvl_0_0 00:12:42.084 16:21:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:42.084 16:21:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:42.084 16:21:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:42.084 16:21:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:42.084 16:21:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:42.084 16:21:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:42.084 16:21:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:42.084 16:21:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:42.084 16:21:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:12:42.084 Found net devices under 0000:4b:00.1: cvl_0_1 00:12:42.084 16:21:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:42.084 16:21:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:42.084 16:21:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # is_hw=yes 00:12:42.084 16:21:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:42.084 16:21:07 nvmf_tcp.nvmf_ns_masking 
-- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:42.084 16:21:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:42.085 16:21:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:42.085 16:21:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:42.085 16:21:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:42.085 16:21:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:42.085 16:21:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:42.085 16:21:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:42.085 16:21:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:42.085 16:21:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:42.085 16:21:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:42.085 16:21:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:42.085 16:21:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:42.085 16:21:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:42.085 16:21:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:42.085 16:21:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:42.085 16:21:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:42.085 16:21:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:42.085 16:21:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:42.085 
16:21:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:42.085 16:21:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:42.085 16:21:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:42.085 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:42.085 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.804 ms 00:12:42.085 00:12:42.085 --- 10.0.0.2 ping statistics --- 00:12:42.085 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:42.085 rtt min/avg/max/mdev = 0.804/0.804/0.804/0.000 ms 00:12:42.085 16:21:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:42.085 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:42.085 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.299 ms 00:12:42.085 00:12:42.085 --- 10.0.0.1 ping statistics --- 00:12:42.085 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:42.085 rtt min/avg/max/mdev = 0.299/0.299/0.299/0.000 ms 00:12:42.085 16:21:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:42.085 16:21:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@422 -- # return 0 00:12:42.085 16:21:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:42.085 16:21:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:42.085 16:21:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:42.085 16:21:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:42.085 16:21:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:42.085 16:21:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:42.085 16:21:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@474 -- # modprobe nvme-tcp 
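The network setup traced above splits one dual-port NIC into a target side and an initiator side: `cvl_0_0` is moved into a fresh namespace at 10.0.0.2 (where `nvmf_tgt` will run), while `cvl_0_1` stays in the root namespace at 10.0.0.1, with an iptables rule opening the NVMe/TCP port. The sketch below restates that sequence; `run()` only echoes here (the real `nvmf/common.sh` executes these with root privileges), so treat it as a readable summary, not a drop-in script.

```shell
# Stubbed sketch of the netns split performed by nvmf/common.sh above.
# run() just echoes the command; the real script executes it as root.
run() { echo "+ $*"; }

run ip netns add cvl_0_0_ns_spdk
run ip link set cvl_0_0 netns cvl_0_0_ns_spdk
run ip addr add 10.0.0.1/24 dev cvl_0_1
run ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
run ip link set cvl_0_1 up
run ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
run iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
```

The two `ping -c 1` checks in the log then confirm the 10.0.0.1 ↔ 10.0.0.2 path works in both directions before the target is started inside the namespace.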
00:12:42.085 16:21:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # nvmfappstart -m 0xF 00:12:42.085 16:21:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:42.085 16:21:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@723 -- # xtrace_disable 00:12:42.085 16:21:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:42.085 16:21:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@481 -- # nvmfpid=2996910 00:12:42.085 16:21:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@482 -- # waitforlisten 2996910 00:12:42.085 16:21:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:42.085 16:21:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@830 -- # '[' -z 2996910 ']' 00:12:42.085 16:21:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:42.085 16:21:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@835 -- # local max_retries=100 00:12:42.085 16:21:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:42.085 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:42.085 16:21:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@839 -- # xtrace_disable 00:12:42.085 16:21:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:42.085 [2024-06-07 16:21:07.785003] Starting SPDK v24.09-pre git sha1 5a57befde / DPDK 24.03.0 initialization... 
00:12:42.085 [2024-06-07 16:21:07.785053] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:42.085 EAL: No free 2048 kB hugepages reported on node 1 00:12:42.085 [2024-06-07 16:21:07.854201] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:42.085 [2024-06-07 16:21:07.923803] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:42.085 [2024-06-07 16:21:07.923837] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:42.085 [2024-06-07 16:21:07.923845] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:42.085 [2024-06-07 16:21:07.923851] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:42.085 [2024-06-07 16:21:07.923857] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:42.085 [2024-06-07 16:21:07.923992] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:12:42.085 [2024-06-07 16:21:07.924113] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 2 00:12:42.085 [2024-06-07 16:21:07.924270] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:12:42.085 [2024-06-07 16:21:07.924271] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 3 00:12:42.085 16:21:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:12:42.085 16:21:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@863 -- # return 0 00:12:42.085 16:21:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:42.085 16:21:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@729 -- # xtrace_disable 00:12:42.085 16:21:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:42.085 16:21:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:42.085 16:21:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:12:42.085 [2024-06-07 16:21:08.742427] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:42.085 16:21:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@49 -- # MALLOC_BDEV_SIZE=64 00:12:42.085 16:21:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@50 -- # MALLOC_BLOCK_SIZE=512 00:12:42.085 16:21:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:12:42.085 Malloc1 00:12:42.346 16:21:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:12:42.346 Malloc2 00:12:42.346 16:21:09 
nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:42.607 16:21:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:12:42.867 16:21:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:42.867 [2024-06-07 16:21:09.615659] tcp.c: 982:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:42.867 16:21:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@61 -- # connect 00:12:42.867 16:21:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 68cc3941-b780-40b3-93d0-bdcdaca6519e -a 10.0.0.2 -s 4420 -i 4 00:12:43.127 16:21:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 00:12:43.127 16:21:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1197 -- # local i=0 00:12:43.127 16:21:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:12:43.127 16:21:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:12:43.127 16:21:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # sleep 2 00:12:45.078 16:21:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:12:45.078 16:21:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:12:45.078 16:21:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:12:45.078 16:21:11 nvmf_tcp.nvmf_ns_masking -- 
common/autotest_common.sh@1206 -- # nvme_devices=1 00:12:45.078 16:21:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:12:45.078 16:21:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # return 0 00:12:45.078 16:21:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:12:45.078 16:21:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:12:45.078 16:21:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:12:45.078 16:21:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:12:45.078 16:21:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@62 -- # ns_is_visible 0x1 00:12:45.078 16:21:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:12:45.078 16:21:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:12:45.078 [ 0]:0x1 00:12:45.078 16:21:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:45.078 16:21:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:12:45.339 16:21:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=5f37e5da7d664b7e9ce54e59834a3607 00:12:45.339 16:21:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 5f37e5da7d664b7e9ce54e59834a3607 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:45.339 16:21:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:12:45.339 16:21:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@66 -- # ns_is_visible 0x1 00:12:45.339 16:21:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:12:45.339 16:21:12 
nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:12:45.339 [ 0]:0x1 00:12:45.339 16:21:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:45.339 16:21:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:12:45.339 16:21:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=5f37e5da7d664b7e9ce54e59834a3607 00:12:45.339 16:21:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 5f37e5da7d664b7e9ce54e59834a3607 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:45.339 16:21:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@67 -- # ns_is_visible 0x2 00:12:45.339 16:21:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:12:45.339 16:21:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:12:45.602 [ 1]:0x2 00:12:45.602 16:21:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:45.602 16:21:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:12:45.602 16:21:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=315af74669834031a66aa3e88bcb0cb4 00:12:45.602 16:21:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 315af74669834031a66aa3e88bcb0cb4 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:45.602 16:21:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@69 -- # disconnect 00:12:45.602 16:21:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:45.863 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:45.863 16:21:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:45.863 16:21:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@74 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:12:46.124 16:21:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@77 -- # connect 1 00:12:46.124 16:21:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 68cc3941-b780-40b3-93d0-bdcdaca6519e -a 10.0.0.2 -s 4420 -i 4 00:12:46.384 16:21:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 1 00:12:46.384 16:21:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1197 -- # local i=0 00:12:46.384 16:21:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:12:46.384 16:21:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # [[ -n 1 ]] 00:12:46.384 16:21:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # nvme_device_counter=1 00:12:46.384 16:21:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # sleep 2 00:12:48.296 16:21:15 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:12:48.296 16:21:15 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:12:48.296 16:21:15 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:12:48.296 16:21:15 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:12:48.297 16:21:15 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:12:48.297 16:21:15 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # return 0 00:12:48.297 16:21:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:12:48.297 16:21:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") 
| .Paths[0].Name' 00:12:48.297 16:21:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:12:48.297 16:21:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:12:48.297 16:21:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@78 -- # NOT ns_is_visible 0x1 00:12:48.297 16:21:15 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@649 -- # local es=0 00:12:48.297 16:21:15 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # valid_exec_arg ns_is_visible 0x1 00:12:48.297 16:21:15 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@637 -- # local arg=ns_is_visible 00:12:48.557 16:21:15 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:12:48.557 16:21:15 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # type -t ns_is_visible 00:12:48.557 16:21:15 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:12:48.557 16:21:15 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@652 -- # ns_is_visible 0x1 00:12:48.557 16:21:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:12:48.557 16:21:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:12:48.557 16:21:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:48.557 16:21:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:12:48.557 16:21:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:12:48.557 16:21:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:48.557 16:21:15 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@652 -- # es=1 00:12:48.557 16:21:15 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:12:48.557 16:21:15 
nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:12:48.557 16:21:15 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:12:48.557 16:21:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@79 -- # ns_is_visible 0x2 00:12:48.557 16:21:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:12:48.557 16:21:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:12:48.557 [ 0]:0x2 00:12:48.557 16:21:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:48.557 16:21:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:12:48.557 16:21:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=315af74669834031a66aa3e88bcb0cb4 00:12:48.557 16:21:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 315af74669834031a66aa3e88bcb0cb4 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:48.557 16:21:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:48.819 16:21:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@83 -- # ns_is_visible 0x1 00:12:48.819 16:21:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:12:48.819 16:21:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:12:48.819 [ 0]:0x1 00:12:48.819 16:21:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:48.819 16:21:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:12:48.819 16:21:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=5f37e5da7d664b7e9ce54e59834a3607 00:12:48.819 16:21:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 5f37e5da7d664b7e9ce54e59834a3607 != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:48.819 16:21:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@84 -- # ns_is_visible 0x2 00:12:48.819 16:21:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:12:48.819 16:21:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:12:48.819 [ 1]:0x2 00:12:48.819 16:21:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:48.819 16:21:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:12:48.819 16:21:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=315af74669834031a66aa3e88bcb0cb4 00:12:48.819 16:21:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 315af74669834031a66aa3e88bcb0cb4 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:48.819 16:21:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:49.080 16:21:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@88 -- # NOT ns_is_visible 0x1 00:12:49.080 16:21:15 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@649 -- # local es=0 00:12:49.080 16:21:15 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # valid_exec_arg ns_is_visible 0x1 00:12:49.080 16:21:15 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@637 -- # local arg=ns_is_visible 00:12:49.080 16:21:15 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:12:49.080 16:21:15 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # type -t ns_is_visible 00:12:49.080 16:21:15 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:12:49.080 16:21:15 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@652 -- # ns_is_visible 0x1 00:12:49.080 16:21:15 
nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:12:49.080 16:21:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:12:49.080 16:21:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:12:49.080 16:21:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:49.080 16:21:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:12:49.080 16:21:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:49.080 16:21:15 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@652 -- # es=1 00:12:49.080 16:21:15 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:12:49.080 16:21:15 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:12:49.080 16:21:15 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:12:49.080 16:21:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x2 00:12:49.080 16:21:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:12:49.080 16:21:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:12:49.080 [ 0]:0x2 00:12:49.080 16:21:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:49.080 16:21:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:12:49.081 16:21:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=315af74669834031a66aa3e88bcb0cb4 00:12:49.081 16:21:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 315af74669834031a66aa3e88bcb0cb4 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:49.081 16:21:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@91 -- # disconnect 00:12:49.081 
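The repeated `ns_is_visible` checks above boil down to one comparison: `nvme id-ns -o json` reports the namespace NGUID, and a masked (inactive) namespace comes back as 32 zeros. A minimal sketch of that check, using a hypothetical sample of the JSON output and plain POSIX string surgery in place of the `jq -r .nguid` pipeline the test uses:

```shell
# Hypothetical sample of `nvme id-ns /dev/nvme0 -n 0x1 -o json` output;
# the NGUID value matches the one observed in this run.
json='{"nsze":131072,"nguid":"5f37e5da7d664b7e9ce54e59834a3607"}'

# Extract the nguid field (the test pipes through `jq -r .nguid` instead)
nguid=${json#*\"nguid\":\"}
nguid=${nguid%%\"*}

# ns_is_visible's rule: a namespace is visible iff its NGUID is non-zero
zeros=00000000000000000000000000000000
if [ "$nguid" != "$zeros" ]; then
  echo "namespace visible (nguid=$nguid)"
else
  echo "namespace masked"
fi
```

This is why the log's `[[ ... != \0\0... ]]` comparisons alternate between a real NGUID (visible) and `000...0` (masked) as hosts are added to and removed from the namespace.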
16:21:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:49.081 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:49.081 16:21:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:49.342 16:21:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@95 -- # connect 2 00:12:49.342 16:21:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 68cc3941-b780-40b3-93d0-bdcdaca6519e -a 10.0.0.2 -s 4420 -i 4 00:12:49.602 16:21:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 2 00:12:49.602 16:21:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1197 -- # local i=0 00:12:49.602 16:21:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:12:49.602 16:21:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # [[ -n 2 ]] 00:12:49.602 16:21:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # nvme_device_counter=2 00:12:49.602 16:21:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # sleep 2 00:12:51.515 16:21:18 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:12:51.515 16:21:18 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:12:51.515 16:21:18 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:12:51.515 16:21:18 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # nvme_devices=2 00:12:51.515 16:21:18 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:12:51.515 16:21:18 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # 
return 0 00:12:51.515 16:21:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:12:51.515 16:21:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:12:51.515 16:21:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:12:51.515 16:21:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:12:51.515 16:21:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@96 -- # ns_is_visible 0x1 00:12:51.515 16:21:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:12:51.515 16:21:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:12:51.515 [ 0]:0x1 00:12:51.515 16:21:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:51.515 16:21:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:12:51.515 16:21:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=5f37e5da7d664b7e9ce54e59834a3607 00:12:51.515 16:21:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 5f37e5da7d664b7e9ce54e59834a3607 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:51.515 16:21:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@97 -- # ns_is_visible 0x2 00:12:51.515 16:21:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:12:51.515 16:21:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:12:51.775 [ 1]:0x2 00:12:51.775 16:21:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:51.775 16:21:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:12:51.775 16:21:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=315af74669834031a66aa3e88bcb0cb4 00:12:51.775 16:21:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 
-- # [[ 315af74669834031a66aa3e88bcb0cb4 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:51.775 16:21:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:51.775 16:21:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@101 -- # NOT ns_is_visible 0x1 00:12:51.775 16:21:18 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@649 -- # local es=0 00:12:51.775 16:21:18 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # valid_exec_arg ns_is_visible 0x1 00:12:51.775 16:21:18 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@637 -- # local arg=ns_is_visible 00:12:51.775 16:21:18 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:12:51.775 16:21:18 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # type -t ns_is_visible 00:12:51.775 16:21:18 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:12:51.775 16:21:18 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@652 -- # ns_is_visible 0x1 00:12:51.775 16:21:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:12:51.775 16:21:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:12:51.776 16:21:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:51.776 16:21:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:12:52.036 16:21:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:12:52.036 16:21:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:52.036 16:21:18 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@652 -- # es=1 00:12:52.036 
16:21:18 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:12:52.036 16:21:18 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:12:52.036 16:21:18 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:12:52.036 16:21:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x2 00:12:52.036 16:21:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:12:52.036 16:21:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:12:52.036 [ 0]:0x2 00:12:52.036 16:21:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:52.036 16:21:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:12:52.036 16:21:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=315af74669834031a66aa3e88bcb0cb4 00:12:52.036 16:21:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 315af74669834031a66aa3e88bcb0cb4 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:52.036 16:21:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@105 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:12:52.036 16:21:18 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@649 -- # local es=0 00:12:52.036 16:21:18 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:12:52.036 16:21:18 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@637 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:52.036 16:21:18 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:12:52.036 16:21:18 nvmf_tcp.nvmf_ns_masking -- 
common/autotest_common.sh@641 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:52.036 16:21:18 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:12:52.036 16:21:18 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@643 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:52.036 16:21:18 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:12:52.036 16:21:18 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@643 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:52.036 16:21:18 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@643 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:12:52.037 16:21:18 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@652 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:12:52.298 [2024-06-07 16:21:18.897011] nvmf_rpc.c:1793:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:12:52.298 request: 00:12:52.298 { 00:12:52.298 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:52.298 "nsid": 2, 00:12:52.298 "host": "nqn.2016-06.io.spdk:host1", 00:12:52.298 "method": "nvmf_ns_remove_host", 00:12:52.298 "req_id": 1 00:12:52.298 } 00:12:52.298 Got JSON-RPC error response 00:12:52.298 response: 00:12:52.298 { 00:12:52.298 "code": -32602, 00:12:52.298 "message": "Invalid parameters" 00:12:52.298 } 00:12:52.298 16:21:18 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@652 -- # es=1 00:12:52.298 16:21:18 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:12:52.298 16:21:18 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:12:52.298 16:21:18 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@676 -- # (( !es == 0 )) 
00:12:52.298 16:21:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@106 -- # NOT ns_is_visible 0x1 00:12:52.298 16:21:18 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@649 -- # local es=0 00:12:52.298 16:21:18 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # valid_exec_arg ns_is_visible 0x1 00:12:52.298 16:21:18 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@637 -- # local arg=ns_is_visible 00:12:52.298 16:21:18 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:12:52.298 16:21:18 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # type -t ns_is_visible 00:12:52.298 16:21:18 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:12:52.298 16:21:18 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@652 -- # ns_is_visible 0x1 00:12:52.298 16:21:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:12:52.298 16:21:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:12:52.298 16:21:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:52.298 16:21:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:12:52.298 16:21:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:12:52.298 16:21:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:52.298 16:21:18 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@652 -- # es=1 00:12:52.298 16:21:18 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:12:52.298 16:21:18 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:12:52.298 16:21:18 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:12:52.298 16:21:18 nvmf_tcp.nvmf_ns_masking 
-- target/ns_masking.sh@107 -- # ns_is_visible 0x2 00:12:52.298 16:21:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:12:52.298 16:21:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:12:52.298 [ 0]:0x2 00:12:52.298 16:21:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:52.298 16:21:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:12:52.298 16:21:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=315af74669834031a66aa3e88bcb0cb4 00:12:52.298 16:21:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 315af74669834031a66aa3e88bcb0cb4 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:52.298 16:21:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@108 -- # disconnect 00:12:52.298 16:21:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:52.298 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:52.298 16:21:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@110 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:52.559 16:21:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:12:52.559 16:21:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@114 -- # nvmftestfini 00:12:52.559 16:21:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:52.559 16:21:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@117 -- # sync 00:12:52.559 16:21:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:52.559 16:21:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@120 -- # set +e 00:12:52.559 16:21:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:52.559 16:21:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:52.559 
rmmod nvme_tcp 00:12:52.559 rmmod nvme_fabrics 00:12:52.559 rmmod nvme_keyring 00:12:52.559 16:21:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:52.559 16:21:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@124 -- # set -e 00:12:52.559 16:21:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@125 -- # return 0 00:12:52.559 16:21:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@489 -- # '[' -n 2996910 ']' 00:12:52.559 16:21:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@490 -- # killprocess 2996910 00:12:52.559 16:21:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@949 -- # '[' -z 2996910 ']' 00:12:52.559 16:21:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # kill -0 2996910 00:12:52.559 16:21:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # uname 00:12:52.559 16:21:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:12:52.559 16:21:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2996910 00:12:52.559 16:21:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:12:52.559 16:21:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:12:52.559 16:21:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2996910' 00:12:52.559 killing process with pid 2996910 00:12:52.559 16:21:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@968 -- # kill 2996910 00:12:52.559 16:21:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@973 -- # wait 2996910 00:12:52.820 16:21:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:52.820 16:21:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:52.820 16:21:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:52.820 16:21:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@274 -- # [[ 
cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:52.820 16:21:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:52.820 16:21:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:52.820 16:21:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:52.820 16:21:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:55.367 16:21:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:55.367 00:12:55.367 real 0m21.041s 00:12:55.367 user 0m50.975s 00:12:55.367 sys 0m6.762s 00:12:55.367 16:21:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1125 -- # xtrace_disable 00:12:55.367 16:21:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:55.367 ************************************ 00:12:55.367 END TEST nvmf_ns_masking 00:12:55.367 ************************************ 00:12:55.367 16:21:21 nvmf_tcp -- nvmf/nvmf.sh@37 -- # [[ 1 -eq 1 ]] 00:12:55.367 16:21:21 nvmf_tcp -- nvmf/nvmf.sh@38 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:12:55.367 16:21:21 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:12:55.367 16:21:21 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:12:55.367 16:21:21 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:55.367 ************************************ 00:12:55.367 START TEST nvmf_nvme_cli 00:12:55.367 ************************************ 00:12:55.367 16:21:21 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:12:55.367 * Looking for test storage... 
00:12:55.367 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:55.367 16:21:21 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:55.367 16:21:21 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:12:55.367 16:21:21 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:55.367 16:21:21 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:55.367 16:21:21 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:55.367 16:21:21 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:55.367 16:21:21 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:55.367 16:21:21 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:55.367 16:21:21 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:55.367 16:21:21 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:55.367 16:21:21 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:55.367 16:21:21 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:55.367 16:21:21 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:55.367 16:21:21 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:55.367 16:21:21 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:55.367 16:21:21 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:55.367 16:21:21 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:55.367 16:21:21 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:55.367 16:21:21 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:55.367 16:21:21 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:55.367 16:21:21 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:55.367 16:21:21 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:55.367 16:21:21 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:55.367 16:21:21 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:55.367 16:21:21 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:55.367 16:21:21 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:12:55.367 16:21:21 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:55.367 16:21:21 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@47 -- # : 0 00:12:55.367 16:21:21 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:55.367 16:21:21 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:55.367 16:21:21 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:55.367 16:21:21 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:55.367 16:21:21 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:55.367 16:21:21 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:55.367 16:21:21 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:55.367 16:21:21 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:55.367 16:21:21 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:55.367 16:21:21 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:55.367 16:21:21 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:12:55.367 16:21:21 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:12:55.367 16:21:21 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:55.367 16:21:21 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:55.367 16:21:21 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:55.367 16:21:21 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:55.367 16:21:21 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:55.367 16:21:21 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:55.367 16:21:21 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:55.367 16:21:21 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:55.367 16:21:21 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:55.367 16:21:21 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:55.367 16:21:21 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@285 -- # xtrace_disable 00:12:55.367 16:21:21 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:01.956 16:21:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:01.956 16:21:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # pci_devs=() 00:13:01.956 16:21:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:01.956 16:21:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:01.956 16:21:28 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:01.956 16:21:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:01.956 16:21:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:01.956 16:21:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # net_devs=() 00:13:01.956 16:21:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:01.956 16:21:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # e810=() 00:13:01.956 16:21:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # local -ga e810 00:13:01.956 16:21:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # x722=() 00:13:01.956 16:21:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # local -ga x722 00:13:01.956 16:21:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # mlx=() 00:13:01.956 16:21:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # local -ga mlx 00:13:01.956 16:21:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:01.956 16:21:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:01.956 16:21:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:01.956 16:21:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:01.956 16:21:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:01.956 16:21:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:01.956 16:21:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:01.956 16:21:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:01.956 16:21:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:01.956 16:21:28 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:01.956 16:21:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:01.956 16:21:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:01.956 16:21:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:01.956 16:21:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:01.956 16:21:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:01.956 16:21:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:01.956 16:21:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:01.957 16:21:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:01.957 16:21:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:13:01.957 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:13:01.957 16:21:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:01.957 16:21:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:01.957 16:21:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:01.957 16:21:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:01.957 16:21:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:01.957 16:21:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:01.957 16:21:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:13:01.957 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:13:01.957 16:21:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:01.957 16:21:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:01.957 16:21:28 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:01.957 16:21:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:01.957 16:21:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:01.957 16:21:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:01.957 16:21:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:01.957 16:21:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:01.957 16:21:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:01.957 16:21:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:01.957 16:21:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:01.957 16:21:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:01.957 16:21:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:01.957 16:21:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:01.957 16:21:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:01.957 16:21:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:13:01.957 Found net devices under 0000:4b:00.0: cvl_0_0 00:13:01.957 16:21:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:01.957 16:21:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:01.957 16:21:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:01.957 16:21:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:01.957 16:21:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:01.957 16:21:28 nvmf_tcp.nvmf_nvme_cli -- 
nvmf/common.sh@390 -- # [[ up == up ]] 00:13:01.957 16:21:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:01.957 16:21:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:01.957 16:21:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:13:01.957 Found net devices under 0000:4b:00.1: cvl_0_1 00:13:01.957 16:21:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:01.957 16:21:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:01.957 16:21:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # is_hw=yes 00:13:01.957 16:21:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:01.957 16:21:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:01.957 16:21:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:01.957 16:21:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:01.957 16:21:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:01.957 16:21:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:01.957 16:21:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:01.957 16:21:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:01.957 16:21:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:01.957 16:21:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:01.957 16:21:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:01.957 16:21:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:01.957 16:21:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 
00:13:01.957 16:21:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:01.957 16:21:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:01.957 16:21:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:01.957 16:21:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:01.957 16:21:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:01.957 16:21:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:01.957 16:21:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:02.218 16:21:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:02.219 16:21:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:02.219 16:21:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:02.219 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:02.219 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.609 ms 00:13:02.219 00:13:02.219 --- 10.0.0.2 ping statistics --- 00:13:02.219 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:02.219 rtt min/avg/max/mdev = 0.609/0.609/0.609/0.000 ms 00:13:02.219 16:21:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:02.219 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:02.219 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.362 ms 00:13:02.219 00:13:02.219 --- 10.0.0.1 ping statistics --- 00:13:02.219 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:02.219 rtt min/avg/max/mdev = 0.362/0.362/0.362/0.000 ms 00:13:02.219 16:21:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:02.219 16:21:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@422 -- # return 0 00:13:02.219 16:21:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:02.219 16:21:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:02.219 16:21:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:02.219 16:21:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:02.219 16:21:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:02.219 16:21:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:02.219 16:21:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:02.219 16:21:28 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:13:02.219 16:21:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:02.219 16:21:28 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@723 -- # xtrace_disable 00:13:02.219 16:21:28 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:02.219 16:21:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@481 -- # nvmfpid=3003876 00:13:02.219 16:21:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@482 -- # waitforlisten 3003876 00:13:02.219 16:21:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:02.219 16:21:28 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@830 -- # '[' -z 3003876 ']' 
00:13:02.219 16:21:28 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:02.219 16:21:28 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # local max_retries=100 00:13:02.219 16:21:28 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:02.219 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:02.219 16:21:28 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@839 -- # xtrace_disable 00:13:02.219 16:21:28 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:02.219 [2024-06-07 16:21:29.003095] Starting SPDK v24.09-pre git sha1 5a57befde / DPDK 24.03.0 initialization... 00:13:02.219 [2024-06-07 16:21:29.003155] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:02.219 EAL: No free 2048 kB hugepages reported on node 1 00:13:02.480 [2024-06-07 16:21:29.077649] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:02.480 [2024-06-07 16:21:29.152103] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:02.480 [2024-06-07 16:21:29.152142] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:02.480 [2024-06-07 16:21:29.152150] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:02.480 [2024-06-07 16:21:29.152156] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:02.480 [2024-06-07 16:21:29.152162] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:02.480 [2024-06-07 16:21:29.152303] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:13:02.480 [2024-06-07 16:21:29.152428] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 2 00:13:02.480 [2024-06-07 16:21:29.152573] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:13:02.480 [2024-06-07 16:21:29.152574] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 3 00:13:03.050 16:21:29 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:13:03.050 16:21:29 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@863 -- # return 0 00:13:03.050 16:21:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:03.050 16:21:29 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@729 -- # xtrace_disable 00:13:03.050 16:21:29 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:03.050 16:21:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:03.050 16:21:29 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:03.050 16:21:29 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:03.050 16:21:29 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:03.050 [2024-06-07 16:21:29.838027] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:03.050 16:21:29 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:03.050 16:21:29 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:13:03.050 16:21:29 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:03.050 16:21:29 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:03.050 Malloc0 00:13:03.050 16:21:29 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:03.050 
16:21:29 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:13:03.050 16:21:29 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:03.050 16:21:29 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:03.050 Malloc1 00:13:03.050 16:21:29 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:03.050 16:21:29 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:13:03.050 16:21:29 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:03.050 16:21:29 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:03.050 16:21:29 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:03.050 16:21:29 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:03.050 16:21:29 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:03.050 16:21:29 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:03.312 16:21:29 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:03.312 16:21:29 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:03.312 16:21:29 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:03.312 16:21:29 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:03.312 16:21:29 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:03.312 16:21:29 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:03.312 16:21:29 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@560 -- # xtrace_disable 
00:13:03.312 16:21:29 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:03.312 [2024-06-07 16:21:29.927947] tcp.c: 982:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:03.312 16:21:29 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:03.312 16:21:29 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:03.312 16:21:29 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:03.312 16:21:29 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:03.312 16:21:29 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:03.312 16:21:29 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 4420 00:13:03.312 00:13:03.312 Discovery Log Number of Records 2, Generation counter 2 00:13:03.312 =====Discovery Log Entry 0====== 00:13:03.312 trtype: tcp 00:13:03.312 adrfam: ipv4 00:13:03.312 subtype: current discovery subsystem 00:13:03.312 treq: not required 00:13:03.312 portid: 0 00:13:03.312 trsvcid: 4420 00:13:03.312 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:13:03.312 traddr: 10.0.0.2 00:13:03.312 eflags: explicit discovery connections, duplicate discovery information 00:13:03.312 sectype: none 00:13:03.312 =====Discovery Log Entry 1====== 00:13:03.312 trtype: tcp 00:13:03.312 adrfam: ipv4 00:13:03.312 subtype: nvme subsystem 00:13:03.312 treq: not required 00:13:03.312 portid: 0 00:13:03.312 trsvcid: 4420 00:13:03.312 subnqn: nqn.2016-06.io.spdk:cnode1 00:13:03.312 traddr: 10.0.0.2 00:13:03.312 eflags: none 00:13:03.312 sectype: none 00:13:03.312 16:21:30 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:13:03.312 16:21:30 nvmf_tcp.nvmf_nvme_cli -- 
target/nvme_cli.sh@31 -- # get_nvme_devs 00:13:03.312 16:21:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:13:03.312 16:21:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:13:03.312 16:21:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:13:03.312 16:21:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:13:03.312 16:21:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:13:03.312 16:21:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:13:03.312 16:21:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:13:03.312 16:21:30 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:13:03.312 16:21:30 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:04.695 16:21:31 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:13:04.695 16:21:31 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1197 -- # local i=0 00:13:04.695 16:21:31 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:13:04.695 16:21:31 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1199 -- # [[ -n 2 ]] 00:13:04.695 16:21:31 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1200 -- # nvme_device_counter=2 00:13:04.695 16:21:31 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # sleep 2 00:13:07.245 16:21:33 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:13:07.245 16:21:33 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:13:07.245 16:21:33 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 
00:13:07.245 16:21:33 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1206 -- # nvme_devices=2 00:13:07.245 16:21:33 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:13:07.245 16:21:33 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # return 0 00:13:07.245 16:21:33 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:13:07.245 16:21:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:13:07.245 16:21:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:13:07.245 16:21:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:13:07.245 16:21:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:13:07.245 16:21:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:13:07.245 16:21:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:13:07.245 16:21:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:13:07.245 16:21:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:13:07.245 16:21:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:13:07.245 16:21:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:13:07.245 16:21:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:13:07.245 16:21:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:13:07.245 16:21:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:13:07.245 16:21:33 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 00:13:07.245 /dev/nvme0n1 ]] 00:13:07.245 16:21:33 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:13:07.245 16:21:33 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:13:07.245 16:21:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 
00:13:07.245 16:21:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:13:07.245 16:21:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:13:07.245 16:21:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:13:07.245 16:21:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:13:07.245 16:21:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:13:07.245 16:21:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:13:07.245 16:21:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:13:07.245 16:21:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:13:07.245 16:21:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:13:07.245 16:21:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:13:07.245 16:21:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:13:07.245 16:21:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:13:07.245 16:21:33 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:13:07.245 16:21:33 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:07.506 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:07.506 16:21:34 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:07.506 16:21:34 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1218 -- # local i=0 00:13:07.506 16:21:34 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:13:07.506 16:21:34 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1219 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:07.506 16:21:34 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:13:07.506 16:21:34 nvmf_tcp.nvmf_nvme_cli -- 
common/autotest_common.sh@1226 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:07.506 16:21:34 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1230 -- # return 0 00:13:07.506 16:21:34 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:13:07.506 16:21:34 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:07.506 16:21:34 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:07.506 16:21:34 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:07.506 16:21:34 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:07.506 16:21:34 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:13:07.506 16:21:34 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:13:07.506 16:21:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:07.506 16:21:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@117 -- # sync 00:13:07.506 16:21:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:07.506 16:21:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@120 -- # set +e 00:13:07.506 16:21:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:07.506 16:21:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:07.506 rmmod nvme_tcp 00:13:07.506 rmmod nvme_fabrics 00:13:07.506 rmmod nvme_keyring 00:13:07.506 16:21:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:07.506 16:21:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set -e 00:13:07.506 16:21:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@125 -- # return 0 00:13:07.506 16:21:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@489 -- # '[' -n 3003876 ']' 00:13:07.506 16:21:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@490 -- # killprocess 3003876 00:13:07.506 16:21:34 nvmf_tcp.nvmf_nvme_cli -- 
common/autotest_common.sh@949 -- # '[' -z 3003876 ']' 00:13:07.506 16:21:34 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@953 -- # kill -0 3003876 00:13:07.506 16:21:34 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # uname 00:13:07.506 16:21:34 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:13:07.506 16:21:34 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3003876 00:13:07.506 16:21:34 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:13:07.506 16:21:34 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:13:07.506 16:21:34 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3003876' 00:13:07.506 killing process with pid 3003876 00:13:07.506 16:21:34 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@968 -- # kill 3003876 00:13:07.506 16:21:34 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@973 -- # wait 3003876 00:13:07.770 16:21:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:07.770 16:21:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:07.770 16:21:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:07.770 16:21:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:07.770 16:21:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:07.770 16:21:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:07.770 16:21:34 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:07.770 16:21:34 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:09.747 16:21:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:09.747 00:13:09.747 real 0m14.817s 00:13:09.747 user 
0m23.091s 00:13:09.747 sys 0m5.974s 00:13:09.747 16:21:36 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1125 -- # xtrace_disable 00:13:09.747 16:21:36 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:09.747 ************************************ 00:13:09.747 END TEST nvmf_nvme_cli 00:13:09.747 ************************************ 00:13:09.747 16:21:36 nvmf_tcp -- nvmf/nvmf.sh@40 -- # [[ 1 -eq 1 ]] 00:13:09.747 16:21:36 nvmf_tcp -- nvmf/nvmf.sh@41 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:13:09.747 16:21:36 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:13:09.747 16:21:36 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:13:09.747 16:21:36 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:10.009 ************************************ 00:13:10.009 START TEST nvmf_vfio_user 00:13:10.009 ************************************ 00:13:10.009 16:21:36 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:13:10.009 * Looking for test storage... 
00:13:10.009 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:10.009 16:21:36 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:10.009 16:21:36 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s 00:13:10.009 16:21:36 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:10.009 16:21:36 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:10.009 16:21:36 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:10.009 16:21:36 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:10.009 16:21:36 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:10.009 16:21:36 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:10.009 16:21:36 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:10.009 16:21:36 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:10.009 16:21:36 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:10.009 16:21:36 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:10.009 16:21:36 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:10.009 16:21:36 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:10.009 16:21:36 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:10.009 16:21:36 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:10.009 16:21:36 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:10.009 16:21:36 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
00:13:10.009 16:21:36 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:10.009 16:21:36 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:10.009 16:21:36 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:10.009 16:21:36 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:10.009 16:21:36 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:10.009 16:21:36 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:10.009 16:21:36 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:10.009 16:21:36 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:13:10.009 16:21:36 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:10.009 16:21:36 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@47 -- # : 0 00:13:10.009 16:21:36 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:10.009 16:21:36 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:10.009 16:21:36 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:10.009 16:21:36 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:10.009 16:21:36 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:10.009 16:21:36 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:10.009 16:21:36 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:10.009 
16:21:36 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:10.009 16:21:36 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:13:10.009 16:21:36 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:13:10.009 16:21:36 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:13:10.009 16:21:36 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:10.009 16:21:36 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:13:10.009 16:21:36 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:13:10.009 16:21:36 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:13:10.009 16:21:36 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:13:10.009 16:21:36 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:13:10.009 16:21:36 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:13:10.009 16:21:36 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=3005469 00:13:10.009 16:21:36 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 3005469' 00:13:10.009 Process pid: 3005469 00:13:10.009 16:21:36 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:13:10.009 16:21:36 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 3005469 00:13:10.009 16:21:36 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@830 -- # '[' -z 3005469 ']' 00:13:10.010 16:21:36 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:13:10.010 16:21:36 nvmf_tcp.nvmf_vfio_user -- 
common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:10.010 16:21:36 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@835 -- # local max_retries=100 00:13:10.010 16:21:36 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:10.010 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:10.010 16:21:36 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@839 -- # xtrace_disable 00:13:10.010 16:21:36 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:13:10.010 [2024-06-07 16:21:36.794845] Starting SPDK v24.09-pre git sha1 5a57befde / DPDK 24.03.0 initialization... 00:13:10.010 [2024-06-07 16:21:36.794912] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:10.010 EAL: No free 2048 kB hugepages reported on node 1 00:13:10.271 [2024-06-07 16:21:36.862516] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:10.271 [2024-06-07 16:21:36.938573] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:10.271 [2024-06-07 16:21:36.938613] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:10.271 [2024-06-07 16:21:36.938621] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:10.271 [2024-06-07 16:21:36.938627] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:10.271 [2024-06-07 16:21:36.938632] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:10.271 [2024-06-07 16:21:36.938710] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:13:10.271 [2024-06-07 16:21:36.938849] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 2 00:13:10.271 [2024-06-07 16:21:36.939008] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:13:10.271 [2024-06-07 16:21:36.939008] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 3 00:13:10.842 16:21:37 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:13:10.842 16:21:37 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@863 -- # return 0 00:13:10.842 16:21:37 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:13:11.784 16:21:38 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:13:12.045 16:21:38 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:13:12.045 16:21:38 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:13:12.045 16:21:38 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:13:12.045 16:21:38 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:13:12.045 16:21:38 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:13:12.306 Malloc1 00:13:12.306 16:21:38 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:13:12.306 16:21:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:13:12.566 16:21:39 nvmf_tcp.nvmf_vfio_user -- 
target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:13:12.826 16:21:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:13:12.826 16:21:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:13:12.826 16:21:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:13:12.826 Malloc2 00:13:12.826 16:21:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:13:13.086 16:21:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:13:13.345 16:21:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:13:13.345 16:21:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:13:13.345 16:21:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:13:13.345 16:21:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:13:13.345 16:21:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:13:13.345 16:21:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:13:13.345 16:21:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:13:13.345 [2024-06-07 16:21:40.140035] Starting SPDK v24.09-pre git sha1 5a57befde / DPDK 24.03.0 initialization... 00:13:13.345 [2024-06-07 16:21:40.140074] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3006125 ] 00:13:13.345 EAL: No free 2048 kB hugepages reported on node 1 00:13:13.345 [2024-06-07 16:21:40.173021] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:13:13.345 [2024-06-07 16:21:40.181769] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:13:13.345 [2024-06-07 16:21:40.181790] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7fcb3ad45000 00:13:13.345 [2024-06-07 16:21:40.182765] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:13.345 [2024-06-07 16:21:40.183765] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:13.345 [2024-06-07 16:21:40.184769] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:13.345 [2024-06-07 16:21:40.185776] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:13:13.345 [2024-06-07 16:21:40.186778] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 
0x0, Flags 0x3, Cap offset 0 00:13:13.345 [2024-06-07 16:21:40.187775] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:13.345 [2024-06-07 16:21:40.188792] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:13:13.345 [2024-06-07 16:21:40.189796] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:13.345 [2024-06-07 16:21:40.190806] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:13:13.345 [2024-06-07 16:21:40.190818] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7fcb3ad3a000 00:13:13.345 [2024-06-07 16:21:40.192147] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:13:13.607 [2024-06-07 16:21:40.209063] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:13:13.607 [2024-06-07 16:21:40.209084] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout) 00:13:13.607 [2024-06-07 16:21:40.213951] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:13:13.607 [2024-06-07 16:21:40.213996] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:13:13.607 [2024-06-07 16:21:40.214078] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout) 00:13:13.607 [2024-06-07 16:21:40.214095] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout) 00:13:13.607 [2024-06-07 16:21:40.214101] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 00:13:13.607 [2024-06-07 16:21:40.214945] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:13:13.607 [2024-06-07 16:21:40.214955] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:13:13.607 [2024-06-07 16:21:40.214962] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:13:13.607 [2024-06-07 16:21:40.215953] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:13:13.607 [2024-06-07 16:21:40.215963] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:13:13.607 [2024-06-07 16:21:40.215971] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:13:13.607 [2024-06-07 16:21:40.216964] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:13:13.607 [2024-06-07 16:21:40.216973] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:13:13.607 [2024-06-07 16:21:40.217971] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:13:13.607 [2024-06-07 16:21:40.217981] nvme_ctrlr.c:3750:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0 00:13:13.607 [2024-06-07 16:21:40.217986] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:13:13.607 [2024-06-07 16:21:40.217992] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:13:13.607 [2024-06-07 16:21:40.218098] nvme_ctrlr.c:3943:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:13:13.607 [2024-06-07 16:21:40.218105] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:13:13.607 [2024-06-07 16:21:40.218113] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:13:13.607 [2024-06-07 16:21:40.218975] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:13:13.607 [2024-06-07 16:21:40.219977] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:13:13.607 [2024-06-07 16:21:40.220985] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:13:13.607 [2024-06-07 16:21:40.221984] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:13.607 [2024-06-07 16:21:40.222055] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:13:13.607 [2024-06-07 16:21:40.223000] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr 
/var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:13:13.607 [2024-06-07 16:21:40.223008] nvme_ctrlr.c:3785:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:13:13.607 [2024-06-07 16:21:40.223013] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:13:13.607 [2024-06-07 16:21:40.223034] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:13:13.607 [2024-06-07 16:21:40.223046] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:13:13.607 [2024-06-07 16:21:40.223063] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:13.607 [2024-06-07 16:21:40.223068] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:13.607 [2024-06-07 16:21:40.223081] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:13.607 [2024-06-07 16:21:40.223117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:13:13.607 [2024-06-07 16:21:40.223126] nvme_ctrlr.c:1985:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:13:13.607 [2024-06-07 16:21:40.223131] nvme_ctrlr.c:1989:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:13:13.607 [2024-06-07 16:21:40.223135] nvme_ctrlr.c:1992:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 00:13:13.607 [2024-06-07 16:21:40.223142] 
nvme_ctrlr.c:2003:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:13:13.607 [2024-06-07 16:21:40.223147] nvme_ctrlr.c:2016:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 1 00:13:13.607 [2024-06-07 16:21:40.223151] nvme_ctrlr.c:2031:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:13:13.607 [2024-06-07 16:21:40.223156] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms) 00:13:13.607 [2024-06-07 16:21:40.223163] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:13:13.607 [2024-06-07 16:21:40.223173] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:13:13.607 [2024-06-07 16:21:40.223182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:13:13.607 [2024-06-07 16:21:40.223193] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:13.607 [2024-06-07 16:21:40.223201] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:13.607 [2024-06-07 16:21:40.223209] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:13.607 [2024-06-07 16:21:40.223217] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:13.607 [2024-06-07 16:21:40.223222] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:13:13.607 [2024-06-07 16:21:40.223231] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:13:13.607 [2024-06-07 16:21:40.223242] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:13:13.607 [2024-06-07 16:21:40.223251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:13:13.607 [2024-06-07 16:21:40.223257] nvme_ctrlr.c:2891:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:13:13.607 [2024-06-07 16:21:40.223262] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:13:13.607 [2024-06-07 16:21:40.223269] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:13:13.607 [2024-06-07 16:21:40.223275] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 00:13:13.607 [2024-06-07 16:21:40.223283] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:13:13.607 [2024-06-07 16:21:40.223292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:13:13.607 [2024-06-07 16:21:40.223341] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 00:13:13.607 [2024-06-07 16:21:40.223349] 
nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:13:13.607 [2024-06-07 16:21:40.223357] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:13:13.607 [2024-06-07 16:21:40.223361] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:13:13.607 [2024-06-07 16:21:40.223368] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:13:13.607 [2024-06-07 16:21:40.223378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:13:13.607 [2024-06-07 16:21:40.223387] nvme_ctrlr.c:4558:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:13:13.607 [2024-06-07 16:21:40.223395] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:13:13.607 [2024-06-07 16:21:40.223408] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:13:13.607 [2024-06-07 16:21:40.223415] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:13.607 [2024-06-07 16:21:40.223420] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:13.607 [2024-06-07 16:21:40.223426] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:13.607 [2024-06-07 16:21:40.223442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:13:13.607 [2024-06-07 16:21:40.223454] 
nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:13:13.607 [2024-06-07 16:21:40.223462] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:13:13.607 [2024-06-07 16:21:40.223469] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:13.607 [2024-06-07 16:21:40.223473] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:13.607 [2024-06-07 16:21:40.223479] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:13.607 [2024-06-07 16:21:40.223492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:13:13.607 [2024-06-07 16:21:40.223499] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:13:13.607 [2024-06-07 16:21:40.223506] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 00:13:13.607 [2024-06-07 16:21:40.223513] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:13:13.607 [2024-06-07 16:21:40.223519] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 30000 ms) 00:13:13.607 [2024-06-07 16:21:40.223525] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:13:13.607 [2024-06-07 16:21:40.223529] 
nvme_ctrlr.c:2991:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:13:13.607 [2024-06-07 16:21:40.223534] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:13:13.607 [2024-06-07 16:21:40.223539] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:13:13.607 [2024-06-07 16:21:40.223558] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:13:13.608 [2024-06-07 16:21:40.223568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:13:13.608 [2024-06-07 16:21:40.223579] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:13:13.608 [2024-06-07 16:21:40.223586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:13:13.608 [2024-06-07 16:21:40.223597] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:13:13.608 [2024-06-07 16:21:40.223605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:13:13.608 [2024-06-07 16:21:40.223616] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:13:13.608 [2024-06-07 16:21:40.223625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:13:13.608 [2024-06-07 16:21:40.223635] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:13:13.608 
[2024-06-07 16:21:40.223640] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:13:13.608 [2024-06-07 16:21:40.223644] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:13:13.608 [2024-06-07 16:21:40.223648] nvme_pcie_common.c:1254:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:13:13.608 [2024-06-07 16:21:40.223654] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:13:13.608 [2024-06-07 16:21:40.223661] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:13:13.608 [2024-06-07 16:21:40.223665] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:13:13.608 [2024-06-07 16:21:40.223671] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:13:13.608 [2024-06-07 16:21:40.223678] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:13:13.608 [2024-06-07 16:21:40.223684] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:13.608 [2024-06-07 16:21:40.223690] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:13.608 [2024-06-07 16:21:40.223698] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:13:13.608 [2024-06-07 16:21:40.223702] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:13:13.608 [2024-06-07 16:21:40.223708] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 
0x2000002f4000 PRP2 0x0 00:13:13.608 [2024-06-07 16:21:40.223714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:13:13.608 [2024-06-07 16:21:40.223726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:13:13.608 [2024-06-07 16:21:40.223735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:13:13.608 [2024-06-07 16:21:40.223745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:13:13.608 ===================================================== 00:13:13.608 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:13:13.608 ===================================================== 00:13:13.608 Controller Capabilities/Features 00:13:13.608 ================================ 00:13:13.608 Vendor ID: 4e58 00:13:13.608 Subsystem Vendor ID: 4e58 00:13:13.608 Serial Number: SPDK1 00:13:13.608 Model Number: SPDK bdev Controller 00:13:13.608 Firmware Version: 24.09 00:13:13.608 Recommended Arb Burst: 6 00:13:13.608 IEEE OUI Identifier: 8d 6b 50 00:13:13.608 Multi-path I/O 00:13:13.608 May have multiple subsystem ports: Yes 00:13:13.608 May have multiple controllers: Yes 00:13:13.608 Associated with SR-IOV VF: No 00:13:13.608 Max Data Transfer Size: 131072 00:13:13.608 Max Number of Namespaces: 32 00:13:13.608 Max Number of I/O Queues: 127 00:13:13.608 NVMe Specification Version (VS): 1.3 00:13:13.608 NVMe Specification Version (Identify): 1.3 00:13:13.608 Maximum Queue Entries: 256 00:13:13.608 Contiguous Queues Required: Yes 00:13:13.608 Arbitration Mechanisms Supported 00:13:13.608 Weighted Round Robin: Not Supported 00:13:13.608 Vendor Specific: Not Supported 00:13:13.608 Reset Timeout: 15000 ms 00:13:13.608 Doorbell Stride: 4 bytes 00:13:13.608 NVM Subsystem 
Reset: Not Supported 00:13:13.608 Command Sets Supported 00:13:13.608 NVM Command Set: Supported 00:13:13.608 Boot Partition: Not Supported 00:13:13.608 Memory Page Size Minimum: 4096 bytes 00:13:13.608 Memory Page Size Maximum: 4096 bytes 00:13:13.608 Persistent Memory Region: Not Supported 00:13:13.608 Optional Asynchronous Events Supported 00:13:13.608 Namespace Attribute Notices: Supported 00:13:13.608 Firmware Activation Notices: Not Supported 00:13:13.608 ANA Change Notices: Not Supported 00:13:13.608 PLE Aggregate Log Change Notices: Not Supported 00:13:13.608 LBA Status Info Alert Notices: Not Supported 00:13:13.608 EGE Aggregate Log Change Notices: Not Supported 00:13:13.608 Normal NVM Subsystem Shutdown event: Not Supported 00:13:13.608 Zone Descriptor Change Notices: Not Supported 00:13:13.608 Discovery Log Change Notices: Not Supported 00:13:13.608 Controller Attributes 00:13:13.608 128-bit Host Identifier: Supported 00:13:13.608 Non-Operational Permissive Mode: Not Supported 00:13:13.608 NVM Sets: Not Supported 00:13:13.608 Read Recovery Levels: Not Supported 00:13:13.608 Endurance Groups: Not Supported 00:13:13.608 Predictable Latency Mode: Not Supported 00:13:13.608 Traffic Based Keep ALive: Not Supported 00:13:13.608 Namespace Granularity: Not Supported 00:13:13.608 SQ Associations: Not Supported 00:13:13.608 UUID List: Not Supported 00:13:13.608 Multi-Domain Subsystem: Not Supported 00:13:13.608 Fixed Capacity Management: Not Supported 00:13:13.608 Variable Capacity Management: Not Supported 00:13:13.608 Delete Endurance Group: Not Supported 00:13:13.608 Delete NVM Set: Not Supported 00:13:13.608 Extended LBA Formats Supported: Not Supported 00:13:13.608 Flexible Data Placement Supported: Not Supported 00:13:13.608 00:13:13.608 Controller Memory Buffer Support 00:13:13.608 ================================ 00:13:13.608 Supported: No 00:13:13.608 00:13:13.608 Persistent Memory Region Support 00:13:13.608 ================================ 00:13:13.608 
Supported: No 00:13:13.608 00:13:13.608 Admin Command Set Attributes 00:13:13.608 ============================ 00:13:13.608 Security Send/Receive: Not Supported 00:13:13.608 Format NVM: Not Supported 00:13:13.608 Firmware Activate/Download: Not Supported 00:13:13.608 Namespace Management: Not Supported 00:13:13.608 Device Self-Test: Not Supported 00:13:13.608 Directives: Not Supported 00:13:13.608 NVMe-MI: Not Supported 00:13:13.608 Virtualization Management: Not Supported 00:13:13.608 Doorbell Buffer Config: Not Supported 00:13:13.608 Get LBA Status Capability: Not Supported 00:13:13.608 Command & Feature Lockdown Capability: Not Supported 00:13:13.608 Abort Command Limit: 4 00:13:13.608 Async Event Request Limit: 4 00:13:13.608 Number of Firmware Slots: N/A 00:13:13.608 Firmware Slot 1 Read-Only: N/A 00:13:13.608 Firmware Activation Without Reset: N/A 00:13:13.608 Multiple Update Detection Support: N/A 00:13:13.608 Firmware Update Granularity: No Information Provided 00:13:13.608 Per-Namespace SMART Log: No 00:13:13.608 Asymmetric Namespace Access Log Page: Not Supported 00:13:13.608 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:13:13.608 Command Effects Log Page: Supported 00:13:13.608 Get Log Page Extended Data: Supported 00:13:13.608 Telemetry Log Pages: Not Supported 00:13:13.608 Persistent Event Log Pages: Not Supported 00:13:13.608 Supported Log Pages Log Page: May Support 00:13:13.608 Commands Supported & Effects Log Page: Not Supported 00:13:13.608 Feature Identifiers & Effects Log Page:May Support 00:13:13.608 NVMe-MI Commands & Effects Log Page: May Support 00:13:13.608 Data Area 4 for Telemetry Log: Not Supported 00:13:13.608 Error Log Page Entries Supported: 128 00:13:13.608 Keep Alive: Supported 00:13:13.608 Keep Alive Granularity: 10000 ms 00:13:13.608 00:13:13.608 NVM Command Set Attributes 00:13:13.608 ========================== 00:13:13.608 Submission Queue Entry Size 00:13:13.608 Max: 64 00:13:13.608 Min: 64 00:13:13.608 Completion Queue Entry 
Size 00:13:13.608 Max: 16 00:13:13.608 Min: 16 00:13:13.608 Number of Namespaces: 32 00:13:13.608 Compare Command: Supported 00:13:13.608 Write Uncorrectable Command: Not Supported 00:13:13.608 Dataset Management Command: Supported 00:13:13.608 Write Zeroes Command: Supported 00:13:13.608 Set Features Save Field: Not Supported 00:13:13.608 Reservations: Not Supported 00:13:13.608 Timestamp: Not Supported 00:13:13.608 Copy: Supported 00:13:13.608 Volatile Write Cache: Present 00:13:13.608 Atomic Write Unit (Normal): 1 00:13:13.608 Atomic Write Unit (PFail): 1 00:13:13.608 Atomic Compare & Write Unit: 1 00:13:13.608 Fused Compare & Write: Supported 00:13:13.608 Scatter-Gather List 00:13:13.608 SGL Command Set: Supported (Dword aligned) 00:13:13.608 SGL Keyed: Not Supported 00:13:13.608 SGL Bit Bucket Descriptor: Not Supported 00:13:13.608 SGL Metadata Pointer: Not Supported 00:13:13.608 Oversized SGL: Not Supported 00:13:13.608 SGL Metadata Address: Not Supported 00:13:13.608 SGL Offset: Not Supported 00:13:13.608 Transport SGL Data Block: Not Supported 00:13:13.608 Replay Protected Memory Block: Not Supported 00:13:13.608 00:13:13.608 Firmware Slot Information 00:13:13.608 ========================= 00:13:13.608 Active slot: 1 00:13:13.608 Slot 1 Firmware Revision: 24.09 00:13:13.608 00:13:13.608 00:13:13.608 Commands Supported and Effects 00:13:13.608 ============================== 00:13:13.608 Admin Commands 00:13:13.608 -------------- 00:13:13.608 Get Log Page (02h): Supported 00:13:13.608 Identify (06h): Supported 00:13:13.608 Abort (08h): Supported 00:13:13.608 Set Features (09h): Supported 00:13:13.608 Get Features (0Ah): Supported 00:13:13.608 Asynchronous Event Request (0Ch): Supported 00:13:13.608 Keep Alive (18h): Supported 00:13:13.608 I/O Commands 00:13:13.608 ------------ 00:13:13.608 Flush (00h): Supported LBA-Change 00:13:13.608 Write (01h): Supported LBA-Change 00:13:13.608 Read (02h): Supported 00:13:13.608 Compare (05h): Supported 00:13:13.608 Write 
Zeroes (08h): Supported LBA-Change 00:13:13.608 Dataset Management (09h): Supported LBA-Change 00:13:13.608 Copy (19h): Supported LBA-Change 00:13:13.608 Unknown (79h): Supported LBA-Change 00:13:13.608 Unknown (7Ah): Supported 00:13:13.608 00:13:13.608 Error Log 00:13:13.608 ========= 00:13:13.608 00:13:13.608 Arbitration 00:13:13.608 =========== 00:13:13.608 Arbitration Burst: 1 00:13:13.608 00:13:13.608 Power Management 00:13:13.608 ================ 00:13:13.608 Number of Power States: 1 00:13:13.608 Current Power State: Power State #0 00:13:13.608 Power State #0: 00:13:13.608 Max Power: 0.00 W 00:13:13.608 Non-Operational State: Operational 00:13:13.608 Entry Latency: Not Reported 00:13:13.608 Exit Latency: Not Reported 00:13:13.608 Relative Read Throughput: 0 00:13:13.608 Relative Read Latency: 0 00:13:13.608 Relative Write Throughput: 0 00:13:13.608 Relative Write Latency: 0 00:13:13.608 Idle Power: Not Reported 00:13:13.608 Active Power: Not Reported 00:13:13.608 Non-Operational Permissive Mode: Not Supported 00:13:13.608 00:13:13.608 Health Information 00:13:13.608 ================== 00:13:13.608 Critical Warnings: 00:13:13.608 Available Spare Space: OK 00:13:13.608 Temperature: OK 00:13:13.608 Device Reliability: OK 00:13:13.608 Read Only: No 00:13:13.608 Volatile Memory Backup: OK 00:13:13.608 [2024-06-07 16:21:40.223843] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:13:13.608 [2024-06-07 16:21:40.223854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:13:13.608 [2024-06-07 16:21:40.223877] nvme_ctrlr.c:4222:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD 00:13:13.608 [2024-06-07 16:21:40.223886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:13:13.608 [2024-06-07 16:21:40.223893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:13.608 [2024-06-07 16:21:40.223899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:13.608 [2024-06-07 16:21:40.223905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:13.608 [2024-06-07 16:21:40.227410] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:13:13.608 [2024-06-07 16:21:40.227420] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:13:13.608 [2024-06-07 16:21:40.228025] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:13.608 [2024-06-07 16:21:40.228065] nvme_ctrlr.c:1083:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us 00:13:13.608 [2024-06-07 16:21:40.228071] nvme_ctrlr.c:1086:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms 00:13:13.608 [2024-06-07 16:21:40.229037] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:13:13.608 [2024-06-07 16:21:40.229047] nvme_ctrlr.c:1205:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete in 0 milliseconds 00:13:13.608 [2024-06-07 16:21:40.229110] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:13:13.608 [2024-06-07 16:21:40.231053] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 
0x200000200000, Size 0x200000 00:13:13.608 Current Temperature: 0 Kelvin (-273 Celsius) 00:13:13.608 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:13:13.608 Available Spare: 0% 00:13:13.608 Available Spare Threshold: 0% 00:13:13.608 Life Percentage Used: 0% 00:13:13.608 Data Units Read: 0 00:13:13.608 Data Units Written: 0 00:13:13.608 Host Read Commands: 0 00:13:13.608 Host Write Commands: 0 00:13:13.608 Controller Busy Time: 0 minutes 00:13:13.608 Power Cycles: 0 00:13:13.608 Power On Hours: 0 hours 00:13:13.608 Unsafe Shutdowns: 0 00:13:13.608 Unrecoverable Media Errors: 0 00:13:13.608 Lifetime Error Log Entries: 0 00:13:13.608 Warning Temperature Time: 0 minutes 00:13:13.608 Critical Temperature Time: 0 minutes 00:13:13.608 00:13:13.608 Number of Queues 00:13:13.608 ================ 00:13:13.608 Number of I/O Submission Queues: 127 00:13:13.608 Number of I/O Completion Queues: 127 00:13:13.608 00:13:13.608 Active Namespaces 00:13:13.608 ================= 00:13:13.608 Namespace ID:1 00:13:13.608 Error Recovery Timeout: Unlimited 00:13:13.608 Command Set Identifier: NVM (00h) 00:13:13.608 Deallocate: Supported 00:13:13.608 Deallocated/Unwritten Error: Not Supported 00:13:13.608 Deallocated Read Value: Unknown 00:13:13.608 Deallocate in Write Zeroes: Not Supported 00:13:13.608 Deallocated Guard Field: 0xFFFF 00:13:13.608 Flush: Supported 00:13:13.608 Reservation: Supported 00:13:13.608 Namespace Sharing Capabilities: Multiple Controllers 00:13:13.608 Size (in LBAs): 131072 (0GiB) 00:13:13.608 Capacity (in LBAs): 131072 (0GiB) 00:13:13.608 Utilization (in LBAs): 131072 (0GiB) 00:13:13.608 NGUID: 09FDA2C391334593A13A3D525C86D77E 00:13:13.608 UUID: 09fda2c3-9133-4593-a13a-3d525c86d77e 00:13:13.608 Thin Provisioning: Not Supported 00:13:13.608 Per-NS Atomic Units: Yes 00:13:13.608 Atomic Boundary Size (Normal): 0 00:13:13.608 Atomic Boundary Size (PFail): 0 00:13:13.608 Atomic Boundary Offset: 0 00:13:13.608 Maximum Single Source Range Length: 65535 00:13:13.608 Maximum Copy Length: 65535 
00:13:13.608 Maximum Source Range Count: 1 00:13:13.608 NGUID/EUI64 Never Reused: No 00:13:13.608 Namespace Write Protected: No 00:13:13.608 Number of LBA Formats: 1 00:13:13.608 Current LBA Format: LBA Format #00 00:13:13.608 LBA Format #00: Data Size: 512 Metadata Size: 0 00:13:13.608 00:13:13.608 16:21:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:13:13.608 EAL: No free 2048 kB hugepages reported on node 1 00:13:13.608 [2024-06-07 16:21:40.414059] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:18.893 Initializing NVMe Controllers 00:13:18.893 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:13:18.893 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:13:18.893 Initialization complete. Launching workers. 
00:13:18.893 ========================================================
00:13:18.893 Latency(us)
00:13:18.893 Device Information : IOPS MiB/s Average min max
00:13:18.893 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 40184.71 156.97 3185.13 834.27 6817.21
00:13:18.893 ========================================================
00:13:18.893 Total : 40184.71 156.97 3185.13 834.27 6817.21
00:13:18.893
00:13:18.893 [2024-06-07 16:21:45.437456] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller
00:13:18.893 16:21:45 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2
00:13:18.893 EAL: No free 2048 kB hugepages reported on node 1
00:13:18.893 [2024-06-07 16:21:45.611328] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller
00:13:24.180 Initializing NVMe Controllers
00:13:24.180 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1
00:13:24.180 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1
00:13:24.180 Initialization complete. Launching workers.
00:13:24.180 ========================================================
00:13:24.180 Latency(us)
00:13:24.180 Device Information : IOPS MiB/s Average min max
00:13:24.180 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16055.97 62.72 7977.67 5993.99 9970.10
00:13:24.180 ========================================================
00:13:24.180 Total : 16055.97 62.72 7977.67 5993.99 9970.10
00:13:24.180
00:13:24.180 [2024-06-07 16:21:50.652337] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller
00:13:24.180 16:21:50 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE
00:13:24.180 EAL: No free 2048 kB hugepages reported on node 1
00:13:24.180 [2024-06-07 16:21:50.835199] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller
00:13:29.467 [2024-06-07 16:21:55.904638] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller
00:13:29.467 Initializing NVMe Controllers
00:13:29.467 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1
00:13:29.467 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1
00:13:29.467 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1
00:13:29.467 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2
00:13:29.467 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3
00:13:29.467 Initialization complete. Launching workers.
00:13:29.467 Starting thread on core 2 00:13:29.467 Starting thread on core 3 00:13:29.467 Starting thread on core 1 00:13:29.467 16:21:55 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:13:29.467 EAL: No free 2048 kB hugepages reported on node 1 00:13:29.467 [2024-06-07 16:21:56.155670] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:32.769 [2024-06-07 16:21:59.569570] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:32.769 Initializing NVMe Controllers 00:13:32.769 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:13:32.769 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:13:32.769 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:13:32.769 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:13:32.769 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:13:32.769 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:13:32.769 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:13:32.769 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:13:32.769 Initialization complete. Launching workers. 
00:13:32.769 Starting thread on core 1 with urgent priority queue
00:13:32.769 Starting thread on core 2 with urgent priority queue
00:13:32.769 Starting thread on core 3 with urgent priority queue
00:13:32.769 Starting thread on core 0 with urgent priority queue
00:13:32.769 SPDK bdev Controller (SPDK1 ) core 0: 6970.00 IO/s 14.35 secs/100000 ios
00:13:32.769 SPDK bdev Controller (SPDK1 ) core 1: 10239.33 IO/s 9.77 secs/100000 ios
00:13:32.769 SPDK bdev Controller (SPDK1 ) core 2: 6929.00 IO/s 14.43 secs/100000 ios
00:13:32.769 SPDK bdev Controller (SPDK1 ) core 3: 9111.67 IO/s 10.97 secs/100000 ios
00:13:32.769 ========================================================
00:13:32.769
00:13:32.769 16:21:59 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1'
00:13:33.036 EAL: No free 2048 kB hugepages reported on node 1
00:13:33.036 [2024-06-07 16:21:59.821695] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller
00:13:33.036 Initializing NVMe Controllers
00:13:33.036 Attaching to /var/run/vfio-user/domain/vfio-user1/1
00:13:33.036 Attached to /var/run/vfio-user/domain/vfio-user1/1
00:13:33.036 Namespace ID: 1 size: 0GB
00:13:33.036 Initialization complete.
00:13:33.036 INFO: using host memory buffer for IO
00:13:33.036 Hello world!
00:13:33.036 [2024-06-07 16:21:59.855903] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:33.295 16:21:59 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:13:33.295 EAL: No free 2048 kB hugepages reported on node 1 00:13:33.295 [2024-06-07 16:22:00.115844] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:34.705 Initializing NVMe Controllers 00:13:34.705 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:13:34.705 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:13:34.705 Initialization complete. Launching workers. 00:13:34.705 submit (in ns) avg, min, max = 8218.6, 3930.8, 6991362.5 00:13:34.705 complete (in ns) avg, min, max = 16666.3, 2375.0, 5992556.7 00:13:34.705 00:13:34.705 Submit histogram 00:13:34.705 ================ 00:13:34.705 Range in us Cumulative Count 00:13:34.705 3.920 - 3.947: 0.5079% ( 97) 00:13:34.705 3.947 - 3.973: 4.9432% ( 847) 00:13:34.705 3.973 - 4.000: 13.7770% ( 1687) 00:13:34.705 4.000 - 4.027: 26.6010% ( 2449) 00:13:34.705 4.027 - 4.053: 37.1315% ( 2011) 00:13:34.705 4.053 - 4.080: 47.2273% ( 1928) 00:13:34.705 4.080 - 4.107: 63.6226% ( 3131) 00:13:34.705 4.107 - 4.133: 79.2114% ( 2977) 00:13:34.705 4.133 - 4.160: 90.4435% ( 2145) 00:13:34.705 4.160 - 4.187: 96.0570% ( 1072) 00:13:34.705 4.187 - 4.213: 98.3924% ( 446) 00:13:34.705 4.213 - 4.240: 99.2145% ( 157) 00:13:34.705 4.240 - 4.267: 99.4554% ( 46) 00:13:34.705 4.267 - 4.293: 99.5078% ( 10) 00:13:34.705 4.293 - 4.320: 99.5130% ( 1) 00:13:34.705 4.560 - 4.587: 99.5182% ( 1) 00:13:34.705 4.613 - 4.640: 99.5235% ( 1) 00:13:34.705 4.747 - 4.773: 99.5287% ( 1) 00:13:34.705 4.800 - 4.827: 99.5340% ( 1) 00:13:34.705 4.880 - 4.907: 99.5392% ( 1) 
00:13:34.705 5.120 - 5.147: 99.5444% ( 1) 00:13:34.705 5.147 - 5.173: 99.5497% ( 1) 00:13:34.705 5.440 - 5.467: 99.5549% ( 1) 00:13:34.705 5.547 - 5.573: 99.5601% ( 1) 00:13:34.705 5.573 - 5.600: 99.5654% ( 1) 00:13:34.705 5.813 - 5.840: 99.5706% ( 1) 00:13:34.705 5.920 - 5.947: 99.5863% ( 3) 00:13:34.705 6.000 - 6.027: 99.5916% ( 1) 00:13:34.705 6.027 - 6.053: 99.5968% ( 1) 00:13:34.705 6.053 - 6.080: 99.6020% ( 1) 00:13:34.705 6.080 - 6.107: 99.6125% ( 2) 00:13:34.705 6.160 - 6.187: 99.6177% ( 1) 00:13:34.705 6.213 - 6.240: 99.6387% ( 4) 00:13:34.705 6.267 - 6.293: 99.6439% ( 1) 00:13:34.705 6.320 - 6.347: 99.6544% ( 2) 00:13:34.705 6.427 - 6.453: 99.6649% ( 2) 00:13:34.705 6.533 - 6.560: 99.6753% ( 2) 00:13:34.705 6.560 - 6.587: 99.6806% ( 1) 00:13:34.705 6.587 - 6.613: 99.6858% ( 1) 00:13:34.705 6.667 - 6.693: 99.6963% ( 2) 00:13:34.705 6.747 - 6.773: 99.7068% ( 2) 00:13:34.705 6.773 - 6.800: 99.7172% ( 2) 00:13:34.705 6.827 - 6.880: 99.7277% ( 2) 00:13:34.705 6.880 - 6.933: 99.7382% ( 2) 00:13:34.705 6.933 - 6.987: 99.7434% ( 1) 00:13:34.705 6.987 - 7.040: 99.7539% ( 2) 00:13:34.705 7.040 - 7.093: 99.7591% ( 1) 00:13:34.705 7.200 - 7.253: 99.7748% ( 3) 00:13:34.705 7.253 - 7.307: 99.8010% ( 5) 00:13:34.705 7.307 - 7.360: 99.8115% ( 2) 00:13:34.705 7.360 - 7.413: 99.8167% ( 1) 00:13:34.705 7.413 - 7.467: 99.8220% ( 1) 00:13:34.705 7.467 - 7.520: 99.8324% ( 2) 00:13:34.705 7.520 - 7.573: 99.8429% ( 2) 00:13:34.705 7.573 - 7.627: 99.8481% ( 1) 00:13:34.705 7.680 - 7.733: 99.8534% ( 1) 00:13:34.705 7.733 - 7.787: 99.8586% ( 1) 00:13:34.706 7.840 - 7.893: 99.8691% ( 2) 00:13:34.706 7.893 - 7.947: 99.8743% ( 1) 00:13:34.706 8.000 - 8.053: 99.8796% ( 1) 00:13:34.706 8.107 - 8.160: 99.8848% ( 1) 00:13:34.706 8.267 - 8.320: 99.8900% ( 1) 00:13:34.706 9.120 - 9.173: 99.8953% ( 1) 00:13:34.706 12.747 - 12.800: 99.9005% ( 1) 00:13:34.706 3986.773 - 4014.080: 99.9948% ( 18) 00:13:34.706 6990.507 - 7045.120: 100.0000% ( 1) 00:13:34.706 00:13:34.706 Complete histogram 
00:13:34.706 ================== 00:13:34.706 Range in us Cumulative Count 00:13:34.706 2.373 - 2.387: 0.0052% ( 1) 00:13:34.706 2.387 - 2.400: 0.5341% ( 101) 00:13:34.706 2.400 - 2.413: 1.2829% ( 143) 00:13:34.706 2.413 - 2.427: 1.3877% ( 20) 00:13:34.706 2.427 - 2.440: 1.5029% ( 22) 00:13:34.706 2.440 - 2.453: 1.5814% ( 15) 00:13:34.706 2.453 - 2.467: 44.6667% ( 8228) 00:13:34.706 2.467 - 2.480: 60.9991% ( 3119) 00:13:34.706 2.480 - 2.493: 71.5139% ( 2008) 00:13:34.706 2.493 - 2.507: 77.2582% ( 1097) 00:13:34.706 2.507 - 2.520: 80.8975% ( 695) 00:13:34.706 2.520 - [2024-06-07 16:22:01.136654] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:34.706 2.533: 84.4635% ( 681) 00:13:34.706 2.533 - 2.547: 91.3180% ( 1309) 00:13:34.706 2.547 - 2.560: 95.5752% ( 813) 00:13:34.706 2.560 - 2.573: 97.3399% ( 337) 00:13:34.706 2.573 - 2.587: 98.5076% ( 223) 00:13:34.706 2.587 - 2.600: 99.1831% ( 129) 00:13:34.706 2.600 - 2.613: 99.3873% ( 39) 00:13:34.706 2.613 - 2.627: 99.4449% ( 11) 00:13:34.706 2.627 - 2.640: 99.4554% ( 2) 00:13:34.706 2.667 - 2.680: 99.4606% ( 1) 00:13:34.706 4.800 - 4.827: 99.4659% ( 1) 00:13:34.706 4.880 - 4.907: 99.4711% ( 1) 00:13:34.706 4.907 - 4.933: 99.4764% ( 1) 00:13:34.706 4.933 - 4.960: 99.4816% ( 1) 00:13:34.706 4.960 - 4.987: 99.4921% ( 2) 00:13:34.706 4.987 - 5.013: 99.4973% ( 1) 00:13:34.706 5.067 - 5.093: 99.5025% ( 1) 00:13:34.706 5.173 - 5.200: 99.5078% ( 1) 00:13:34.706 5.200 - 5.227: 99.5130% ( 1) 00:13:34.706 5.307 - 5.333: 99.5287% ( 3) 00:13:34.706 5.333 - 5.360: 99.5340% ( 1) 00:13:34.706 5.387 - 5.413: 99.5497% ( 3) 00:13:34.706 5.413 - 5.440: 99.5601% ( 2) 00:13:34.706 5.520 - 5.547: 99.5706% ( 2) 00:13:34.706 5.600 - 5.627: 99.5758% ( 1) 00:13:34.706 5.680 - 5.707: 99.5811% ( 1) 00:13:34.706 5.733 - 5.760: 99.5863% ( 1) 00:13:34.706 5.787 - 5.813: 99.5916% ( 1) 00:13:34.706 6.000 - 6.027: 99.5968% ( 1) 00:13:34.706 6.080 - 6.107: 99.6020% ( 1) 00:13:34.706 6.107 - 6.133: 
99.6125% ( 2) 00:13:34.706 6.667 - 6.693: 99.6177% ( 1) 00:13:34.706 6.720 - 6.747: 99.6230% ( 1) 00:13:34.706 8.053 - 8.107: 99.6282% ( 1) 00:13:34.706 12.907 - 12.960: 99.6335% ( 1) 00:13:34.706 13.013 - 13.067: 99.6387% ( 1) 00:13:34.706 45.013 - 45.227: 99.6439% ( 1) 00:13:34.706 1017.173 - 1024.000: 99.6492% ( 1) 00:13:34.706 3986.773 - 4014.080: 99.9948% ( 66) 00:13:34.706 5980.160 - 6007.467: 100.0000% ( 1) 00:13:34.706 00:13:34.706 16:22:01 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:13:34.706 16:22:01 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:13:34.706 16:22:01 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:13:34.706 16:22:01 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:13:34.706 16:22:01 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:13:34.706 [ 00:13:34.706 { 00:13:34.706 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:13:34.706 "subtype": "Discovery", 00:13:34.706 "listen_addresses": [], 00:13:34.706 "allow_any_host": true, 00:13:34.706 "hosts": [] 00:13:34.706 }, 00:13:34.706 { 00:13:34.706 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:13:34.706 "subtype": "NVMe", 00:13:34.706 "listen_addresses": [ 00:13:34.706 { 00:13:34.706 "trtype": "VFIOUSER", 00:13:34.706 "adrfam": "IPv4", 00:13:34.706 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:13:34.706 "trsvcid": "0" 00:13:34.706 } 00:13:34.706 ], 00:13:34.706 "allow_any_host": true, 00:13:34.706 "hosts": [], 00:13:34.706 "serial_number": "SPDK1", 00:13:34.706 "model_number": "SPDK bdev Controller", 00:13:34.706 "max_namespaces": 32, 00:13:34.706 "min_cntlid": 1, 00:13:34.706 "max_cntlid": 65519, 00:13:34.706 "namespaces": [ 00:13:34.706 
{ 00:13:34.706 "nsid": 1, 00:13:34.706 "bdev_name": "Malloc1", 00:13:34.706 "name": "Malloc1", 00:13:34.706 "nguid": "09FDA2C391334593A13A3D525C86D77E", 00:13:34.706 "uuid": "09fda2c3-9133-4593-a13a-3d525c86d77e" 00:13:34.706 } 00:13:34.706 ] 00:13:34.706 }, 00:13:34.706 { 00:13:34.706 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:13:34.706 "subtype": "NVMe", 00:13:34.706 "listen_addresses": [ 00:13:34.706 { 00:13:34.706 "trtype": "VFIOUSER", 00:13:34.706 "adrfam": "IPv4", 00:13:34.706 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:13:34.706 "trsvcid": "0" 00:13:34.706 } 00:13:34.706 ], 00:13:34.706 "allow_any_host": true, 00:13:34.706 "hosts": [], 00:13:34.706 "serial_number": "SPDK2", 00:13:34.706 "model_number": "SPDK bdev Controller", 00:13:34.706 "max_namespaces": 32, 00:13:34.706 "min_cntlid": 1, 00:13:34.706 "max_cntlid": 65519, 00:13:34.706 "namespaces": [ 00:13:34.706 { 00:13:34.706 "nsid": 1, 00:13:34.706 "bdev_name": "Malloc2", 00:13:34.706 "name": "Malloc2", 00:13:34.706 "nguid": "74B23B3C2DA34EEDB46010F15076022C", 00:13:34.706 "uuid": "74b23b3c-2da3-4eed-b460-10f15076022c" 00:13:34.706 } 00:13:34.706 ] 00:13:34.706 } 00:13:34.706 ] 00:13:34.706 16:22:01 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:13:34.706 16:22:01 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=3010406 00:13:34.706 16:22:01 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:13:34.706 16:22:01 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1264 -- # local i=0 00:13:34.706 16:22:01 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:13:34.706 16:22:01 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:13:34.706 16:22:01 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1271 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:13:34.706 16:22:01 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1275 -- # return 0 00:13:34.706 16:22:01 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:13:34.706 16:22:01 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:13:34.706 EAL: No free 2048 kB hugepages reported on node 1 00:13:34.706 Malloc3 00:13:34.706 [2024-06-07 16:22:01.531835] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:34.706 16:22:01 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:13:34.968 [2024-06-07 16:22:01.687883] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:34.968 16:22:01 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:13:34.968 Asynchronous Event Request test 00:13:34.968 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:13:34.968 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:13:34.968 Registering asynchronous event callbacks... 00:13:34.968 Starting namespace attribute notice tests for all controllers... 00:13:34.968 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:13:34.968 aer_cb - Changed Namespace 00:13:34.968 Cleaning up... 
00:13:35.232 [ 00:13:35.232 { 00:13:35.232 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:13:35.232 "subtype": "Discovery", 00:13:35.232 "listen_addresses": [], 00:13:35.232 "allow_any_host": true, 00:13:35.232 "hosts": [] 00:13:35.232 }, 00:13:35.232 { 00:13:35.232 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:13:35.232 "subtype": "NVMe", 00:13:35.232 "listen_addresses": [ 00:13:35.232 { 00:13:35.232 "trtype": "VFIOUSER", 00:13:35.232 "adrfam": "IPv4", 00:13:35.232 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:13:35.232 "trsvcid": "0" 00:13:35.232 } 00:13:35.232 ], 00:13:35.232 "allow_any_host": true, 00:13:35.232 "hosts": [], 00:13:35.232 "serial_number": "SPDK1", 00:13:35.232 "model_number": "SPDK bdev Controller", 00:13:35.232 "max_namespaces": 32, 00:13:35.232 "min_cntlid": 1, 00:13:35.232 "max_cntlid": 65519, 00:13:35.232 "namespaces": [ 00:13:35.232 { 00:13:35.232 "nsid": 1, 00:13:35.232 "bdev_name": "Malloc1", 00:13:35.232 "name": "Malloc1", 00:13:35.232 "nguid": "09FDA2C391334593A13A3D525C86D77E", 00:13:35.232 "uuid": "09fda2c3-9133-4593-a13a-3d525c86d77e" 00:13:35.232 }, 00:13:35.232 { 00:13:35.232 "nsid": 2, 00:13:35.232 "bdev_name": "Malloc3", 00:13:35.232 "name": "Malloc3", 00:13:35.232 "nguid": "E5FA17298697433EB8642D1E095A2C9B", 00:13:35.232 "uuid": "e5fa1729-8697-433e-b864-2d1e095a2c9b" 00:13:35.232 } 00:13:35.232 ] 00:13:35.232 }, 00:13:35.232 { 00:13:35.232 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:13:35.232 "subtype": "NVMe", 00:13:35.232 "listen_addresses": [ 00:13:35.232 { 00:13:35.232 "trtype": "VFIOUSER", 00:13:35.232 "adrfam": "IPv4", 00:13:35.232 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:13:35.232 "trsvcid": "0" 00:13:35.232 } 00:13:35.232 ], 00:13:35.232 "allow_any_host": true, 00:13:35.232 "hosts": [], 00:13:35.232 "serial_number": "SPDK2", 00:13:35.232 "model_number": "SPDK bdev Controller", 00:13:35.232 "max_namespaces": 32, 00:13:35.232 "min_cntlid": 1, 00:13:35.232 "max_cntlid": 65519, 00:13:35.232 "namespaces": [ 
00:13:35.232 { 00:13:35.232 "nsid": 1, 00:13:35.232 "bdev_name": "Malloc2", 00:13:35.232 "name": "Malloc2", 00:13:35.232 "nguid": "74B23B3C2DA34EEDB46010F15076022C", 00:13:35.232 "uuid": "74b23b3c-2da3-4eed-b460-10f15076022c" 00:13:35.232 } 00:13:35.232 ] 00:13:35.232 } 00:13:35.232 ] 00:13:35.232 16:22:01 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 3010406 00:13:35.232 16:22:01 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:13:35.232 16:22:01 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:13:35.232 16:22:01 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:13:35.232 16:22:01 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:13:35.232 [2024-06-07 16:22:01.908415] Starting SPDK v24.09-pre git sha1 5a57befde / DPDK 24.03.0 initialization... 
00:13:35.232 [2024-06-07 16:22:01.908460] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3010425 ] 00:13:35.232 EAL: No free 2048 kB hugepages reported on node 1 00:13:35.232 [2024-06-07 16:22:01.941927] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:13:35.232 [2024-06-07 16:22:01.944139] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:13:35.232 [2024-06-07 16:22:01.944160] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7ff6f4f24000 00:13:35.232 [2024-06-07 16:22:01.945143] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:35.232 [2024-06-07 16:22:01.946149] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:35.232 [2024-06-07 16:22:01.947154] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:35.232 [2024-06-07 16:22:01.948162] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:13:35.232 [2024-06-07 16:22:01.949170] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:13:35.232 [2024-06-07 16:22:01.950175] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:35.232 [2024-06-07 16:22:01.951184] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, 
Flags 0x3, Cap offset 0 00:13:35.232 [2024-06-07 16:22:01.952186] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:35.232 [2024-06-07 16:22:01.953194] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:13:35.232 [2024-06-07 16:22:01.953207] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7ff6f4f19000 00:13:35.232 [2024-06-07 16:22:01.954539] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:13:35.232 [2024-06-07 16:22:01.974586] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:13:35.232 [2024-06-07 16:22:01.974605] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:13:35.232 [2024-06-07 16:22:01.976654] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:13:35.232 [2024-06-07 16:22:01.976697] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:13:35.232 [2024-06-07 16:22:01.976776] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for connect adminq (no timeout) 00:13:35.232 [2024-06-07 16:22:01.976792] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:13:35.232 [2024-06-07 16:22:01.976797] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 00:13:35.233 [2024-06-07 16:22:01.977656] nvme_vfio_user.c: 
83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:13:35.233 [2024-06-07 16:22:01.977665] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:13:35.233 [2024-06-07 16:22:01.977672] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:13:35.233 [2024-06-07 16:22:01.978661] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:13:35.233 [2024-06-07 16:22:01.978669] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:13:35.233 [2024-06-07 16:22:01.978676] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:13:35.233 [2024-06-07 16:22:01.979668] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:13:35.233 [2024-06-07 16:22:01.979677] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:13:35.233 [2024-06-07 16:22:01.980672] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:13:35.233 [2024-06-07 16:22:01.980680] nvme_ctrlr.c:3750:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:13:35.233 [2024-06-07 16:22:01.980685] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:13:35.233 [2024-06-07 16:22:01.980692] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:13:35.233 [2024-06-07 16:22:01.980797] nvme_ctrlr.c:3943:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:13:35.233 [2024-06-07 16:22:01.980802] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:13:35.233 [2024-06-07 16:22:01.980807] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:13:35.233 [2024-06-07 16:22:01.981679] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:13:35.233 [2024-06-07 16:22:01.985407] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:13:35.233 [2024-06-07 16:22:01.985700] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:13:35.233 [2024-06-07 16:22:01.986707] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:35.233 [2024-06-07 16:22:01.986744] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:13:35.233 [2024-06-07 16:22:01.987720] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:13:35.233 [2024-06-07 16:22:01.987729] nvme_ctrlr.c:3785:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:13:35.233 [2024-06-07 16:22:01.987734] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:13:35.233 [2024-06-07 16:22:01.987755] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:13:35.233 [2024-06-07 16:22:01.987766] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:13:35.233 [2024-06-07 16:22:01.987780] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:35.233 [2024-06-07 16:22:01.987785] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:35.233 [2024-06-07 16:22:01.987796] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:35.233 [2024-06-07 16:22:01.996409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:13:35.233 [2024-06-07 16:22:01.996420] nvme_ctrlr.c:1985:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:13:35.233 [2024-06-07 16:22:01.996425] nvme_ctrlr.c:1989:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:13:35.233 [2024-06-07 16:22:01.996430] nvme_ctrlr.c:1992:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:13:35.233 [2024-06-07 16:22:01.996437] nvme_ctrlr.c:2003:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:13:35.233 [2024-06-07 16:22:01.996441] nvme_ctrlr.c:2016:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:13:35.233 [2024-06-07 16:22:01.996446] 
nvme_ctrlr.c:2031:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:13:35.233 [2024-06-07 16:22:01.996450] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:13:35.233 [2024-06-07 16:22:01.996458] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 00:13:35.233 [2024-06-07 16:22:01.996468] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:13:35.233 [2024-06-07 16:22:02.004407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:13:35.233 [2024-06-07 16:22:02.004420] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:35.233 [2024-06-07 16:22:02.004428] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:35.233 [2024-06-07 16:22:02.004436] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:35.233 [2024-06-07 16:22:02.004447] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:35.233 [2024-06-07 16:22:02.004452] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:13:35.233 [2024-06-07 16:22:02.004460] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:13:35.233 [2024-06-07 16:22:02.004469] nvme_qpair.c: 
213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:13:35.233 [2024-06-07 16:22:02.012407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:13:35.233 [2024-06-07 16:22:02.012414] nvme_ctrlr.c:2891:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:13:35.233 [2024-06-07 16:22:02.012419] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:13:35.233 [2024-06-07 16:22:02.012426] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:13:35.233 [2024-06-07 16:22:02.012431] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 00:13:35.233 [2024-06-07 16:22:02.012440] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:13:35.233 [2024-06-07 16:22:02.020408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:13:35.233 [2024-06-07 16:22:02.020461] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms) 00:13:35.233 [2024-06-07 16:22:02.020469] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms) 00:13:35.233 [2024-06-07 16:22:02.020476] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:13:35.233 [2024-06-07 16:22:02.020481] 
nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:13:35.233 [2024-06-07 16:22:02.020487] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:13:35.233 [2024-06-07 16:22:02.028407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:13:35.233 [2024-06-07 16:22:02.028425] nvme_ctrlr.c:4558:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:13:35.233 [2024-06-07 16:22:02.028434] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:13:35.233 [2024-06-07 16:22:02.028442] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:13:35.233 [2024-06-07 16:22:02.028449] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:35.233 [2024-06-07 16:22:02.028453] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:35.233 [2024-06-07 16:22:02.028459] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:35.233 [2024-06-07 16:22:02.036407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:13:35.233 [2024-06-07 16:22:02.036420] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:13:35.233 [2024-06-07 16:22:02.036432] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id descriptors (timeout 
30000 ms) 00:13:35.233 [2024-06-07 16:22:02.036440] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:35.233 [2024-06-07 16:22:02.036444] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:35.233 [2024-06-07 16:22:02.036450] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:35.233 [2024-06-07 16:22:02.044407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:13:35.233 [2024-06-07 16:22:02.044416] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:13:35.234 [2024-06-07 16:22:02.044423] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 00:13:35.234 [2024-06-07 16:22:02.044431] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:13:35.234 [2024-06-07 16:22:02.044437] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms) 00:13:35.234 [2024-06-07 16:22:02.044442] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:13:35.234 [2024-06-07 16:22:02.044447] nvme_ctrlr.c:2991:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - Host ID 00:13:35.234 [2024-06-07 16:22:02.044451] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:13:35.234 [2024-06-07 
16:22:02.044456] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:13:35.234 [2024-06-07 16:22:02.044474] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:13:35.234 [2024-06-07 16:22:02.052406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:13:35.234 [2024-06-07 16:22:02.052419] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:13:35.234 [2024-06-07 16:22:02.060407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:13:35.234 [2024-06-07 16:22:02.060420] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:13:35.234 [2024-06-07 16:22:02.068408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:13:35.234 [2024-06-07 16:22:02.068420] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:13:35.234 [2024-06-07 16:22:02.076408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:13:35.234 [2024-06-07 16:22:02.076420] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:13:35.234 [2024-06-07 16:22:02.076425] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:13:35.234 [2024-06-07 16:22:02.076428] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:13:35.234 [2024-06-07 16:22:02.076432] nvme_pcie_common.c:1254:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 
00:13:35.234 [2024-06-07 16:22:02.076438] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:13:35.234 [2024-06-07 16:22:02.076448] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:13:35.234 [2024-06-07 16:22:02.076452] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:13:35.234 [2024-06-07 16:22:02.076458] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:13:35.234 [2024-06-07 16:22:02.076465] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:13:35.234 [2024-06-07 16:22:02.076469] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:35.234 [2024-06-07 16:22:02.076475] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:35.234 [2024-06-07 16:22:02.076482] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:13:35.234 [2024-06-07 16:22:02.076486] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:13:35.234 [2024-06-07 16:22:02.076492] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:13:35.500 [2024-06-07 16:22:02.084408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:13:35.500 [2024-06-07 16:22:02.084424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:13:35.500 [2024-06-07 
16:22:02.084434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:13:35.500 [2024-06-07 16:22:02.084444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:13:35.500 ===================================================== 00:13:35.500 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:13:35.500 ===================================================== 00:13:35.500 Controller Capabilities/Features 00:13:35.500 ================================ 00:13:35.500 Vendor ID: 4e58 00:13:35.500 Subsystem Vendor ID: 4e58 00:13:35.500 Serial Number: SPDK2 00:13:35.500 Model Number: SPDK bdev Controller 00:13:35.500 Firmware Version: 24.09 00:13:35.500 Recommended Arb Burst: 6 00:13:35.500 IEEE OUI Identifier: 8d 6b 50 00:13:35.500 Multi-path I/O 00:13:35.500 May have multiple subsystem ports: Yes 00:13:35.500 May have multiple controllers: Yes 00:13:35.500 Associated with SR-IOV VF: No 00:13:35.500 Max Data Transfer Size: 131072 00:13:35.500 Max Number of Namespaces: 32 00:13:35.500 Max Number of I/O Queues: 127 00:13:35.500 NVMe Specification Version (VS): 1.3 00:13:35.500 NVMe Specification Version (Identify): 1.3 00:13:35.500 Maximum Queue Entries: 256 00:13:35.500 Contiguous Queues Required: Yes 00:13:35.500 Arbitration Mechanisms Supported 00:13:35.500 Weighted Round Robin: Not Supported 00:13:35.500 Vendor Specific: Not Supported 00:13:35.500 Reset Timeout: 15000 ms 00:13:35.500 Doorbell Stride: 4 bytes 00:13:35.500 NVM Subsystem Reset: Not Supported 00:13:35.500 Command Sets Supported 00:13:35.500 NVM Command Set: Supported 00:13:35.500 Boot Partition: Not Supported 00:13:35.500 Memory Page Size Minimum: 4096 bytes 00:13:35.500 Memory Page Size Maximum: 4096 bytes 00:13:35.500 Persistent Memory Region: Not Supported 00:13:35.500 Optional Asynchronous Events Supported 00:13:35.500 Namespace 
Attribute Notices: Supported 00:13:35.500 Firmware Activation Notices: Not Supported 00:13:35.500 ANA Change Notices: Not Supported 00:13:35.500 PLE Aggregate Log Change Notices: Not Supported 00:13:35.500 LBA Status Info Alert Notices: Not Supported 00:13:35.500 EGE Aggregate Log Change Notices: Not Supported 00:13:35.500 Normal NVM Subsystem Shutdown event: Not Supported 00:13:35.500 Zone Descriptor Change Notices: Not Supported 00:13:35.500 Discovery Log Change Notices: Not Supported 00:13:35.500 Controller Attributes 00:13:35.500 128-bit Host Identifier: Supported 00:13:35.500 Non-Operational Permissive Mode: Not Supported 00:13:35.500 NVM Sets: Not Supported 00:13:35.500 Read Recovery Levels: Not Supported 00:13:35.500 Endurance Groups: Not Supported 00:13:35.500 Predictable Latency Mode: Not Supported 00:13:35.500 Traffic Based Keep Alive: Not Supported 00:13:35.500 Namespace Granularity: Not Supported 00:13:35.500 SQ Associations: Not Supported 00:13:35.500 UUID List: Not Supported 00:13:35.500 Multi-Domain Subsystem: Not Supported 00:13:35.500 Fixed Capacity Management: Not Supported 00:13:35.500 Variable Capacity Management: Not Supported 00:13:35.500 Delete Endurance Group: Not Supported 00:13:35.500 Delete NVM Set: Not Supported 00:13:35.500 Extended LBA Formats Supported: Not Supported 00:13:35.500 Flexible Data Placement Supported: Not Supported 00:13:35.500 00:13:35.500 Controller Memory Buffer Support 00:13:35.500 ================================ 00:13:35.500 Supported: No 00:13:35.500 00:13:35.500 Persistent Memory Region Support 00:13:35.500 ================================ 00:13:35.500 Supported: No 00:13:35.500 00:13:35.500 Admin Command Set Attributes 00:13:35.500 ============================ 00:13:35.500 Security Send/Receive: Not Supported 00:13:35.500 Format NVM: Not Supported 00:13:35.500 Firmware Activate/Download: Not Supported 00:13:35.500 Namespace Management: Not Supported 00:13:35.500 Device Self-Test: Not Supported 00:13:35.500 
Directives: Not Supported 00:13:35.500 NVMe-MI: Not Supported 00:13:35.500 Virtualization Management: Not Supported 00:13:35.500 Doorbell Buffer Config: Not Supported 00:13:35.500 Get LBA Status Capability: Not Supported 00:13:35.500 Command & Feature Lockdown Capability: Not Supported 00:13:35.500 Abort Command Limit: 4 00:13:35.500 Async Event Request Limit: 4 00:13:35.500 Number of Firmware Slots: N/A 00:13:35.500 Firmware Slot 1 Read-Only: N/A 00:13:35.500 Firmware Activation Without Reset: N/A 00:13:35.500 Multiple Update Detection Support: N/A 00:13:35.500 Firmware Update Granularity: No Information Provided 00:13:35.500 Per-Namespace SMART Log: No 00:13:35.500 Asymmetric Namespace Access Log Page: Not Supported 00:13:35.500 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:13:35.500 Command Effects Log Page: Supported 00:13:35.500 Get Log Page Extended Data: Supported 00:13:35.500 Telemetry Log Pages: Not Supported 00:13:35.500 Persistent Event Log Pages: Not Supported 00:13:35.500 Supported Log Pages Log Page: May Support 00:13:35.500 Commands Supported & Effects Log Page: Not Supported 00:13:35.500 Feature Identifiers & Effects Log Page: May Support 00:13:35.500 NVMe-MI Commands & Effects Log Page: May Support 00:13:35.500 Data Area 4 for Telemetry Log: Not Supported 00:13:35.500 Error Log Page Entries Supported: 128 00:13:35.500 Keep Alive: Supported 00:13:35.500 Keep Alive Granularity: 10000 ms 00:13:35.500 00:13:35.500 NVM Command Set Attributes 00:13:35.500 ========================== 00:13:35.500 Submission Queue Entry Size 00:13:35.500 Max: 64 00:13:35.500 Min: 64 00:13:35.500 Completion Queue Entry Size 00:13:35.500 Max: 16 00:13:35.500 Min: 16 00:13:35.500 Number of Namespaces: 32 00:13:35.500 Compare Command: Supported 00:13:35.500 Write Uncorrectable Command: Not Supported 00:13:35.500 Dataset Management Command: Supported 00:13:35.500 Write Zeroes Command: Supported 00:13:35.500 Set Features Save Field: Not Supported 00:13:35.500 Reservations: Not 
Supported 00:13:35.500 Timestamp: Not Supported 00:13:35.500 Copy: Supported 00:13:35.500 Volatile Write Cache: Present 00:13:35.500 Atomic Write Unit (Normal): 1 00:13:35.500 Atomic Write Unit (PFail): 1 00:13:35.500 Atomic Compare & Write Unit: 1 00:13:35.500 Fused Compare & Write: Supported 00:13:35.500 Scatter-Gather List 00:13:35.500 SGL Command Set: Supported (Dword aligned) 00:13:35.500 SGL Keyed: Not Supported 00:13:35.500 SGL Bit Bucket Descriptor: Not Supported 00:13:35.500 SGL Metadata Pointer: Not Supported 00:13:35.500 Oversized SGL: Not Supported 00:13:35.500 SGL Metadata Address: Not Supported 00:13:35.500 SGL Offset: Not Supported 00:13:35.500 Transport SGL Data Block: Not Supported 00:13:35.500 Replay Protected Memory Block: Not Supported 00:13:35.500 00:13:35.500 Firmware Slot Information 00:13:35.500 ========================= 00:13:35.500 Active slot: 1 00:13:35.500 Slot 1 Firmware Revision: 24.09 00:13:35.500 00:13:35.500 00:13:35.500 Commands Supported and Effects 00:13:35.500 ============================== 00:13:35.500 Admin Commands 00:13:35.500 -------------- 00:13:35.500 Get Log Page (02h): Supported 00:13:35.500 Identify (06h): Supported 00:13:35.500 Abort (08h): Supported 00:13:35.500 Set Features (09h): Supported 00:13:35.500 Get Features (0Ah): Supported 00:13:35.500 Asynchronous Event Request (0Ch): Supported 00:13:35.500 Keep Alive (18h): Supported 00:13:35.500 I/O Commands 00:13:35.500 ------------ 00:13:35.500 Flush (00h): Supported LBA-Change 00:13:35.500 Write (01h): Supported LBA-Change 00:13:35.500 Read (02h): Supported 00:13:35.500 Compare (05h): Supported 00:13:35.500 Write Zeroes (08h): Supported LBA-Change 00:13:35.500 Dataset Management (09h): Supported LBA-Change 00:13:35.500 Copy (19h): Supported LBA-Change 00:13:35.500 Unknown (79h): Supported LBA-Change 00:13:35.500 Unknown (7Ah): Supported 00:13:35.500 00:13:35.500 Error Log 00:13:35.500 ========= 00:13:35.500 00:13:35.500 Arbitration 00:13:35.500 =========== 
00:13:35.500 Arbitration Burst: 1 00:13:35.500 00:13:35.500 Power Management 00:13:35.500 ================ 00:13:35.500 Number of Power States: 1 00:13:35.500 Current Power State: Power State #0 00:13:35.500 Power State #0: 00:13:35.500 Max Power: 0.00 W 00:13:35.501 Non-Operational State: Operational 00:13:35.501 Entry Latency: Not Reported 00:13:35.501 Exit Latency: Not Reported 00:13:35.501 Relative Read Throughput: 0 00:13:35.501 Relative Read Latency: 0 00:13:35.501 Relative Write Throughput: 0 00:13:35.501 Relative Write Latency: 0 00:13:35.501 Idle Power: Not Reported 00:13:35.501 Active Power: Not Reported 00:13:35.501 Non-Operational Permissive Mode: Not Supported 00:13:35.501 00:13:35.501 Health Information 00:13:35.501 ================== 00:13:35.501 Critical Warnings: 00:13:35.501 Available Spare Space: OK 00:13:35.501 Temperature: OK 00:13:35.501 Device Reliability: OK 00:13:35.501 Read Only: No 00:13:35.501 Volatile Memory Backup: OK 00:13:35.501 [2024-06-07 16:22:02.084548] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:13:35.501 [2024-06-07 16:22:02.092407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:13:35.501 [2024-06-07 16:22:02.092433] nvme_ctrlr.c:4222:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD 00:13:35.501 [2024-06-07 16:22:02.092442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:35.501 [2024-06-07 16:22:02.092449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:35.501 [2024-06-07 16:22:02.092455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:13:35.501 [2024-06-07 16:22:02.092461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:35.501 [2024-06-07 16:22:02.092504] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:13:35.501 [2024-06-07 16:22:02.092514] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:13:35.501 [2024-06-07 16:22:02.093504] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:35.501 [2024-06-07 16:22:02.093552] nvme_ctrlr.c:1083:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us 00:13:35.501 [2024-06-07 16:22:02.093559] nvme_ctrlr.c:1086:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms 00:13:35.501 [2024-06-07 16:22:02.094511] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:13:35.501 [2024-06-07 16:22:02.094522] nvme_ctrlr.c:1205:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds 00:13:35.501 [2024-06-07 16:22:02.094568] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:13:35.501 [2024-06-07 16:22:02.095951] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:13:35.501 Current Temperature: 0 Kelvin (-273 Celsius) 00:13:35.501 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:13:35.501 Available Spare: 0% 00:13:35.501 Available Spare Threshold: 0% 00:13:35.501 Life Percentage Used: 0% 00:13:35.501 Data Units Read: 0 00:13:35.501 Data Units Written: 0 00:13:35.501 Host Read Commands: 0 00:13:35.501 Host Write 
Commands: 0 00:13:35.501 Controller Busy Time: 0 minutes 00:13:35.501 Power Cycles: 0 00:13:35.501 Power On Hours: 0 hours 00:13:35.501 Unsafe Shutdowns: 0 00:13:35.501 Unrecoverable Media Errors: 0 00:13:35.501 Lifetime Error Log Entries: 0 00:13:35.501 Warning Temperature Time: 0 minutes 00:13:35.501 Critical Temperature Time: 0 minutes 00:13:35.501 00:13:35.501 Number of Queues 00:13:35.501 ================ 00:13:35.501 Number of I/O Submission Queues: 127 00:13:35.501 Number of I/O Completion Queues: 127 00:13:35.501 00:13:35.501 Active Namespaces 00:13:35.501 ================= 00:13:35.501 Namespace ID:1 00:13:35.501 Error Recovery Timeout: Unlimited 00:13:35.501 Command Set Identifier: NVM (00h) 00:13:35.501 Deallocate: Supported 00:13:35.501 Deallocated/Unwritten Error: Not Supported 00:13:35.501 Deallocated Read Value: Unknown 00:13:35.501 Deallocate in Write Zeroes: Not Supported 00:13:35.501 Deallocated Guard Field: 0xFFFF 00:13:35.501 Flush: Supported 00:13:35.501 Reservation: Supported 00:13:35.501 Namespace Sharing Capabilities: Multiple Controllers 00:13:35.501 Size (in LBAs): 131072 (0GiB) 00:13:35.501 Capacity (in LBAs): 131072 (0GiB) 00:13:35.501 Utilization (in LBAs): 131072 (0GiB) 00:13:35.501 NGUID: 74B23B3C2DA34EEDB46010F15076022C 00:13:35.501 UUID: 74b23b3c-2da3-4eed-b460-10f15076022c 00:13:35.501 Thin Provisioning: Not Supported 00:13:35.501 Per-NS Atomic Units: Yes 00:13:35.501 Atomic Boundary Size (Normal): 0 00:13:35.501 Atomic Boundary Size (PFail): 0 00:13:35.501 Atomic Boundary Offset: 0 00:13:35.501 Maximum Single Source Range Length: 65535 00:13:35.501 Maximum Copy Length: 65535 00:13:35.501 Maximum Source Range Count: 1 00:13:35.501 NGUID/EUI64 Never Reused: No 00:13:35.501 Namespace Write Protected: No 00:13:35.501 Number of LBA Formats: 1 00:13:35.501 Current LBA Format: LBA Format #00 00:13:35.501 LBA Format #00: Data Size: 512 Metadata Size: 0 00:13:35.501 00:13:35.501 16:22:02 nvmf_tcp.nvmf_vfio_user -- 
target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:13:35.501 EAL: No free 2048 kB hugepages reported on node 1 00:13:35.501 [2024-06-07 16:22:02.276471] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:40.796 Initializing NVMe Controllers 00:13:40.796 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:13:40.796 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:13:40.796 Initialization complete. Launching workers. 00:13:40.796 ======================================================== 00:13:40.796 Latency(us) 00:13:40.796 Device Information : IOPS MiB/s Average min max 00:13:40.796 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39992.16 156.22 3200.49 831.86 10806.89 00:13:40.796 ======================================================== 00:13:40.796 Total : 39992.16 156.22 3200.49 831.86 10806.89 00:13:40.796 00:13:40.796 [2024-06-07 16:22:07.385603] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:40.796 16:22:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:13:40.796 EAL: No free 2048 kB hugepages reported on node 1 00:13:40.796 [2024-06-07 16:22:07.557175] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:46.093 Initializing NVMe Controllers 00:13:46.093 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: 
nqn.2019-07.io.spdk:cnode2 00:13:46.093 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:13:46.093 Initialization complete. Launching workers. 00:13:46.093 ======================================================== 00:13:46.093 Latency(us) 00:13:46.093 Device Information : IOPS MiB/s Average min max 00:13:46.093 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 35996.36 140.61 3555.81 1100.30 10610.60 00:13:46.093 ======================================================== 00:13:46.093 Total : 35996.36 140.61 3555.81 1100.30 10610.60 00:13:46.093 00:13:46.093 [2024-06-07 16:22:12.574379] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:46.093 16:22:12 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:13:46.093 EAL: No free 2048 kB hugepages reported on node 1 00:13:46.093 [2024-06-07 16:22:12.767539] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:51.395 [2024-06-07 16:22:17.894485] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:51.395 Initializing NVMe Controllers 00:13:51.395 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:13:51.395 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:13:51.395 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:13:51.395 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:13:51.395 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:13:51.395 Initialization complete. 
Launching workers. 00:13:51.395 Starting thread on core 2 00:13:51.395 Starting thread on core 3 00:13:51.395 Starting thread on core 1 00:13:51.395 16:22:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:13:51.395 EAL: No free 2048 kB hugepages reported on node 1 00:13:51.395 [2024-06-07 16:22:18.147870] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:54.700 [2024-06-07 16:22:21.208058] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:54.700 Initializing NVMe Controllers 00:13:54.700 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:13:54.700 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:13:54.700 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:13:54.700 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:13:54.700 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:13:54.700 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:13:54.700 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:13:54.700 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:13:54.700 Initialization complete. Launching workers. 
00:13:54.700 Starting thread on core 1 with urgent priority queue 00:13:54.700 Starting thread on core 2 with urgent priority queue 00:13:54.700 Starting thread on core 3 with urgent priority queue 00:13:54.700 Starting thread on core 0 with urgent priority queue 00:13:54.700 SPDK bdev Controller (SPDK2 ) core 0: 11089.67 IO/s 9.02 secs/100000 ios 00:13:54.700 SPDK bdev Controller (SPDK2 ) core 1: 17799.67 IO/s 5.62 secs/100000 ios 00:13:54.700 SPDK bdev Controller (SPDK2 ) core 2: 14523.33 IO/s 6.89 secs/100000 ios 00:13:54.700 SPDK bdev Controller (SPDK2 ) core 3: 8027.67 IO/s 12.46 secs/100000 ios 00:13:54.700 ======================================================== 00:13:54.700 00:13:54.700 16:22:21 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:13:54.700 EAL: No free 2048 kB hugepages reported on node 1 00:13:54.700 [2024-06-07 16:22:21.468860] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:54.700 Initializing NVMe Controllers 00:13:54.700 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:13:54.700 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:13:54.700 Namespace ID: 1 size: 0GB 00:13:54.700 Initialization complete. 00:13:54.700 INFO: using host memory buffer for IO 00:13:54.700 Hello world! 
00:13:54.700 [2024-06-07 16:22:21.478938] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:54.700 16:22:21 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:13:54.961 EAL: No free 2048 kB hugepages reported on node 1 00:13:54.961 [2024-06-07 16:22:21.735671] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:56.349 Initializing NVMe Controllers 00:13:56.349 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:13:56.349 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:13:56.349 Initialization complete. Launching workers. 00:13:56.349 submit (in ns) avg, min, max = 8143.6, 3942.5, 5995008.3 00:13:56.349 complete (in ns) avg, min, max = 17703.9, 2391.7, 6990974.2 00:13:56.349 00:13:56.349 Submit histogram 00:13:56.349 ================ 00:13:56.349 Range in us Cumulative Count 00:13:56.349 3.920 - 3.947: 0.0675% ( 13) 00:13:56.349 3.947 - 3.973: 2.5286% ( 474) 00:13:56.349 3.973 - 4.000: 9.9740% ( 1434) 00:13:56.349 4.000 - 4.027: 19.3769% ( 1811) 00:13:56.349 4.027 - 4.053: 30.2285% ( 2090) 00:13:56.349 4.053 - 4.080: 41.2617% ( 2125) 00:13:56.349 4.080 - 4.107: 54.5898% ( 2567) 00:13:56.349 4.107 - 4.133: 70.7632% ( 3115) 00:13:56.349 4.133 - 4.160: 85.3271% ( 2805) 00:13:56.349 4.160 - 4.187: 94.5431% ( 1775) 00:13:56.349 4.187 - 4.213: 98.2970% ( 723) 00:13:56.349 4.213 - 4.240: 99.2575% ( 185) 00:13:56.349 4.240 - 4.267: 99.4496% ( 37) 00:13:56.349 4.267 - 4.293: 99.5016% ( 10) 00:13:56.349 4.293 - 4.320: 99.5223% ( 4) 00:13:56.349 4.640 - 4.667: 99.5275% ( 1) 00:13:56.349 4.720 - 4.747: 99.5327% ( 1) 00:13:56.349 4.907 - 4.933: 99.5379% ( 1) 00:13:56.349 5.227 - 5.253: 99.5431% ( 1) 00:13:56.349 5.387 - 5.413: 99.5483% ( 1) 
00:13:56.349 5.413 - 5.440: 99.5535% ( 1) 00:13:56.349 5.520 - 5.547: 99.5587% ( 1) 00:13:56.349 5.573 - 5.600: 99.5639% ( 1) 00:13:56.349 5.733 - 5.760: 99.5691% ( 1) 00:13:56.349 5.813 - 5.840: 99.5742% ( 1) 00:13:56.349 5.867 - 5.893: 99.5794% ( 1) 00:13:56.349 6.000 - 6.027: 99.5898% ( 2) 00:13:56.349 6.053 - 6.080: 99.6002% ( 2) 00:13:56.349 6.080 - 6.107: 99.6106% ( 2) 00:13:56.349 6.107 - 6.133: 99.6262% ( 3) 00:13:56.349 6.133 - 6.160: 99.6417% ( 3) 00:13:56.349 6.160 - 6.187: 99.6469% ( 1) 00:13:56.349 6.187 - 6.213: 99.6573% ( 2) 00:13:56.349 6.213 - 6.240: 99.6625% ( 1) 00:13:56.349 6.293 - 6.320: 99.6677% ( 1) 00:13:56.349 6.347 - 6.373: 99.6729% ( 1) 00:13:56.349 6.373 - 6.400: 99.6937% ( 4) 00:13:56.349 6.400 - 6.427: 99.7092% ( 3) 00:13:56.349 6.427 - 6.453: 99.7196% ( 2) 00:13:56.349 6.453 - 6.480: 99.7404% ( 4) 00:13:56.349 6.480 - 6.507: 99.7508% ( 2) 00:13:56.349 6.560 - 6.587: 99.7560% ( 1) 00:13:56.349 6.613 - 6.640: 99.7664% ( 2) 00:13:56.349 6.640 - 6.667: 99.7767% ( 2) 00:13:56.349 6.720 - 6.747: 99.7819% ( 1) 00:13:56.349 6.747 - 6.773: 99.7871% ( 1) 00:13:56.349 6.773 - 6.800: 99.7923% ( 1) 00:13:56.349 6.827 - 6.880: 99.8027% ( 2) 00:13:56.349 6.933 - 6.987: 99.8131% ( 2) 00:13:56.349 7.040 - 7.093: 99.8235% ( 2) 00:13:56.349 7.147 - 7.200: 99.8287% ( 1) 00:13:56.349 7.200 - 7.253: 99.8442% ( 3) 00:13:56.349 7.253 - 7.307: 99.8494% ( 1) 00:13:56.349 7.307 - 7.360: 99.8598% ( 2) 00:13:56.349 7.360 - 7.413: 99.8650% ( 1) 00:13:56.349 7.413 - 7.467: 99.8702% ( 1) 00:13:56.349 7.467 - 7.520: 99.8754% ( 1) 00:13:56.349 7.520 - 7.573: 99.8806% ( 1) 00:13:56.349 7.573 - 7.627: 99.8858% ( 1) 00:13:56.349 7.680 - 7.733: 99.8910% ( 1) 00:13:56.349 8.267 - 8.320: 99.8962% ( 1) 00:13:56.349 10.453 - 10.507: 99.9013% ( 1) 00:13:56.349 3986.773 - 4014.080: 99.9948% ( 18) 00:13:56.349 5980.160 - 6007.467: 100.0000% ( 1) 00:13:56.349 00:13:56.349 Complete histogram 00:13:56.349 ================== 00:13:56.349 Range in us Cumulative Count 00:13:56.349 
2.387 - 2.400: 0.0156% ( 3) 00:13:56.349 2.400 - 2.413: 1.1890% ( 226) 00:13:56.349 2.413 - 2.427: 1.4746% ( 55) 00:13:56.349 2.427 - 2.440: 1.5992% ( 24) 00:13:56.349 2.440 - 2.453: 34.7871% ( 6392) 00:13:56.349 2.453 - 2.467: 66.2617% ( 6062) 00:13:56.349 2.467 - 2.480: 71.4538% ( 1000) 00:13:56.349 2.480 - 2.493: 78.2606% ( 1311) 00:13:56.349 2.493 - 2.507: 80.8930% ( 507) 00:13:56.349 2.507 - 2.520: 83.3229% ( 468) 00:13:56.349 2.520 - 2.533: 89.1900% ( 1130) 00:13:56.349 2.533 - 2.547: 95.1610% ( 1150) 00:13:56.349 2.547 - 2.560: 97.4143% ( 434) 00:13:56.349 2.560 - [2024-06-07 16:22:22.831064] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:56.349 2.573: 98.5877% ( 226) 00:13:56.349 2.573 - 2.587: 99.1070% ( 100) 00:13:56.349 2.587 - 2.600: 99.3302% ( 43) 00:13:56.349 2.600 - 2.613: 99.3718% ( 8) 00:13:56.349 2.613 - 2.627: 99.3769% ( 1) 00:13:56.349 4.347 - 4.373: 99.3821% ( 1) 00:13:56.349 4.533 - 4.560: 99.3977% ( 3) 00:13:56.349 4.587 - 4.613: 99.4029% ( 1) 00:13:56.349 4.640 - 4.667: 99.4081% ( 1) 00:13:56.349 4.693 - 4.720: 99.4185% ( 2) 00:13:56.349 4.747 - 4.773: 99.4237% ( 1) 00:13:56.349 4.773 - 4.800: 99.4341% ( 2) 00:13:56.349 4.800 - 4.827: 99.4444% ( 2) 00:13:56.349 4.827 - 4.853: 99.4496% ( 1) 00:13:56.349 4.853 - 4.880: 99.4548% ( 1) 00:13:56.349 4.880 - 4.907: 99.4652% ( 2) 00:13:56.349 4.933 - 4.960: 99.4756% ( 2) 00:13:56.349 5.013 - 5.040: 99.4808% ( 1) 00:13:56.349 5.040 - 5.067: 99.4912% ( 2) 00:13:56.349 5.067 - 5.093: 99.4964% ( 1) 00:13:56.349 5.093 - 5.120: 99.5016% ( 1) 00:13:56.349 5.173 - 5.200: 99.5067% ( 1) 00:13:56.349 5.280 - 5.307: 99.5119% ( 1) 00:13:56.349 5.307 - 5.333: 99.5171% ( 1) 00:13:56.349 5.387 - 5.413: 99.5275% ( 2) 00:13:56.349 5.467 - 5.493: 99.5327% ( 1) 00:13:56.349 5.573 - 5.600: 99.5379% ( 1) 00:13:56.349 5.653 - 5.680: 99.5431% ( 1) 00:13:56.349 5.787 - 5.813: 99.5483% ( 1) 00:13:56.349 5.867 - 5.893: 99.5535% ( 1) 00:13:56.349 5.920 - 5.947: 
99.5587% ( 1) 00:13:56.349 6.133 - 6.160: 99.5639% ( 1) 00:13:56.349 6.667 - 6.693: 99.5691% ( 1) 00:13:56.349 6.827 - 6.880: 99.5742% ( 1) 00:13:56.349 7.360 - 7.413: 99.5794% ( 1) 00:13:56.349 10.293 - 10.347: 99.5846% ( 1) 00:13:56.349 10.507 - 10.560: 99.5898% ( 1) 00:13:56.349 11.040 - 11.093: 99.5950% ( 1) 00:13:56.349 13.867 - 13.973: 99.6002% ( 1) 00:13:56.349 14.507 - 14.613: 99.6054% ( 1) 00:13:56.349 44.373 - 44.587: 99.6106% ( 1) 00:13:56.349 167.253 - 168.107: 99.6158% ( 1) 00:13:56.349 315.733 - 317.440: 99.6210% ( 1) 00:13:56.349 2088.960 - 2102.613: 99.6262% ( 1) 00:13:56.349 3986.773 - 4014.080: 99.9948% ( 71) 00:13:56.349 6990.507 - 7045.120: 100.0000% ( 1) 00:13:56.349 00:13:56.349 16:22:22 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:13:56.349 16:22:22 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:13:56.349 16:22:22 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:13:56.349 16:22:22 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:13:56.349 16:22:22 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:13:56.349 [ 00:13:56.349 { 00:13:56.349 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:13:56.349 "subtype": "Discovery", 00:13:56.349 "listen_addresses": [], 00:13:56.349 "allow_any_host": true, 00:13:56.349 "hosts": [] 00:13:56.349 }, 00:13:56.349 { 00:13:56.349 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:13:56.349 "subtype": "NVMe", 00:13:56.349 "listen_addresses": [ 00:13:56.349 { 00:13:56.349 "trtype": "VFIOUSER", 00:13:56.349 "adrfam": "IPv4", 00:13:56.349 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:13:56.349 "trsvcid": "0" 00:13:56.349 } 00:13:56.349 ], 00:13:56.349 
"allow_any_host": true, 00:13:56.349 "hosts": [], 00:13:56.349 "serial_number": "SPDK1", 00:13:56.349 "model_number": "SPDK bdev Controller", 00:13:56.349 "max_namespaces": 32, 00:13:56.349 "min_cntlid": 1, 00:13:56.349 "max_cntlid": 65519, 00:13:56.349 "namespaces": [ 00:13:56.349 { 00:13:56.349 "nsid": 1, 00:13:56.350 "bdev_name": "Malloc1", 00:13:56.350 "name": "Malloc1", 00:13:56.350 "nguid": "09FDA2C391334593A13A3D525C86D77E", 00:13:56.350 "uuid": "09fda2c3-9133-4593-a13a-3d525c86d77e" 00:13:56.350 }, 00:13:56.350 { 00:13:56.350 "nsid": 2, 00:13:56.350 "bdev_name": "Malloc3", 00:13:56.350 "name": "Malloc3", 00:13:56.350 "nguid": "E5FA17298697433EB8642D1E095A2C9B", 00:13:56.350 "uuid": "e5fa1729-8697-433e-b864-2d1e095a2c9b" 00:13:56.350 } 00:13:56.350 ] 00:13:56.350 }, 00:13:56.350 { 00:13:56.350 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:13:56.350 "subtype": "NVMe", 00:13:56.350 "listen_addresses": [ 00:13:56.350 { 00:13:56.350 "trtype": "VFIOUSER", 00:13:56.350 "adrfam": "IPv4", 00:13:56.350 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:13:56.350 "trsvcid": "0" 00:13:56.350 } 00:13:56.350 ], 00:13:56.350 "allow_any_host": true, 00:13:56.350 "hosts": [], 00:13:56.350 "serial_number": "SPDK2", 00:13:56.350 "model_number": "SPDK bdev Controller", 00:13:56.350 "max_namespaces": 32, 00:13:56.350 "min_cntlid": 1, 00:13:56.350 "max_cntlid": 65519, 00:13:56.350 "namespaces": [ 00:13:56.350 { 00:13:56.350 "nsid": 1, 00:13:56.350 "bdev_name": "Malloc2", 00:13:56.350 "name": "Malloc2", 00:13:56.350 "nguid": "74B23B3C2DA34EEDB46010F15076022C", 00:13:56.350 "uuid": "74b23b3c-2da3-4eed-b460-10f15076022c" 00:13:56.350 } 00:13:56.350 ] 00:13:56.350 } 00:13:56.350 ] 00:13:56.350 16:22:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:13:56.350 16:22:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=3014483 00:13:56.350 16:22:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile 
/tmp/aer_touch_file 00:13:56.350 16:22:23 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1264 -- # local i=0 00:13:56.350 16:22:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:13:56.350 16:22:23 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:13:56.350 16:22:23 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1271 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:13:56.350 16:22:23 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1275 -- # return 0 00:13:56.350 16:22:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:13:56.350 16:22:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:13:56.350 EAL: No free 2048 kB hugepages reported on node 1 00:13:56.609 Malloc4 00:13:56.609 [2024-06-07 16:22:23.215802] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:56.609 16:22:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:13:56.609 [2024-06-07 16:22:23.389023] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:56.609 16:22:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:13:56.609 Asynchronous Event Request test 00:13:56.609 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:13:56.609 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:13:56.610 Registering asynchronous event 
callbacks... 00:13:56.610 Starting namespace attribute notice tests for all controllers... 00:13:56.610 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:13:56.610 aer_cb - Changed Namespace 00:13:56.610 Cleaning up... 00:13:56.871 [ 00:13:56.871 { 00:13:56.871 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:13:56.871 "subtype": "Discovery", 00:13:56.871 "listen_addresses": [], 00:13:56.871 "allow_any_host": true, 00:13:56.871 "hosts": [] 00:13:56.871 }, 00:13:56.871 { 00:13:56.871 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:13:56.871 "subtype": "NVMe", 00:13:56.871 "listen_addresses": [ 00:13:56.871 { 00:13:56.871 "trtype": "VFIOUSER", 00:13:56.871 "adrfam": "IPv4", 00:13:56.871 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:13:56.871 "trsvcid": "0" 00:13:56.871 } 00:13:56.871 ], 00:13:56.871 "allow_any_host": true, 00:13:56.871 "hosts": [], 00:13:56.871 "serial_number": "SPDK1", 00:13:56.871 "model_number": "SPDK bdev Controller", 00:13:56.871 "max_namespaces": 32, 00:13:56.871 "min_cntlid": 1, 00:13:56.871 "max_cntlid": 65519, 00:13:56.871 "namespaces": [ 00:13:56.871 { 00:13:56.871 "nsid": 1, 00:13:56.871 "bdev_name": "Malloc1", 00:13:56.871 "name": "Malloc1", 00:13:56.871 "nguid": "09FDA2C391334593A13A3D525C86D77E", 00:13:56.871 "uuid": "09fda2c3-9133-4593-a13a-3d525c86d77e" 00:13:56.871 }, 00:13:56.871 { 00:13:56.871 "nsid": 2, 00:13:56.871 "bdev_name": "Malloc3", 00:13:56.871 "name": "Malloc3", 00:13:56.871 "nguid": "E5FA17298697433EB8642D1E095A2C9B", 00:13:56.871 "uuid": "e5fa1729-8697-433e-b864-2d1e095a2c9b" 00:13:56.871 } 00:13:56.871 ] 00:13:56.871 }, 00:13:56.871 { 00:13:56.871 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:13:56.871 "subtype": "NVMe", 00:13:56.871 "listen_addresses": [ 00:13:56.871 { 00:13:56.871 "trtype": "VFIOUSER", 00:13:56.871 "adrfam": "IPv4", 00:13:56.871 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:13:56.871 "trsvcid": "0" 00:13:56.871 } 00:13:56.871 ], 
00:13:56.871 "allow_any_host": true, 00:13:56.871 "hosts": [], 00:13:56.871 "serial_number": "SPDK2", 00:13:56.871 "model_number": "SPDK bdev Controller", 00:13:56.871 "max_namespaces": 32, 00:13:56.871 "min_cntlid": 1, 00:13:56.871 "max_cntlid": 65519, 00:13:56.871 "namespaces": [ 00:13:56.871 { 00:13:56.871 "nsid": 1, 00:13:56.871 "bdev_name": "Malloc2", 00:13:56.871 "name": "Malloc2", 00:13:56.871 "nguid": "74B23B3C2DA34EEDB46010F15076022C", 00:13:56.871 "uuid": "74b23b3c-2da3-4eed-b460-10f15076022c" 00:13:56.871 }, 00:13:56.871 { 00:13:56.871 "nsid": 2, 00:13:56.871 "bdev_name": "Malloc4", 00:13:56.871 "name": "Malloc4", 00:13:56.871 "nguid": "61F201994ED847DF85B71DBD3F27C378", 00:13:56.871 "uuid": "61f20199-4ed8-47df-85b7-1dbd3f27c378" 00:13:56.871 } 00:13:56.871 ] 00:13:56.871 } 00:13:56.871 ] 00:13:56.871 16:22:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 3014483 00:13:56.871 16:22:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:13:56.871 16:22:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 3005469 00:13:56.871 16:22:23 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@949 -- # '[' -z 3005469 ']' 00:13:56.871 16:22:23 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # kill -0 3005469 00:13:56.871 16:22:23 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # uname 00:13:56.871 16:22:23 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:13:56.871 16:22:23 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3005469 00:13:56.871 16:22:23 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:13:56.871 16:22:23 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:13:56.871 16:22:23 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3005469' 00:13:56.871 killing process 
with pid 3005469 00:13:56.871 16:22:23 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@968 -- # kill 3005469 00:13:56.871 16:22:23 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@973 -- # wait 3005469 00:13:57.132 16:22:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:13:57.132 16:22:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:13:57.132 16:22:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:13:57.132 16:22:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:13:57.132 16:22:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:13:57.132 16:22:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=3014793 00:13:57.132 16:22:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 3014793' 00:13:57.132 Process pid: 3014793 00:13:57.132 16:22:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:13:57.132 16:22:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:13:57.132 16:22:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 3014793 00:13:57.132 16:22:23 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@830 -- # '[' -z 3014793 ']' 00:13:57.132 16:22:23 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:57.132 16:22:23 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@835 -- # local max_retries=100 00:13:57.132 16:22:23 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:13:57.132 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:57.132 16:22:23 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@839 -- # xtrace_disable 00:13:57.132 16:22:23 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:13:57.132 [2024-06-07 16:22:23.871210] thread.c:2937:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:13:57.132 [2024-06-07 16:22:23.872114] Starting SPDK v24.09-pre git sha1 5a57befde / DPDK 24.03.0 initialization... 00:13:57.132 [2024-06-07 16:22:23.872154] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:57.132 EAL: No free 2048 kB hugepages reported on node 1 00:13:57.132 [2024-06-07 16:22:23.934736] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:57.394 [2024-06-07 16:22:23.999090] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:57.394 [2024-06-07 16:22:23.999130] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:57.394 [2024-06-07 16:22:23.999139] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:57.394 [2024-06-07 16:22:23.999145] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:57.394 [2024-06-07 16:22:23.999150] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:57.394 [2024-06-07 16:22:23.999276] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:13:57.394 [2024-06-07 16:22:23.999414] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 2 00:13:57.394 [2024-06-07 16:22:23.999580] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 3 00:13:57.394 [2024-06-07 16:22:23.999673] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:13:57.394 [2024-06-07 16:22:24.062565] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:13:57.394 [2024-06-07 16:22:24.062576] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:13:57.394 [2024-06-07 16:22:24.063690] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:13:57.394 [2024-06-07 16:22:24.064130] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:13:57.394 [2024-06-07 16:22:24.064223] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:13:57.967 16:22:24 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:13:57.967 16:22:24 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@863 -- # return 0 00:13:57.967 16:22:24 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:13:58.911 16:22:25 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:13:59.172 16:22:25 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:13:59.172 16:22:25 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:13:59.172 16:22:25 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:13:59.172 16:22:25 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:13:59.172 16:22:25 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:13:59.172 Malloc1 00:13:59.172 16:22:25 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:13:59.469 16:22:26 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:13:59.730 16:22:26 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:13:59.731 16:22:26 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:13:59.731 16:22:26 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p 
/var/run/vfio-user/domain/vfio-user2/2 00:13:59.731 16:22:26 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:13:59.991 Malloc2 00:13:59.991 16:22:26 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:13:59.991 16:22:26 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:14:00.252 16:22:26 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:14:00.513 16:22:27 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:14:00.513 16:22:27 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 3014793 00:14:00.513 16:22:27 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@949 -- # '[' -z 3014793 ']' 00:14:00.513 16:22:27 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # kill -0 3014793 00:14:00.513 16:22:27 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # uname 00:14:00.513 16:22:27 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:14:00.513 16:22:27 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3014793 00:14:00.513 16:22:27 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:14:00.513 16:22:27 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:14:00.513 16:22:27 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3014793' 00:14:00.513 killing 
process with pid 3014793 00:14:00.513 16:22:27 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@968 -- # kill 3014793 00:14:00.513 16:22:27 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@973 -- # wait 3014793 00:14:00.775 16:22:27 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:14:00.775 16:22:27 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:14:00.775 00:14:00.775 real 0m50.773s 00:14:00.775 user 3m21.256s 00:14:00.775 sys 0m3.013s 00:14:00.775 16:22:27 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1125 -- # xtrace_disable 00:14:00.775 16:22:27 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:14:00.775 ************************************ 00:14:00.775 END TEST nvmf_vfio_user 00:14:00.775 ************************************ 00:14:00.775 16:22:27 nvmf_tcp -- nvmf/nvmf.sh@42 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:14:00.775 16:22:27 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:14:00.775 16:22:27 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:14:00.775 16:22:27 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:00.775 ************************************ 00:14:00.775 START TEST nvmf_vfio_user_nvme_compliance 00:14:00.775 ************************************ 00:14:00.775 16:22:27 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:14:00.775 * Looking for test storage... 
00:14:00.775 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:14:00.775 16:22:27 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:00.775 16:22:27 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:14:00.775 16:22:27 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:00.775 16:22:27 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:00.775 16:22:27 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:00.775 16:22:27 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:00.775 16:22:27 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:00.775 16:22:27 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:00.775 16:22:27 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:00.775 16:22:27 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:00.775 16:22:27 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:00.775 16:22:27 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:00.775 16:22:27 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:00.775 16:22:27 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:00.775 16:22:27 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:00.775 16:22:27 
nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:00.775 16:22:27 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:00.775 16:22:27 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:00.775 16:22:27 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:00.775 16:22:27 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:00.775 16:22:27 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:00.775 16:22:27 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:00.775 16:22:27 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:00.775 16:22:27 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:00.775 16:22:27 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:00.775 16:22:27 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:14:00.775 16:22:27 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:00.775 16:22:27 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@47 -- # : 0 00:14:00.775 16:22:27 
nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:00.775 16:22:27 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:00.775 16:22:27 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:00.775 16:22:27 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:00.775 16:22:27 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:00.775 16:22:27 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:00.775 16:22:27 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:00.775 16:22:27 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:00.775 16:22:27 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:00.775 16:22:27 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:00.775 16:22:27 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:14:00.775 16:22:27 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:14:00.775 16:22:27 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:14:00.775 16:22:27 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=3015543 00:14:00.775 16:22:27 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 3015543' 00:14:00.775 Process pid: 3015543 00:14:00.775 16:22:27 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:14:00.775 16:22:27 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:14:00.775 16:22:27 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 3015543 00:14:00.776 16:22:27 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@830 -- # '[' -z 3015543 ']' 00:14:00.776 16:22:27 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:00.776 16:22:27 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@835 -- # local max_retries=100 00:14:00.776 16:22:27 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:00.776 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:00.776 16:22:27 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@839 -- # xtrace_disable 00:14:00.776 16:22:27 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:01.037 [2024-06-07 16:22:27.644839] Starting SPDK v24.09-pre git sha1 5a57befde / DPDK 24.03.0 initialization... 00:14:01.037 [2024-06-07 16:22:27.644913] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:01.037 EAL: No free 2048 kB hugepages reported on node 1 00:14:01.037 [2024-06-07 16:22:27.710142] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:01.037 [2024-06-07 16:22:27.783776] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:01.037 [2024-06-07 16:22:27.783815] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:14:01.037 [2024-06-07 16:22:27.783822] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:01.037 [2024-06-07 16:22:27.783828] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:01.037 [2024-06-07 16:22:27.783834] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:01.037 [2024-06-07 16:22:27.783974] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:14:01.037 [2024-06-07 16:22:27.784096] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 2 00:14:01.037 [2024-06-07 16:22:27.784099] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:14:01.609 16:22:28 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:14:01.609 16:22:28 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@863 -- # return 0 00:14:01.609 16:22:28 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:14:02.993 16:22:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:14:02.993 16:22:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:14:02.993 16:22:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:14:02.993 16:22:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:02.993 16:22:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:02.993 16:22:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:02.993 16:22:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:14:02.993 16:22:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # 
rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:14:02.993 16:22:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:02.993 16:22:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:02.993 malloc0 00:14:02.993 16:22:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:02.993 16:22:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:14:02.993 16:22:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:02.993 16:22:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:02.993 16:22:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:02.993 16:22:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:14:02.993 16:22:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:02.993 16:22:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:02.993 16:22:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:02.993 16:22:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:14:02.993 16:22:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:02.993 16:22:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:02.993 16:22:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:02.993 16:22:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance 
-- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:14:02.993 EAL: No free 2048 kB hugepages reported on node 1 00:14:02.993 00:14:02.993 00:14:02.993 CUnit - A unit testing framework for C - Version 2.1-3 00:14:02.993 http://cunit.sourceforge.net/ 00:14:02.993 00:14:02.993 00:14:02.993 Suite: nvme_compliance 00:14:02.993 Test: admin_identify_ctrlr_verify_dptr ...[2024-06-07 16:22:29.661658] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:02.993 [2024-06-07 16:22:29.662967] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:14:02.993 [2024-06-07 16:22:29.662977] vfio_user.c:5514:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:14:02.993 [2024-06-07 16:22:29.662982] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:14:02.993 [2024-06-07 16:22:29.664677] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:02.994 passed 00:14:02.994 Test: admin_identify_ctrlr_verify_fused ...[2024-06-07 16:22:29.760258] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:02.994 [2024-06-07 16:22:29.763283] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:02.994 passed 00:14:03.255 Test: admin_identify_ns ...[2024-06-07 16:22:29.858472] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:03.255 [2024-06-07 16:22:29.922421] ctrlr.c:2708:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:14:03.255 [2024-06-07 16:22:29.930419] ctrlr.c:2708:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:14:03.255 [2024-06-07 16:22:29.951527] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling 
controller 00:14:03.255 passed 00:14:03.255 Test: admin_get_features_mandatory_features ...[2024-06-07 16:22:30.042124] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:03.255 [2024-06-07 16:22:30.045143] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:03.255 passed 00:14:03.515 Test: admin_get_features_optional_features ...[2024-06-07 16:22:30.138711] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:03.515 [2024-06-07 16:22:30.141732] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:03.515 passed 00:14:03.515 Test: admin_set_features_number_of_queues ...[2024-06-07 16:22:30.233832] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:03.515 [2024-06-07 16:22:30.338509] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:03.776 passed 00:14:03.776 Test: admin_get_log_page_mandatory_logs ...[2024-06-07 16:22:30.432543] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:03.776 [2024-06-07 16:22:30.435558] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:03.776 passed 00:14:03.776 Test: admin_get_log_page_with_lpo ...[2024-06-07 16:22:30.528648] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:03.776 [2024-06-07 16:22:30.600414] ctrlr.c:2656:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:14:03.776 [2024-06-07 16:22:30.613470] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:04.036 passed 00:14:04.036 Test: fabric_property_get ...[2024-06-07 16:22:30.705078] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:04.036 [2024-06-07 16:22:30.706301] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 
0x7f failed 00:14:04.036 [2024-06-07 16:22:30.708095] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:04.036 passed 00:14:04.036 Test: admin_delete_io_sq_use_admin_qid ...[2024-06-07 16:22:30.800786] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:04.036 [2024-06-07 16:22:30.802036] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:14:04.036 [2024-06-07 16:22:30.803812] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:04.036 passed 00:14:04.297 Test: admin_delete_io_sq_delete_sq_twice ...[2024-06-07 16:22:30.897653] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:04.297 [2024-06-07 16:22:30.981410] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:14:04.297 [2024-06-07 16:22:30.997408] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:14:04.297 [2024-06-07 16:22:31.002498] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:04.297 passed 00:14:04.297 Test: admin_delete_io_cq_use_admin_qid ...[2024-06-07 16:22:31.094486] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:04.297 [2024-06-07 16:22:31.095711] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:14:04.297 [2024-06-07 16:22:31.097503] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:04.297 passed 00:14:04.557 Test: admin_delete_io_cq_delete_cq_first ...[2024-06-07 16:22:31.192633] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:04.557 [2024-06-07 16:22:31.268408] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:14:04.557 [2024-06-07 16:22:31.292409] 
vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:14:04.557 [2024-06-07 16:22:31.297503] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:04.557 passed 00:14:04.557 Test: admin_create_io_cq_verify_iv_pc ...[2024-06-07 16:22:31.389112] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:04.557 [2024-06-07 16:22:31.390326] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:14:04.557 [2024-06-07 16:22:31.390343] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:14:04.557 [2024-06-07 16:22:31.392132] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:04.818 passed 00:14:04.818 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-06-07 16:22:31.485304] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:04.818 [2024-06-07 16:22:31.575408] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:14:04.818 [2024-06-07 16:22:31.583407] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:14:04.818 [2024-06-07 16:22:31.591414] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:14:04.818 [2024-06-07 16:22:31.599407] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:14:04.818 [2024-06-07 16:22:31.628502] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:04.818 passed 00:14:05.079 Test: admin_create_io_sq_verify_pc ...[2024-06-07 16:22:31.722481] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:05.079 [2024-06-07 16:22:31.738415] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:14:05.079 [2024-06-07 16:22:31.755630] vfio_user.c:2798:disable_ctrlr: 
*NOTICE*: /var/run/vfio-user: disabling controller 00:14:05.079 passed 00:14:05.079 Test: admin_create_io_qp_max_qps ...[2024-06-07 16:22:31.850187] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:06.464 [2024-06-07 16:22:32.938412] nvme_ctrlr.c:5330:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs 00:14:06.724 [2024-06-07 16:22:33.322051] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:06.724 passed 00:14:06.724 Test: admin_create_io_sq_shared_cq ...[2024-06-07 16:22:33.415243] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:06.724 [2024-06-07 16:22:33.545409] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:14:06.985 [2024-06-07 16:22:33.582466] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:06.985 passed 00:14:06.985 00:14:06.985 Run Summary: Type Total Ran Passed Failed Inactive 00:14:06.985 suites 1 1 n/a 0 0 00:14:06.985 tests 18 18 18 0 0 00:14:06.985 asserts 360 360 360 0 n/a 00:14:06.985 00:14:06.985 Elapsed time = 1.643 seconds 00:14:06.985 16:22:33 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 3015543 00:14:06.985 16:22:33 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@949 -- # '[' -z 3015543 ']' 00:14:06.985 16:22:33 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@953 -- # kill -0 3015543 00:14:06.985 16:22:33 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # uname 00:14:06.985 16:22:33 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:14:06.985 16:22:33 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3015543 00:14:06.985 16:22:33 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- 
common/autotest_common.sh@955 -- # process_name=reactor_0 00:14:06.985 16:22:33 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:14:06.985 16:22:33 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3015543' 00:14:06.985 killing process with pid 3015543 00:14:06.985 16:22:33 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@968 -- # kill 3015543 00:14:06.985 16:22:33 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@973 -- # wait 3015543 00:14:06.985 16:22:33 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:14:06.985 16:22:33 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:14:06.985 00:14:06.985 real 0m6.387s 00:14:06.985 user 0m18.256s 00:14:06.985 sys 0m0.456s 00:14:06.985 16:22:33 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1125 -- # xtrace_disable 00:14:06.985 16:22:33 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:06.985 ************************************ 00:14:06.985 END TEST nvmf_vfio_user_nvme_compliance 00:14:06.985 ************************************ 00:14:07.248 16:22:33 nvmf_tcp -- nvmf/nvmf.sh@43 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:14:07.248 16:22:33 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:14:07.248 16:22:33 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:14:07.248 16:22:33 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:07.248 ************************************ 00:14:07.248 START TEST nvmf_vfio_user_fuzz 00:14:07.248 ************************************ 00:14:07.248 16:22:33 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1124 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:14:07.248 * Looking for test storage... 00:14:07.248 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:07.248 16:22:33 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:07.248 16:22:33 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:14:07.248 16:22:33 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:07.248 16:22:33 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:07.248 16:22:33 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:07.248 16:22:33 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:07.248 16:22:33 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:07.248 16:22:33 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:07.248 16:22:33 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:07.248 16:22:33 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:07.248 16:22:33 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:07.248 16:22:33 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:07.248 16:22:33 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:07.248 16:22:33 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:07.248 16:22:33 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:07.248 16:22:33 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # 
NVME_CONNECT='nvme connect' 00:14:07.248 16:22:33 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:07.248 16:22:33 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:07.248 16:22:33 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:07.248 16:22:34 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:07.248 16:22:34 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:07.248 16:22:34 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:07.248 16:22:34 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:07.248 16:22:34 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:07.249 16:22:34 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@4 
-- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:07.249 16:22:34 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:14:07.249 16:22:34 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:07.249 16:22:34 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@47 -- # : 0 00:14:07.249 16:22:34 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:07.249 16:22:34 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:07.249 16:22:34 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:07.249 16:22:34 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:07.249 16:22:34 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:07.249 16:22:34 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:07.249 16:22:34 nvmf_tcp.nvmf_vfio_user_fuzz -- 
nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:07.249 16:22:34 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:07.249 16:22:34 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:14:07.249 16:22:34 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:14:07.249 16:22:34 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:14:07.249 16:22:34 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:14:07.249 16:22:34 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:14:07.249 16:22:34 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:14:07.249 16:22:34 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:14:07.249 16:22:34 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=3016934 00:14:07.249 16:22:34 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 3016934' 00:14:07.249 Process pid: 3016934 00:14:07.249 16:22:34 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:14:07.249 16:22:34 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:14:07.249 16:22:34 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 3016934 00:14:07.249 16:22:34 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@830 -- # '[' -z 3016934 ']' 00:14:07.249 16:22:34 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:07.249 16:22:34 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@835 -- # local max_retries=100 00:14:07.249 16:22:34 nvmf_tcp.nvmf_vfio_user_fuzz -- 
common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:07.249 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:07.249 16:22:34 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@839 -- # xtrace_disable 00:14:07.249 16:22:34 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:08.191 16:22:34 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:14:08.191 16:22:34 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@863 -- # return 0 00:14:08.191 16:22:34 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:14:09.133 16:22:35 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:14:09.133 16:22:35 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:09.133 16:22:35 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:09.133 16:22:35 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:09.133 16:22:35 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:14:09.133 16:22:35 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:14:09.133 16:22:35 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:09.133 16:22:35 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:09.133 malloc0 00:14:09.133 16:22:35 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:09.133 16:22:35 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:14:09.133 16:22:35 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:09.134 
16:22:35 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:09.134 16:22:35 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:09.134 16:22:35 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:14:09.134 16:22:35 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:09.134 16:22:35 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:09.134 16:22:35 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:09.134 16:22:35 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:14:09.134 16:22:35 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:09.134 16:22:35 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:09.134 16:22:35 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:09.134 16:22:35 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:14:09.134 16:22:35 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:14:41.267 Fuzzing completed. 
Shutting down the fuzz application 00:14:41.267 00:14:41.267 Dumping successful admin opcodes: 00:14:41.267 8, 9, 10, 24, 00:14:41.267 Dumping successful io opcodes: 00:14:41.267 0, 00:14:41.267 NS: 0x200003a1ef00 I/O qp, Total commands completed: 1134893, total successful commands: 4470, random_seed: 2567027584 00:14:41.267 NS: 0x200003a1ef00 admin qp, Total commands completed: 142684, total successful commands: 1160, random_seed: 1820581888 00:14:41.267 16:23:07 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:14:41.267 16:23:07 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:41.267 16:23:07 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:41.267 16:23:07 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:41.267 16:23:07 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 3016934 00:14:41.267 16:23:07 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@949 -- # '[' -z 3016934 ']' 00:14:41.267 16:23:07 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@953 -- # kill -0 3016934 00:14:41.267 16:23:07 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # uname 00:14:41.267 16:23:07 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:14:41.267 16:23:07 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3016934 00:14:41.267 16:23:07 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:14:41.267 16:23:07 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:14:41.267 16:23:07 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3016934' 00:14:41.267 killing process with pid 3016934 00:14:41.267 16:23:07 nvmf_tcp.nvmf_vfio_user_fuzz -- 
common/autotest_common.sh@968 -- # kill 3016934 00:14:41.267 16:23:07 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@973 -- # wait 3016934 00:14:41.267 16:23:07 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:14:41.267 16:23:07 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:14:41.267 00:14:41.267 real 0m33.657s 00:14:41.267 user 0m38.339s 00:14:41.267 sys 0m25.344s 00:14:41.267 16:23:07 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1125 -- # xtrace_disable 00:14:41.267 16:23:07 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:41.267 ************************************ 00:14:41.267 END TEST nvmf_vfio_user_fuzz 00:14:41.267 ************************************ 00:14:41.268 16:23:07 nvmf_tcp -- nvmf/nvmf.sh@47 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:14:41.268 16:23:07 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:14:41.268 16:23:07 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:14:41.268 16:23:07 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:41.268 ************************************ 00:14:41.268 START TEST nvmf_host_management 00:14:41.268 ************************************ 00:14:41.268 16:23:07 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:14:41.268 * Looking for test storage... 
00:14:41.268 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:41.268 16:23:07 nvmf_tcp.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:41.268 16:23:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:14:41.268 16:23:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:41.268 16:23:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:41.268 16:23:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:41.268 16:23:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:41.268 16:23:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:41.268 16:23:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:41.268 16:23:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:41.268 16:23:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:41.268 16:23:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:41.268 16:23:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:41.268 16:23:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:41.268 16:23:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:41.268 16:23:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:41.268 16:23:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:41.268 16:23:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:41.268 
16:23:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:41.268 16:23:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:41.268 16:23:07 nvmf_tcp.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:41.268 16:23:07 nvmf_tcp.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:41.268 16:23:07 nvmf_tcp.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:41.268 16:23:07 nvmf_tcp.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:41.268 16:23:07 nvmf_tcp.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:41.268 16:23:07 nvmf_tcp.nvmf_host_management -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:41.268 16:23:07 nvmf_tcp.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:14:41.268 16:23:07 nvmf_tcp.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:41.268 16:23:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:14:41.268 16:23:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:41.268 16:23:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:41.268 16:23:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:41.268 16:23:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:41.268 16:23:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:41.268 16:23:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:41.268 16:23:07 nvmf_tcp.nvmf_host_management 
-- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:41.268 16:23:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:41.268 16:23:07 nvmf_tcp.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:41.268 16:23:07 nvmf_tcp.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:41.268 16:23:07 nvmf_tcp.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:14:41.268 16:23:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:41.268 16:23:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:41.268 16:23:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:41.268 16:23:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:41.268 16:23:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:41.268 16:23:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:41.268 16:23:07 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:41.268 16:23:07 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:41.269 16:23:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:41.269 16:23:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:41.269 16:23:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@285 -- # xtrace_disable 00:14:41.269 16:23:07 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:47.900 16:23:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:47.900 16:23:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # pci_devs=() 00:14:47.900 16:23:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # 
local -a pci_devs 00:14:47.900 16:23:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:47.900 16:23:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:47.900 16:23:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:47.900 16:23:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:47.900 16:23:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # net_devs=() 00:14:47.900 16:23:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:47.900 16:23:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # e810=() 00:14:47.900 16:23:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # local -ga e810 00:14:47.900 16:23:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # x722=() 00:14:47.900 16:23:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # local -ga x722 00:14:47.900 16:23:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # mlx=() 00:14:47.900 16:23:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # local -ga mlx 00:14:47.900 16:23:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:47.900 16:23:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:47.900 16:23:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:47.900 16:23:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:47.900 16:23:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:47.900 16:23:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:47.900 16:23:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 
00:14:47.900 16:23:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:47.900 16:23:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:47.900 16:23:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:47.900 16:23:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:47.900 16:23:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:47.900 16:23:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:47.900 16:23:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:47.900 16:23:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:47.900 16:23:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:47.900 16:23:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:47.900 16:23:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:47.900 16:23:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:14:47.900 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:14:47.900 16:23:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:47.900 16:23:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:47.900 16:23:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:47.900 16:23:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:47.900 16:23:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:47.900 16:23:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:47.900 
16:23:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:14:47.900 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:14:47.900 16:23:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:47.900 16:23:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:47.900 16:23:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:47.900 16:23:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:47.900 16:23:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:47.900 16:23:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:47.900 16:23:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:47.900 16:23:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:47.900 16:23:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:47.900 16:23:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:47.900 16:23:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:47.900 16:23:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:47.900 16:23:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:47.900 16:23:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:47.900 16:23:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:47.901 16:23:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:14:47.901 Found net devices under 0000:4b:00.0: cvl_0_0 00:14:47.901 16:23:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:14:47.901 16:23:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:47.901 16:23:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:47.901 16:23:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:47.901 16:23:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:47.901 16:23:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:47.901 16:23:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:47.901 16:23:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:47.901 16:23:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:14:47.901 Found net devices under 0000:4b:00.1: cvl_0_1 00:14:47.901 16:23:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:47.901 16:23:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:47.901 16:23:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # is_hw=yes 00:14:47.901 16:23:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:47.901 16:23:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:47.901 16:23:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:47.901 16:23:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:47.901 16:23:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:47.901 16:23:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:47.901 16:23:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:47.901 16:23:14 
nvmf_tcp.nvmf_host_management -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:47.901 16:23:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:47.901 16:23:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:47.901 16:23:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:47.901 16:23:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:47.901 16:23:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:47.901 16:23:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:47.901 16:23:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:47.901 16:23:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:47.901 16:23:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:47.901 16:23:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:48.162 16:23:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:48.162 16:23:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:48.162 16:23:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:48.162 16:23:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:48.162 16:23:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:48.162 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:48.162 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.591 ms 00:14:48.162 00:14:48.162 --- 10.0.0.2 ping statistics --- 00:14:48.162 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:48.162 rtt min/avg/max/mdev = 0.591/0.591/0.591/0.000 ms 00:14:48.162 16:23:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:48.162 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:48.162 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.379 ms 00:14:48.162 00:14:48.162 --- 10.0.0.1 ping statistics --- 00:14:48.162 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:48.162 rtt min/avg/max/mdev = 0.379/0.379/0.379/0.000 ms 00:14:48.162 16:23:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:48.162 16:23:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@422 -- # return 0 00:14:48.162 16:23:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:48.162 16:23:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:48.162 16:23:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:48.162 16:23:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:48.162 16:23:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:48.162 16:23:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:48.162 16:23:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:48.162 16:23:14 nvmf_tcp.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:14:48.162 16:23:14 nvmf_tcp.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:14:48.162 16:23:14 nvmf_tcp.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:14:48.162 16:23:14 
nvmf_tcp.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:48.162 16:23:14 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@723 -- # xtrace_disable 00:14:48.162 16:23:14 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:48.162 16:23:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@481 -- # nvmfpid=3026969 00:14:48.162 16:23:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 3026969 00:14:48.162 16:23:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:14:48.162 16:23:14 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@830 -- # '[' -z 3026969 ']' 00:14:48.162 16:23:14 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:48.162 16:23:14 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@835 -- # local max_retries=100 00:14:48.162 16:23:14 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:48.162 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:48.162 16:23:14 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@839 -- # xtrace_disable 00:14:48.162 16:23:14 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:48.162 [2024-06-07 16:23:15.014143] Starting SPDK v24.09-pre git sha1 5a57befde / DPDK 24.03.0 initialization... 
00:14:48.162 [2024-06-07 16:23:15.014234] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:48.424 EAL: No free 2048 kB hugepages reported on node 1 00:14:48.424 [2024-06-07 16:23:15.106952] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:48.424 [2024-06-07 16:23:15.201708] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:48.424 [2024-06-07 16:23:15.201775] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:48.424 [2024-06-07 16:23:15.201784] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:48.424 [2024-06-07 16:23:15.201791] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:48.424 [2024-06-07 16:23:15.201798] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:48.424 [2024-06-07 16:23:15.201939] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 2 00:14:48.424 [2024-06-07 16:23:15.202105] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 3 00:14:48.424 [2024-06-07 16:23:15.202276] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:14:48.424 [2024-06-07 16:23:15.202275] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 4 00:14:48.995 16:23:15 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:14:48.995 16:23:15 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@863 -- # return 0 00:14:48.995 16:23:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:48.995 16:23:15 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@729 -- # xtrace_disable 00:14:48.995 16:23:15 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:48.995 16:23:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:48.995 16:23:15 nvmf_tcp.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:48.995 16:23:15 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:48.995 16:23:15 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:48.995 [2024-06-07 16:23:15.844879] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:49.256 16:23:15 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:49.256 16:23:15 nvmf_tcp.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:14:49.256 16:23:15 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@723 -- # xtrace_disable 00:14:49.256 16:23:15 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:49.256 16:23:15 
nvmf_tcp.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:14:49.256 16:23:15 nvmf_tcp.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:14:49.256 16:23:15 nvmf_tcp.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:14:49.256 16:23:15 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:49.256 16:23:15 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:49.256 Malloc0 00:14:49.256 [2024-06-07 16:23:15.908339] tcp.c: 982:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:49.256 16:23:15 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:49.256 16:23:15 nvmf_tcp.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:14:49.256 16:23:15 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@729 -- # xtrace_disable 00:14:49.256 16:23:15 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:49.256 16:23:15 nvmf_tcp.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=3027291 00:14:49.256 16:23:15 nvmf_tcp.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 3027291 /var/tmp/bdevperf.sock 00:14:49.256 16:23:15 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@830 -- # '[' -z 3027291 ']' 00:14:49.256 16:23:15 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:49.256 16:23:15 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@835 -- # local max_retries=100 00:14:49.256 16:23:15 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:14:49.256 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:49.256 16:23:15 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:14:49.256 16:23:15 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:14:49.256 16:23:15 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@839 -- # xtrace_disable 00:14:49.256 16:23:15 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:49.256 16:23:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:14:49.256 16:23:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:14:49.256 16:23:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:14:49.256 16:23:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:14:49.256 { 00:14:49.256 "params": { 00:14:49.256 "name": "Nvme$subsystem", 00:14:49.256 "trtype": "$TEST_TRANSPORT", 00:14:49.256 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:49.256 "adrfam": "ipv4", 00:14:49.256 "trsvcid": "$NVMF_PORT", 00:14:49.256 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:49.256 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:49.257 "hdgst": ${hdgst:-false}, 00:14:49.257 "ddgst": ${ddgst:-false} 00:14:49.257 }, 00:14:49.257 "method": "bdev_nvme_attach_controller" 00:14:49.257 } 00:14:49.257 EOF 00:14:49.257 )") 00:14:49.257 16:23:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:14:49.257 16:23:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 
00:14:49.257 16:23:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:14:49.257 16:23:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:14:49.257 "params": { 00:14:49.257 "name": "Nvme0", 00:14:49.257 "trtype": "tcp", 00:14:49.257 "traddr": "10.0.0.2", 00:14:49.257 "adrfam": "ipv4", 00:14:49.257 "trsvcid": "4420", 00:14:49.257 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:49.257 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:14:49.257 "hdgst": false, 00:14:49.257 "ddgst": false 00:14:49.257 }, 00:14:49.257 "method": "bdev_nvme_attach_controller" 00:14:49.257 }' 00:14:49.257 [2024-06-07 16:23:16.008648] Starting SPDK v24.09-pre git sha1 5a57befde / DPDK 24.03.0 initialization... 00:14:49.257 [2024-06-07 16:23:16.008697] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3027291 ] 00:14:49.257 EAL: No free 2048 kB hugepages reported on node 1 00:14:49.257 [2024-06-07 16:23:16.067536] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:49.517 [2024-06-07 16:23:16.132614] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:14:49.517 Running I/O for 10 seconds... 
00:14:50.090 16:23:16 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:14:50.090 16:23:16 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@863 -- # return 0 00:14:50.090 16:23:16 nvmf_tcp.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:14:50.090 16:23:16 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:50.090 16:23:16 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:50.090 16:23:16 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:50.090 16:23:16 nvmf_tcp.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:50.090 16:23:16 nvmf_tcp.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:14:50.090 16:23:16 nvmf_tcp.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:14:50.090 16:23:16 nvmf_tcp.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:14:50.090 16:23:16 nvmf_tcp.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:14:50.090 16:23:16 nvmf_tcp.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:14:50.090 16:23:16 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:14:50.091 16:23:16 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:14:50.091 16:23:16 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:14:50.091 16:23:16 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:14:50.091 16:23:16 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:50.091 
16:23:16 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:14:50.091 16:23:16 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:14:50.091 16:23:16 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=771
00:14:50.091 16:23:16 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 771 -ge 100 ']'
00:14:50.091 16:23:16 nvmf_tcp.nvmf_host_management -- target/host_management.sh@59 -- # ret=0
00:14:50.091 16:23:16 nvmf_tcp.nvmf_host_management -- target/host_management.sh@60 -- # break
00:14:50.091 16:23:16 nvmf_tcp.nvmf_host_management -- target/host_management.sh@64 -- # return 0
00:14:50.091 16:23:16 nvmf_tcp.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
00:14:50.091 16:23:16 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@560 -- # xtrace_disable
00:14:50.091 16:23:16 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:14:50.091 [2024-06-07 16:23:16.847380] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e93180 is same with the state(5) to be set
00:14:50.091 [... identical *ERROR* record repeated 48 more times, timestamps 16:23:16.847443 through 16:23:16.847653, elided ...]
00:14:50.091 [2024-06-07 16:23:16.850913] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:14:50.091 [2024-06-07 16:23:16.850954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:14:50.091 [2024-06-07 16:23:16.850968] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:14:50.091 [2024-06-07 16:23:16.850976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:14:50.091 [2024-06-07 16:23:16.850984] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:14:50.091 [2024-06-07 16:23:16.850991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:14:50.091 [2024-06-07 16:23:16.850999] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:14:50.091 [2024-06-07 16:23:16.851006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:14:50.091 [2024-06-07 16:23:16.851014] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe43510 is same with the state(5) to be set
00:14:50.091 [2024-06-07 16:23:16.851052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:108288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:14:50.091 [2024-06-07 16:23:16.851063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:14:50.092 [... 63 further command/completion record pairs elided: READ sqid:1 cid:15-63 (lba:108416-114560) and WRITE sqid:1 cid:0-13 (lba:114688-116352), each len:128, each completed ABORTED - SQ DELETION (00/08) ...]
00:14:50.093 [2024-06-07 16:23:16.852277] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x127c4b0 was disconnected and freed. reset controller.
00:14:50.093 16:23:16 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:14:50.093 16:23:16 nvmf_tcp.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
00:14:50.093 16:23:16 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@560 -- # xtrace_disable
00:14:50.093 16:23:16 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:14:50.093 [2024-06-07 16:23:16.853493] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller
00:14:50.093 task offset: 108288 on job bdev=Nvme0n1 fails
00:14:50.093
00:14:50.093 Latency(us)
00:14:50.093 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:14:50.093 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:14:50.093 Job: Nvme0n1 ended in about 0.53 seconds with error
00:14:50.093 Verification LBA range: start 0x0 length 0x400
00:14:50.093 Nvme0n1 : 0.53 1598.54 99.91 120.93 0.00 36267.62 1727.15 31457.28
00:14:50.093 ===================================================================================================================
00:14:50.093 Total : 1598.54 99.91 120.93 0.00 36267.62 1727.15 31457.28
00:14:50.093 [2024-06-07 16:23:16.855593] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:14:50.093 [2024-06-07 16:23:16.855618] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe43510 (9): Bad file descriptor
00:14:50.093 16:23:16 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:14:50.093 16:23:16 nvmf_tcp.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1
00:14:50.354 [2024-06-07 16:23:16.989653] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:14:51.298 16:23:17 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 3027291 00:14:51.298 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (3027291) - No such process 00:14:51.298 16:23:17 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # true 00:14:51.298 16:23:17 nvmf_tcp.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:14:51.298 16:23:17 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:14:51.298 16:23:17 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:14:51.298 16:23:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:14:51.298 16:23:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:14:51.298 16:23:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:14:51.298 16:23:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:14:51.298 { 00:14:51.298 "params": { 00:14:51.298 "name": "Nvme$subsystem", 00:14:51.298 "trtype": "$TEST_TRANSPORT", 00:14:51.298 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:51.298 "adrfam": "ipv4", 00:14:51.298 "trsvcid": "$NVMF_PORT", 00:14:51.298 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:51.298 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:51.298 "hdgst": ${hdgst:-false}, 00:14:51.298 "ddgst": ${ddgst:-false} 00:14:51.298 }, 00:14:51.298 "method": "bdev_nvme_attach_controller" 00:14:51.298 } 00:14:51.298 EOF 00:14:51.298 )") 00:14:51.298 16:23:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:14:51.298 16:23:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 
00:14:51.298 16:23:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:14:51.298 16:23:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:14:51.298 "params": { 00:14:51.298 "name": "Nvme0", 00:14:51.298 "trtype": "tcp", 00:14:51.298 "traddr": "10.0.0.2", 00:14:51.298 "adrfam": "ipv4", 00:14:51.298 "trsvcid": "4420", 00:14:51.298 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:51.298 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:14:51.298 "hdgst": false, 00:14:51.298 "ddgst": false 00:14:51.298 }, 00:14:51.298 "method": "bdev_nvme_attach_controller" 00:14:51.298 }' 00:14:51.298 [2024-06-07 16:23:17.929729] Starting SPDK v24.09-pre git sha1 5a57befde / DPDK 24.03.0 initialization... 00:14:51.298 [2024-06-07 16:23:17.929783] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3027642 ] 00:14:51.298 EAL: No free 2048 kB hugepages reported on node 1 00:14:51.298 [2024-06-07 16:23:17.988319] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:51.298 [2024-06-07 16:23:18.050754] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:14:51.559 Running I/O for 1 seconds... 
00:14:52.944 00:14:52.944 Latency(us) 00:14:52.944 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:52.944 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:14:52.944 Verification LBA range: start 0x0 length 0x400 00:14:52.944 Nvme0n1 : 1.04 1292.24 80.76 0.00 0.00 48764.00 12069.55 37792.43 00:14:52.944 =================================================================================================================== 00:14:52.944 Total : 1292.24 80.76 0.00 0.00 48764.00 12069.55 37792.43 00:14:52.944 16:23:19 nvmf_tcp.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:14:52.944 16:23:19 nvmf_tcp.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:14:52.944 16:23:19 nvmf_tcp.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:14:52.944 16:23:19 nvmf_tcp.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:14:52.944 16:23:19 nvmf_tcp.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:14:52.944 16:23:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:52.944 16:23:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@117 -- # sync 00:14:52.944 16:23:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:52.944 16:23:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@120 -- # set +e 00:14:52.944 16:23:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:52.944 16:23:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:52.944 rmmod nvme_tcp 00:14:52.944 rmmod nvme_fabrics 00:14:52.944 rmmod nvme_keyring 00:14:52.944 16:23:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:52.944 
16:23:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@124 -- # set -e 00:14:52.944 16:23:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@125 -- # return 0 00:14:52.944 16:23:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@489 -- # '[' -n 3026969 ']' 00:14:52.944 16:23:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 3026969 00:14:52.944 16:23:19 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@949 -- # '[' -z 3026969 ']' 00:14:52.944 16:23:19 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # kill -0 3026969 00:14:52.944 16:23:19 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # uname 00:14:52.944 16:23:19 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:14:52.944 16:23:19 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3026969 00:14:52.944 16:23:19 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:14:52.944 16:23:19 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:14:52.944 16:23:19 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3026969' 00:14:52.944 killing process with pid 3026969 00:14:52.944 16:23:19 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@968 -- # kill 3026969 00:14:52.944 16:23:19 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@973 -- # wait 3026969 00:14:52.944 [2024-06-07 16:23:19.727590] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:14:52.944 16:23:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:52.944 16:23:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:52.944 16:23:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:52.944 16:23:19 nvmf_tcp.nvmf_host_management -- 
nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:52.944 16:23:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:52.944 16:23:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:52.944 16:23:19 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:52.944 16:23:19 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:55.490 16:23:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:55.490 16:23:21 nvmf_tcp.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:14:55.490 00:14:55.490 real 0m14.185s 00:14:55.490 user 0m22.902s 00:14:55.490 sys 0m6.301s 00:14:55.490 16:23:21 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1125 -- # xtrace_disable 00:14:55.490 16:23:21 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:55.491 ************************************ 00:14:55.491 END TEST nvmf_host_management 00:14:55.491 ************************************ 00:14:55.491 16:23:21 nvmf_tcp -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:14:55.491 16:23:21 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:14:55.491 16:23:21 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:14:55.491 16:23:21 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:55.491 ************************************ 00:14:55.491 START TEST nvmf_lvol 00:14:55.491 ************************************ 00:14:55.491 16:23:21 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:14:55.491 * Looking for test storage... 
00:14:55.491 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:55.491 16:23:21 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:55.491 16:23:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:14:55.491 16:23:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:55.491 16:23:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:55.491 16:23:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:55.491 16:23:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:55.491 16:23:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:55.491 16:23:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:55.491 16:23:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:55.491 16:23:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:55.491 16:23:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:55.491 16:23:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:55.491 16:23:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:55.491 16:23:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:55.491 16:23:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:55.491 16:23:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:55.491 16:23:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:55.491 16:23:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:55.491 16:23:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:55.491 16:23:21 nvmf_tcp.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:55.491 16:23:21 nvmf_tcp.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:55.491 16:23:21 nvmf_tcp.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:55.491 16:23:21 nvmf_tcp.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:55.491 16:23:21 nvmf_tcp.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:55.491 16:23:21 nvmf_tcp.nvmf_lvol -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:55.491 16:23:21 nvmf_tcp.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:14:55.491 16:23:21 nvmf_tcp.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:55.491 16:23:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:14:55.491 16:23:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:55.491 16:23:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:55.491 16:23:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:55.491 16:23:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:55.491 16:23:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:55.491 16:23:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:55.491 16:23:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:55.491 16:23:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@51 -- # 
have_pci_nics=0 00:14:55.491 16:23:21 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:55.491 16:23:21 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:55.491 16:23:21 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:14:55.491 16:23:21 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:14:55.491 16:23:21 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:55.491 16:23:21 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:14:55.491 16:23:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:55.491 16:23:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:55.491 16:23:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:55.491 16:23:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:55.491 16:23:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:55.491 16:23:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:55.491 16:23:21 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:55.491 16:23:21 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:55.491 16:23:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:55.491 16:23:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:55.491 16:23:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@285 -- # xtrace_disable 00:14:55.491 16:23:21 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:15:02.077 16:23:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:02.077 16:23:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # pci_devs=() 00:15:02.077 16:23:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # 
local -a pci_devs 00:15:02.077 16:23:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:02.077 16:23:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:02.077 16:23:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:02.077 16:23:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:02.077 16:23:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # net_devs=() 00:15:02.077 16:23:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:02.077 16:23:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # e810=() 00:15:02.077 16:23:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # local -ga e810 00:15:02.077 16:23:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # x722=() 00:15:02.077 16:23:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # local -ga x722 00:15:02.077 16:23:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # mlx=() 00:15:02.077 16:23:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # local -ga mlx 00:15:02.077 16:23:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:02.077 16:23:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:02.077 16:23:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:02.077 16:23:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:02.077 16:23:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:02.077 16:23:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:02.077 16:23:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:02.077 16:23:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:02.077 16:23:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:02.077 16:23:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:02.077 16:23:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:02.077 16:23:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:02.077 16:23:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:02.077 16:23:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:02.077 16:23:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:02.077 16:23:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:02.077 16:23:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:02.077 16:23:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:02.077 16:23:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:15:02.077 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:15:02.077 16:23:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:02.077 16:23:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:02.077 16:23:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:02.077 16:23:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:02.077 16:23:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:02.077 16:23:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:02.077 16:23:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:15:02.077 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:15:02.077 16:23:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:02.077 16:23:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:02.077 16:23:28 nvmf_tcp.nvmf_lvol -- 
nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:02.077 16:23:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:02.077 16:23:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:02.077 16:23:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:02.077 16:23:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:02.077 16:23:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:02.077 16:23:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:02.077 16:23:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:02.077 16:23:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:02.077 16:23:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:02.077 16:23:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:02.077 16:23:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:02.077 16:23:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:02.077 16:23:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:15:02.077 Found net devices under 0000:4b:00.0: cvl_0_0 00:15:02.077 16:23:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:02.077 16:23:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:02.077 16:23:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:02.077 16:23:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:02.077 16:23:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:02.077 16:23:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:02.077 16:23:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # 
(( 1 == 0 )) 00:15:02.077 16:23:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:02.077 16:23:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:15:02.077 Found net devices under 0000:4b:00.1: cvl_0_1 00:15:02.077 16:23:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:02.077 16:23:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:02.077 16:23:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # is_hw=yes 00:15:02.077 16:23:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:02.077 16:23:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:02.077 16:23:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:02.077 16:23:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:02.077 16:23:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:02.077 16:23:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:02.077 16:23:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:02.077 16:23:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:02.077 16:23:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:02.077 16:23:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:02.077 16:23:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:02.077 16:23:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:02.077 16:23:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:02.077 16:23:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:02.077 16:23:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 
00:15:02.077 16:23:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:02.077 16:23:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:02.077 16:23:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:02.077 16:23:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:02.077 16:23:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:02.077 16:23:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:02.077 16:23:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:02.077 16:23:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:02.077 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:02.077 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.453 ms 00:15:02.077 00:15:02.077 --- 10.0.0.2 ping statistics --- 00:15:02.077 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:02.077 rtt min/avg/max/mdev = 0.453/0.453/0.453/0.000 ms 00:15:02.077 16:23:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:02.338 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:02.338 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.403 ms 00:15:02.338 00:15:02.338 --- 10.0.0.1 ping statistics --- 00:15:02.338 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:02.338 rtt min/avg/max/mdev = 0.403/0.403/0.403/0.000 ms 00:15:02.338 16:23:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:02.338 16:23:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@422 -- # return 0 00:15:02.338 16:23:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:02.338 16:23:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:02.338 16:23:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:02.338 16:23:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:02.338 16:23:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:02.338 16:23:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:02.339 16:23:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:02.339 16:23:28 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:15:02.339 16:23:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:02.339 16:23:28 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@723 -- # xtrace_disable 00:15:02.339 16:23:28 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:15:02.339 16:23:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=3032087 00:15:02.339 16:23:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 3032087 00:15:02.339 16:23:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:15:02.339 16:23:28 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@830 -- # '[' -z 3032087 ']' 00:15:02.339 16:23:28 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@834 
-- # local rpc_addr=/var/tmp/spdk.sock 00:15:02.339 16:23:28 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@835 -- # local max_retries=100 00:15:02.339 16:23:28 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:02.339 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:02.339 16:23:28 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@839 -- # xtrace_disable 00:15:02.339 16:23:28 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:15:02.339 [2024-06-07 16:23:29.024102] Starting SPDK v24.09-pre git sha1 5a57befde / DPDK 24.03.0 initialization... 00:15:02.339 [2024-06-07 16:23:29.024167] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:02.339 EAL: No free 2048 kB hugepages reported on node 1 00:15:02.339 [2024-06-07 16:23:29.097121] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:02.339 [2024-06-07 16:23:29.173543] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:02.339 [2024-06-07 16:23:29.173582] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:02.339 [2024-06-07 16:23:29.173590] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:02.339 [2024-06-07 16:23:29.173596] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:02.339 [2024-06-07 16:23:29.173602] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:02.339 [2024-06-07 16:23:29.173748] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:15:02.339 [2024-06-07 16:23:29.173888] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 2 00:15:02.339 [2024-06-07 16:23:29.173890] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:15:03.281 16:23:29 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:15:03.281 16:23:29 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@863 -- # return 0 00:15:03.281 16:23:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:03.281 16:23:29 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@729 -- # xtrace_disable 00:15:03.281 16:23:29 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:15:03.281 16:23:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:03.281 16:23:29 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:15:03.281 [2024-06-07 16:23:29.990798] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:03.281 16:23:30 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:03.541 16:23:30 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:15:03.541 16:23:30 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:03.541 16:23:30 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:15:03.541 16:23:30 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:15:03.802 16:23:30 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:15:04.063 16:23:30 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=a178e966-e66b-4679-8018-f3ca133ad65d 00:15:04.063 16:23:30 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u a178e966-e66b-4679-8018-f3ca133ad65d lvol 20 00:15:04.063 16:23:30 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=d85fc6ce-80e9-49d7-93ea-64ef9642e59f 00:15:04.063 16:23:30 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:15:04.325 16:23:31 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 d85fc6ce-80e9-49d7-93ea-64ef9642e59f 00:15:04.586 16:23:31 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:15:04.586 [2024-06-07 16:23:31.359983] tcp.c: 982:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:04.586 16:23:31 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:04.847 16:23:31 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=3032695 00:15:04.847 16:23:31 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:15:04.847 16:23:31 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:15:04.847 EAL: No free 2048 kB hugepages reported on node 1 
00:15:05.797 16:23:32 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot d85fc6ce-80e9-49d7-93ea-64ef9642e59f MY_SNAPSHOT
00:15:06.162 16:23:32 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=e25804f4-e6e7-4d17-8da5-d06fe5bff022
00:15:06.162 16:23:32 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize d85fc6ce-80e9-49d7-93ea-64ef9642e59f 30
00:15:06.162 16:23:32 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone e25804f4-e6e7-4d17-8da5-d06fe5bff022 MY_CLONE
00:15:06.423 16:23:33 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=15b9646c-82b3-4afd-acab-16e07138657f
00:15:06.423 16:23:33 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 15b9646c-82b3-4afd-acab-16e07138657f
00:15:06.683 16:23:33 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 3032695
00:15:16.687 Initializing NVMe Controllers
00:15:16.687 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0
00:15:16.687 Controller IO queue size 128, less than required.
00:15:16.687 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:15:16.687 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3
00:15:16.687 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4
00:15:16.687 Initialization complete. Launching workers.
00:15:16.687 ========================================================
00:15:16.687 Latency(us)
00:15:16.687 Device Information : IOPS MiB/s Average min max
00:15:16.687 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 17255.90 67.41 7418.98 1315.22 51151.53
00:15:16.687 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 11855.10 46.31 10799.88 3920.38 48205.02
00:15:16.687 ========================================================
00:15:16.687 Total : 29111.00 113.71 8795.81 1315.22 51151.53
00:15:16.687
00:15:16.687 16:23:41 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:15:16.687 16:23:41 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete d85fc6ce-80e9-49d7-93ea-64ef9642e59f
00:15:16.687 16:23:42 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u a178e966-e66b-4679-8018-f3ca133ad65d
00:15:16.687 16:23:42 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f
00:15:16.687 16:23:42 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT
00:15:16.687 16:23:42 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini
00:15:16.687 16:23:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup
00:15:16.687 16:23:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@117 -- # sync
00:15:16.687 16:23:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:15:16.687 16:23:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@120 -- # set +e
00:15:16.687 16:23:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20}
00:15:16.687 16:23:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:15:16.687 rmmod nvme_tcp
00:15:16.687 rmmod nvme_fabrics
00:15:16.687 rmmod nvme_keyring
00:15:16.687 
16:23:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:16.687 16:23:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@124 -- # set -e 00:15:16.687 16:23:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@125 -- # return 0 00:15:16.687 16:23:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 3032087 ']' 00:15:16.687 16:23:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 3032087 00:15:16.687 16:23:42 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@949 -- # '[' -z 3032087 ']' 00:15:16.687 16:23:42 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # kill -0 3032087 00:15:16.687 16:23:42 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # uname 00:15:16.687 16:23:42 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:15:16.687 16:23:42 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3032087 00:15:16.687 16:23:42 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:15:16.687 16:23:42 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:15:16.687 16:23:42 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3032087' 00:15:16.687 killing process with pid 3032087 00:15:16.687 16:23:42 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@968 -- # kill 3032087 00:15:16.687 16:23:42 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@973 -- # wait 3032087 00:15:16.687 16:23:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:16.687 16:23:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:16.687 16:23:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:16.687 16:23:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:16.687 16:23:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:16.687 16:23:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns
00:15:16.687 16:23:42 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:15:16.687 16:23:42 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:15:18.075 16:23:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:15:18.075
00:15:18.075 real 0m22.751s
00:15:18.075 user 1m3.209s
00:15:18.075 sys 0m7.500s
00:15:18.075 16:23:44 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1125 -- # xtrace_disable
00:15:18.075 16:23:44 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x
00:15:18.075 ************************************
00:15:18.075 END TEST nvmf_lvol
00:15:18.075 ************************************
00:15:18.075 16:23:44 nvmf_tcp -- nvmf/nvmf.sh@49 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp
00:15:18.075 16:23:44 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']'
00:15:18.075 16:23:44 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable
00:15:18.075 16:23:44 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:15:18.075 ************************************
00:15:18.075 START TEST nvmf_lvs_grow
00:15:18.075 ************************************
00:15:18.075 16:23:44 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp
00:15:18.075 * Looking for test storage...
00:15:18.075 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:18.075 16:23:44 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:18.075 16:23:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:15:18.075 16:23:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:18.075 16:23:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:18.075 16:23:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:18.075 16:23:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:18.075 16:23:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:18.075 16:23:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:18.075 16:23:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:18.075 16:23:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:18.075 16:23:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:18.075 16:23:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:18.075 16:23:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:18.075 16:23:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:18.075 16:23:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:18.075 16:23:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:18.075 16:23:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:18.075 16:23:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:18.075 16:23:44 
nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:18.075 16:23:44 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:18.075 16:23:44 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:18.075 16:23:44 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:18.075 16:23:44 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:18.075 16:23:44 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:18.076 16:23:44 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:18.076 16:23:44 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:15:18.076 16:23:44 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:18.076 16:23:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:15:18.076 16:23:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:18.076 16:23:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:18.076 16:23:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:18.076 16:23:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:18.076 16:23:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:18.076 16:23:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:18.076 16:23:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:18.076 16:23:44 
nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:18.076 16:23:44 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:18.076 16:23:44 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:18.076 16:23:44 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:15:18.076 16:23:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:18.076 16:23:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:18.076 16:23:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:18.076 16:23:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:18.076 16:23:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:18.076 16:23:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:18.076 16:23:44 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:18.076 16:23:44 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:18.076 16:23:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:18.076 16:23:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:18.076 16:23:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@285 -- # xtrace_disable 00:15:18.076 16:23:44 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:15:26.240 16:23:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:26.240 16:23:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # pci_devs=() 00:15:26.240 16:23:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:26.240 16:23:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:26.240 16:23:51 
nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:26.240 16:23:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:26.240 16:23:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:26.240 16:23:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # net_devs=() 00:15:26.240 16:23:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:26.240 16:23:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # e810=() 00:15:26.240 16:23:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # local -ga e810 00:15:26.240 16:23:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # x722=() 00:15:26.240 16:23:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # local -ga x722 00:15:26.240 16:23:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # mlx=() 00:15:26.240 16:23:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # local -ga mlx 00:15:26.240 16:23:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:26.240 16:23:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:26.240 16:23:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:26.240 16:23:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:26.240 16:23:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:26.240 16:23:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:26.240 16:23:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:26.240 16:23:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:26.240 16:23:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:26.240 16:23:51 
nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:26.240 16:23:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:26.240 16:23:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:26.240 16:23:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:26.240 16:23:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:26.240 16:23:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:26.240 16:23:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:26.240 16:23:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:26.240 16:23:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:26.240 16:23:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:15:26.240 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:15:26.240 16:23:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:26.240 16:23:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:26.240 16:23:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:26.240 16:23:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:26.240 16:23:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:26.240 16:23:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:26.240 16:23:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:15:26.240 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:15:26.240 16:23:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:26.240 16:23:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:26.240 16:23:51 
nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:26.240 16:23:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:26.240 16:23:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:26.240 16:23:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:26.240 16:23:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:26.240 16:23:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:26.240 16:23:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:26.240 16:23:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:26.240 16:23:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:26.240 16:23:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:26.240 16:23:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:26.240 16:23:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:26.240 16:23:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:26.240 16:23:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:15:26.240 Found net devices under 0000:4b:00.0: cvl_0_0 00:15:26.240 16:23:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:26.240 16:23:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:26.240 16:23:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:26.240 16:23:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:26.240 16:23:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:26.240 16:23:51 nvmf_tcp.nvmf_lvs_grow -- 
nvmf/common.sh@390 -- # [[ up == up ]] 00:15:26.240 16:23:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:26.240 16:23:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:26.240 16:23:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:15:26.240 Found net devices under 0000:4b:00.1: cvl_0_1 00:15:26.240 16:23:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:26.240 16:23:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:26.240 16:23:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # is_hw=yes 00:15:26.240 16:23:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:26.240 16:23:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:26.240 16:23:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:26.240 16:23:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:26.240 16:23:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:26.240 16:23:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:26.241 16:23:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:26.241 16:23:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:26.241 16:23:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:26.241 16:23:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:26.241 16:23:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:26.241 16:23:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:26.241 16:23:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 
00:15:26.241 16:23:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:15:26.241 16:23:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:15:26.241 16:23:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:15:26.241 16:23:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:15:26.241 16:23:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:15:26.241 16:23:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:15:26.241 16:23:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:15:26.241 16:23:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:15:26.241 16:23:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:15:26.241 16:23:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:15:26.241 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:15:26.241 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.482 ms
00:15:26.241
00:15:26.241 --- 10.0.0.2 ping statistics ---
00:15:26.241 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:15:26.241 rtt min/avg/max/mdev = 0.482/0.482/0.482/0.000 ms
00:15:26.241 16:23:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:15:26.241 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:15:26.241 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.300 ms
00:15:26.241
00:15:26.241 --- 10.0.0.1 ping statistics ---
00:15:26.241 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:15:26.241 rtt min/avg/max/mdev = 0.300/0.300/0.300/0.000 ms
00:15:26.241 16:23:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:15:26.241 16:23:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@422 -- # return 0
00:15:26.241 16:23:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:15:26.241 16:23:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:15:26.241 16:23:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:15:26.241 16:23:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:15:26.241 16:23:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:15:26.241 16:23:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:15:26.241 16:23:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:15:26.241 16:23:52 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1
00:15:26.241 16:23:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:15:26.241 16:23:52 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@723 -- # xtrace_disable
00:15:26.241 16:23:52 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x
00:15:26.241 16:23:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=3039027
00:15:26.241 16:23:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1
00:15:26.241 16:23:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 3039027
00:15:26.241 16:23:52 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@830 -- # '[' -z 3039027 ']' 
00:15:26.241 16:23:52 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:26.241 16:23:52 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # local max_retries=100 00:15:26.241 16:23:52 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:26.241 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:26.241 16:23:52 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # xtrace_disable 00:15:26.241 16:23:52 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:15:26.241 [2024-06-07 16:23:52.141496] Starting SPDK v24.09-pre git sha1 5a57befde / DPDK 24.03.0 initialization... 00:15:26.241 [2024-06-07 16:23:52.141543] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:26.241 EAL: No free 2048 kB hugepages reported on node 1 00:15:26.241 [2024-06-07 16:23:52.205934] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:26.241 [2024-06-07 16:23:52.268972] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:26.241 [2024-06-07 16:23:52.269009] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:26.241 [2024-06-07 16:23:52.269016] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:26.241 [2024-06-07 16:23:52.269023] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:26.241 [2024-06-07 16:23:52.269028] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:26.241 [2024-06-07 16:23:52.269048] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:15:26.241 16:23:52 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:15:26.241 16:23:52 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@863 -- # return 0 00:15:26.241 16:23:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:26.241 16:23:52 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@729 -- # xtrace_disable 00:15:26.241 16:23:52 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:15:26.241 16:23:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:26.241 16:23:52 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:15:26.502 [2024-06-07 16:23:53.096140] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:26.502 16:23:53 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:15:26.502 16:23:53 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:15:26.502 16:23:53 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1106 -- # xtrace_disable 00:15:26.502 16:23:53 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:15:26.502 ************************************ 00:15:26.502 START TEST lvs_grow_clean 00:15:26.502 ************************************ 00:15:26.502 16:23:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1124 -- # lvs_grow 00:15:26.502 16:23:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:15:26.502 16:23:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:15:26.502 16:23:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- 
target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:15:26.502 16:23:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:15:26.502 16:23:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:15:26.502 16:23:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:15:26.502 16:23:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:15:26.502 16:23:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:15:26.502 16:23:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:15:26.763 16:23:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:15:26.763 16:23:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:15:26.763 16:23:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=e0318fa7-8fd1-4712-8d45-45d4950e24d3 00:15:26.763 16:23:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e0318fa7-8fd1-4712-8d45-45d4950e24d3 00:15:26.763 16:23:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:15:27.032 16:23:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- 
target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:15:27.032 16:23:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:15:27.032 16:23:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u e0318fa7-8fd1-4712-8d45-45d4950e24d3 lvol 150 00:15:27.032 16:23:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=ea34fb52-a8f0-4e49-ac93-bb94b431ce43 00:15:27.032 16:23:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:15:27.032 16:23:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:15:27.292 [2024-06-07 16:23:53.971388] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:15:27.292 [2024-06-07 16:23:53.971444] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:15:27.292 true 00:15:27.292 16:23:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e0318fa7-8fd1-4712-8d45-45d4950e24d3 00:15:27.292 16:23:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:15:27.292 16:23:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:15:27.292 16:23:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 
00:15:27.552 16:23:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 ea34fb52-a8f0-4e49-ac93-bb94b431ce43 00:15:27.813 16:23:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:15:27.813 [2024-06-07 16:23:54.537112] tcp.c: 982:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:27.813 16:23:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:28.074 16:23:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3039424 00:15:28.074 16:23:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:28.074 16:23:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:15:28.074 16:23:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3039424 /var/tmp/bdevperf.sock 00:15:28.074 16:23:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@830 -- # '[' -z 3039424 ']' 00:15:28.074 16:23:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:28.074 16:23:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # local max_retries=100 00:15:28.074 16:23:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@837 -- # echo 
'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:28.074 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:28.074 16:23:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # xtrace_disable 00:15:28.074 16:23:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:15:28.074 [2024-06-07 16:23:54.751866] Starting SPDK v24.09-pre git sha1 5a57befde / DPDK 24.03.0 initialization... 00:15:28.074 [2024-06-07 16:23:54.751915] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3039424 ] 00:15:28.074 EAL: No free 2048 kB hugepages reported on node 1 00:15:28.074 [2024-06-07 16:23:54.827258] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:28.074 [2024-06-07 16:23:54.891373] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:15:29.017 16:23:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:15:29.017 16:23:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@863 -- # return 0 00:15:29.017 16:23:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:15:29.278 Nvme0n1 00:15:29.278 16:23:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:15:29.278 [ 00:15:29.278 { 00:15:29.278 "name": "Nvme0n1", 00:15:29.278 "aliases": [ 00:15:29.278 "ea34fb52-a8f0-4e49-ac93-bb94b431ce43" 00:15:29.278 ], 00:15:29.278 
"product_name": "NVMe disk", 00:15:29.278 "block_size": 4096, 00:15:29.278 "num_blocks": 38912, 00:15:29.278 "uuid": "ea34fb52-a8f0-4e49-ac93-bb94b431ce43", 00:15:29.278 "assigned_rate_limits": { 00:15:29.278 "rw_ios_per_sec": 0, 00:15:29.278 "rw_mbytes_per_sec": 0, 00:15:29.278 "r_mbytes_per_sec": 0, 00:15:29.278 "w_mbytes_per_sec": 0 00:15:29.278 }, 00:15:29.278 "claimed": false, 00:15:29.278 "zoned": false, 00:15:29.278 "supported_io_types": { 00:15:29.278 "read": true, 00:15:29.278 "write": true, 00:15:29.278 "unmap": true, 00:15:29.278 "write_zeroes": true, 00:15:29.278 "flush": true, 00:15:29.278 "reset": true, 00:15:29.278 "compare": true, 00:15:29.278 "compare_and_write": true, 00:15:29.278 "abort": true, 00:15:29.278 "nvme_admin": true, 00:15:29.278 "nvme_io": true 00:15:29.278 }, 00:15:29.278 "memory_domains": [ 00:15:29.278 { 00:15:29.278 "dma_device_id": "system", 00:15:29.278 "dma_device_type": 1 00:15:29.278 } 00:15:29.278 ], 00:15:29.278 "driver_specific": { 00:15:29.278 "nvme": [ 00:15:29.278 { 00:15:29.278 "trid": { 00:15:29.278 "trtype": "TCP", 00:15:29.278 "adrfam": "IPv4", 00:15:29.278 "traddr": "10.0.0.2", 00:15:29.278 "trsvcid": "4420", 00:15:29.278 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:15:29.278 }, 00:15:29.278 "ctrlr_data": { 00:15:29.278 "cntlid": 1, 00:15:29.278 "vendor_id": "0x8086", 00:15:29.278 "model_number": "SPDK bdev Controller", 00:15:29.278 "serial_number": "SPDK0", 00:15:29.278 "firmware_revision": "24.09", 00:15:29.278 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:15:29.278 "oacs": { 00:15:29.278 "security": 0, 00:15:29.278 "format": 0, 00:15:29.278 "firmware": 0, 00:15:29.278 "ns_manage": 0 00:15:29.278 }, 00:15:29.278 "multi_ctrlr": true, 00:15:29.278 "ana_reporting": false 00:15:29.278 }, 00:15:29.278 "vs": { 00:15:29.278 "nvme_version": "1.3" 00:15:29.278 }, 00:15:29.278 "ns_data": { 00:15:29.278 "id": 1, 00:15:29.278 "can_share": true 00:15:29.278 } 00:15:29.278 } 00:15:29.278 ], 00:15:29.278 "mp_policy": "active_passive" 
00:15:29.278 } 00:15:29.278 } 00:15:29.278 ] 00:15:29.278 16:23:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3039758 00:15:29.278 16:23:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:15:29.278 16:23:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:29.538 Running I/O for 10 seconds... 00:15:30.482 Latency(us) 00:15:30.482 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:30.482 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:30.482 Nvme0n1 : 1.00 17994.00 70.29 0.00 0.00 0.00 0.00 0.00 00:15:30.482 =================================================================================================================== 00:15:30.482 Total : 17994.00 70.29 0.00 0.00 0.00 0.00 0.00 00:15:30.482 00:15:31.424 16:23:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u e0318fa7-8fd1-4712-8d45-45d4950e24d3 00:15:31.424 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:31.424 Nvme0n1 : 2.00 18149.00 70.89 0.00 0.00 0.00 0.00 0.00 00:15:31.424 =================================================================================================================== 00:15:31.424 Total : 18149.00 70.89 0.00 0.00 0.00 0.00 0.00 00:15:31.424 00:15:31.424 true 00:15:31.424 16:23:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e0318fa7-8fd1-4712-8d45-45d4950e24d3 00:15:31.424 16:23:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:15:31.685 16:23:58 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:15:31.685 16:23:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:15:31.685 16:23:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 3039758 00:15:32.627 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:32.627 Nvme0n1 : 3.00 18200.33 71.10 0.00 0.00 0.00 0.00 0.00 00:15:32.627 =================================================================================================================== 00:15:32.627 Total : 18200.33 71.10 0.00 0.00 0.00 0.00 0.00 00:15:32.627 00:15:33.617 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:33.618 Nvme0n1 : 4.00 18226.50 71.20 0.00 0.00 0.00 0.00 0.00 00:15:33.618 =================================================================================================================== 00:15:33.618 Total : 18226.50 71.20 0.00 0.00 0.00 0.00 0.00 00:15:33.618 00:15:34.559 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:34.559 Nvme0n1 : 5.00 18241.80 71.26 0.00 0.00 0.00 0.00 0.00 00:15:34.559 =================================================================================================================== 00:15:34.559 Total : 18241.80 71.26 0.00 0.00 0.00 0.00 0.00 00:15:34.559 00:15:35.499 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:35.499 Nvme0n1 : 6.00 18252.17 71.30 0.00 0.00 0.00 0.00 0.00 00:15:35.499 =================================================================================================================== 00:15:35.499 Total : 18252.17 71.30 0.00 0.00 0.00 0.00 0.00 00:15:35.499 00:15:36.440 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:36.440 Nvme0n1 : 7.00 18271.14 71.37 0.00 0.00 0.00 0.00 0.00 00:15:36.440 
=================================================================================================================== 00:15:36.440 Total : 18271.14 71.37 0.00 0.00 0.00 0.00 0.00 00:15:36.440 00:15:37.380 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:37.380 Nvme0n1 : 8.00 18289.25 71.44 0.00 0.00 0.00 0.00 0.00 00:15:37.380 =================================================================================================================== 00:15:37.380 Total : 18289.25 71.44 0.00 0.00 0.00 0.00 0.00 00:15:37.380 00:15:38.764 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:38.764 Nvme0n1 : 9.00 18297.89 71.48 0.00 0.00 0.00 0.00 0.00 00:15:38.764 =================================================================================================================== 00:15:38.764 Total : 18297.89 71.48 0.00 0.00 0.00 0.00 0.00 00:15:38.764 00:15:39.336 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:39.336 Nvme0n1 : 10.00 18311.40 71.53 0.00 0.00 0.00 0.00 0.00 00:15:39.336 =================================================================================================================== 00:15:39.336 Total : 18311.40 71.53 0.00 0.00 0.00 0.00 0.00 00:15:39.336 00:15:39.597 00:15:39.597 Latency(us) 00:15:39.597 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:39.597 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:39.597 Nvme0n1 : 10.01 18311.20 71.53 0.00 0.00 6987.53 4314.45 16274.77 00:15:39.597 =================================================================================================================== 00:15:39.597 Total : 18311.20 71.53 0.00 0.00 6987.53 4314.45 16274.77 00:15:39.597 0 00:15:39.597 16:24:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3039424 00:15:39.597 16:24:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@949 -- # '[' -z 
3039424 ']' 00:15:39.597 16:24:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # kill -0 3039424 00:15:39.597 16:24:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # uname 00:15:39.597 16:24:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:15:39.597 16:24:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3039424 00:15:39.597 16:24:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:15:39.597 16:24:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:15:39.597 16:24:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3039424' 00:15:39.597 killing process with pid 3039424 00:15:39.597 16:24:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@968 -- # kill 3039424 00:15:39.597 Received shutdown signal, test time was about 10.000000 seconds 00:15:39.597 00:15:39.597 Latency(us) 00:15:39.597 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:39.597 =================================================================================================================== 00:15:39.597 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:39.597 16:24:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # wait 3039424 00:15:39.597 16:24:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:39.857 16:24:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:15:40.117 16:24:06 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e0318fa7-8fd1-4712-8d45-45d4950e24d3 00:15:40.117 16:24:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:15:40.117 16:24:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:15:40.117 16:24:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:15:40.117 16:24:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:15:40.378 [2024-06-07 16:24:07.050620] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:15:40.378 16:24:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e0318fa7-8fd1-4712-8d45-45d4950e24d3 00:15:40.378 16:24:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@649 -- # local es=0 00:15:40.378 16:24:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e0318fa7-8fd1-4712-8d45-45d4950e24d3 00:15:40.378 16:24:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@637 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:40.378 16:24:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:15:40.378 16:24:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@641 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:40.379 16:24:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- 
common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:15:40.379 16:24:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@643 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:40.379 16:24:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:15:40.379 16:24:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@643 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:40.379 16:24:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@643 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:15:40.379 16:24:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e0318fa7-8fd1-4712-8d45-45d4950e24d3 00:15:40.640 request: 00:15:40.640 { 00:15:40.640 "uuid": "e0318fa7-8fd1-4712-8d45-45d4950e24d3", 00:15:40.640 "method": "bdev_lvol_get_lvstores", 00:15:40.640 "req_id": 1 00:15:40.640 } 00:15:40.640 Got JSON-RPC error response 00:15:40.640 response: 00:15:40.640 { 00:15:40.640 "code": -19, 00:15:40.640 "message": "No such device" 00:15:40.640 } 00:15:40.640 16:24:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # es=1 00:15:40.640 16:24:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:15:40.640 16:24:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:15:40.640 16:24:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:15:40.640 16:24:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:15:40.640 aio_bdev 
00:15:40.640 16:24:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev ea34fb52-a8f0-4e49-ac93-bb94b431ce43 00:15:40.640 16:24:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # local bdev_name=ea34fb52-a8f0-4e49-ac93-bb94b431ce43 00:15:40.640 16:24:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:15:40.640 16:24:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # local i 00:15:40.640 16:24:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:15:40.640 16:24:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:15:40.640 16:24:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:15:40.900 16:24:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b ea34fb52-a8f0-4e49-ac93-bb94b431ce43 -t 2000 00:15:40.900 [ 00:15:40.900 { 00:15:40.900 "name": "ea34fb52-a8f0-4e49-ac93-bb94b431ce43", 00:15:40.900 "aliases": [ 00:15:40.900 "lvs/lvol" 00:15:40.900 ], 00:15:40.900 "product_name": "Logical Volume", 00:15:40.900 "block_size": 4096, 00:15:40.900 "num_blocks": 38912, 00:15:40.900 "uuid": "ea34fb52-a8f0-4e49-ac93-bb94b431ce43", 00:15:40.900 "assigned_rate_limits": { 00:15:40.900 "rw_ios_per_sec": 0, 00:15:40.900 "rw_mbytes_per_sec": 0, 00:15:40.900 "r_mbytes_per_sec": 0, 00:15:40.900 "w_mbytes_per_sec": 0 00:15:40.900 }, 00:15:40.900 "claimed": false, 00:15:40.900 "zoned": false, 00:15:40.900 "supported_io_types": { 00:15:40.900 "read": true, 00:15:40.900 "write": true, 00:15:40.900 "unmap": true, 00:15:40.900 "write_zeroes": true, 00:15:40.900 "flush": false, 00:15:40.900 "reset": true, 00:15:40.900 "compare": false, 
00:15:40.900 "compare_and_write": false, 00:15:40.900 "abort": false, 00:15:40.900 "nvme_admin": false, 00:15:40.900 "nvme_io": false 00:15:40.900 }, 00:15:40.900 "driver_specific": { 00:15:40.900 "lvol": { 00:15:40.900 "lvol_store_uuid": "e0318fa7-8fd1-4712-8d45-45d4950e24d3", 00:15:40.900 "base_bdev": "aio_bdev", 00:15:40.900 "thin_provision": false, 00:15:40.900 "num_allocated_clusters": 38, 00:15:40.900 "snapshot": false, 00:15:40.900 "clone": false, 00:15:40.900 "esnap_clone": false 00:15:40.900 } 00:15:40.900 } 00:15:40.900 } 00:15:40.900 ] 00:15:40.900 16:24:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # return 0 00:15:40.900 16:24:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e0318fa7-8fd1-4712-8d45-45d4950e24d3 00:15:40.900 16:24:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:15:41.160 16:24:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:15:41.160 16:24:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e0318fa7-8fd1-4712-8d45-45d4950e24d3 00:15:41.160 16:24:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:15:41.160 16:24:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:15:41.160 16:24:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete ea34fb52-a8f0-4e49-ac93-bb94b431ce43 00:15:41.420 16:24:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 
e0318fa7-8fd1-4712-8d45-45d4950e24d3 00:15:41.680 16:24:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:15:41.680 16:24:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:15:41.680 00:15:41.680 real 0m15.321s 00:15:41.680 user 0m15.097s 00:15:41.680 sys 0m1.242s 00:15:41.680 16:24:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1125 -- # xtrace_disable 00:15:41.680 16:24:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:15:41.680 ************************************ 00:15:41.680 END TEST lvs_grow_clean 00:15:41.680 ************************************ 00:15:41.680 16:24:08 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:15:41.680 16:24:08 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:15:41.680 16:24:08 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1106 -- # xtrace_disable 00:15:41.680 16:24:08 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:15:41.940 ************************************ 00:15:41.940 START TEST lvs_grow_dirty 00:15:41.940 ************************************ 00:15:41.940 16:24:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1124 -- # lvs_grow dirty 00:15:41.940 16:24:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:15:41.940 16:24:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:15:41.940 16:24:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:15:41.940 16:24:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local 
aio_init_size_mb=200 00:15:41.940 16:24:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:15:41.941 16:24:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:15:41.941 16:24:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:15:41.941 16:24:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:15:41.941 16:24:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:15:41.941 16:24:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:15:41.941 16:24:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:15:42.201 16:24:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=6405b83f-b82e-4898-8d85-2e7e38330e98 00:15:42.201 16:24:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6405b83f-b82e-4898-8d85-2e7e38330e98 00:15:42.201 16:24:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:15:42.462 16:24:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:15:42.462 16:24:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:15:42.462 
16:24:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 6405b83f-b82e-4898-8d85-2e7e38330e98 lvol 150 00:15:42.462 16:24:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=52fd524c-fe9b-448a-9144-a56e4f1d322c 00:15:42.462 16:24:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:15:42.462 16:24:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:15:42.724 [2024-06-07 16:24:09.367980] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:15:42.724 [2024-06-07 16:24:09.368036] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:15:42.724 true 00:15:42.724 16:24:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:15:42.724 16:24:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6405b83f-b82e-4898-8d85-2e7e38330e98 00:15:42.724 16:24:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:15:42.724 16:24:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:15:42.985 16:24:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode0 52fd524c-fe9b-448a-9144-a56e4f1d322c 00:15:42.985 16:24:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:15:43.245 [2024-06-07 16:24:09.969796] tcp.c: 982:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:43.245 16:24:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:43.505 16:24:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:15:43.505 16:24:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3042852 00:15:43.505 16:24:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:43.505 16:24:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3042852 /var/tmp/bdevperf.sock 00:15:43.505 16:24:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@830 -- # '[' -z 3042852 ']' 00:15:43.505 16:24:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:43.505 16:24:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local max_retries=100 00:15:43.505 16:24:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:15:43.505 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:43.505 16:24:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # xtrace_disable 00:15:43.505 16:24:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:15:43.505 [2024-06-07 16:24:10.169935] Starting SPDK v24.09-pre git sha1 5a57befde / DPDK 24.03.0 initialization... 00:15:43.505 [2024-06-07 16:24:10.169982] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3042852 ] 00:15:43.505 EAL: No free 2048 kB hugepages reported on node 1 00:15:43.505 [2024-06-07 16:24:10.244184] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:43.505 [2024-06-07 16:24:10.298496] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:15:44.447 16:24:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:15:44.447 16:24:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@863 -- # return 0 00:15:44.447 16:24:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:15:44.447 Nvme0n1 00:15:44.447 16:24:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:15:44.708 [ 00:15:44.708 { 00:15:44.708 "name": "Nvme0n1", 00:15:44.708 "aliases": [ 00:15:44.708 "52fd524c-fe9b-448a-9144-a56e4f1d322c" 00:15:44.708 ], 00:15:44.708 "product_name": "NVMe disk", 00:15:44.708 "block_size": 4096, 00:15:44.708 "num_blocks": 38912, 
00:15:44.708 "uuid": "52fd524c-fe9b-448a-9144-a56e4f1d322c", 00:15:44.708 "assigned_rate_limits": { 00:15:44.708 "rw_ios_per_sec": 0, 00:15:44.708 "rw_mbytes_per_sec": 0, 00:15:44.708 "r_mbytes_per_sec": 0, 00:15:44.708 "w_mbytes_per_sec": 0 00:15:44.708 }, 00:15:44.708 "claimed": false, 00:15:44.708 "zoned": false, 00:15:44.709 "supported_io_types": { 00:15:44.709 "read": true, 00:15:44.709 "write": true, 00:15:44.709 "unmap": true, 00:15:44.709 "write_zeroes": true, 00:15:44.709 "flush": true, 00:15:44.709 "reset": true, 00:15:44.709 "compare": true, 00:15:44.709 "compare_and_write": true, 00:15:44.709 "abort": true, 00:15:44.709 "nvme_admin": true, 00:15:44.709 "nvme_io": true 00:15:44.709 }, 00:15:44.709 "memory_domains": [ 00:15:44.709 { 00:15:44.709 "dma_device_id": "system", 00:15:44.709 "dma_device_type": 1 00:15:44.709 } 00:15:44.709 ], 00:15:44.709 "driver_specific": { 00:15:44.709 "nvme": [ 00:15:44.709 { 00:15:44.709 "trid": { 00:15:44.709 "trtype": "TCP", 00:15:44.709 "adrfam": "IPv4", 00:15:44.709 "traddr": "10.0.0.2", 00:15:44.709 "trsvcid": "4420", 00:15:44.709 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:15:44.709 }, 00:15:44.709 "ctrlr_data": { 00:15:44.709 "cntlid": 1, 00:15:44.709 "vendor_id": "0x8086", 00:15:44.709 "model_number": "SPDK bdev Controller", 00:15:44.709 "serial_number": "SPDK0", 00:15:44.709 "firmware_revision": "24.09", 00:15:44.709 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:15:44.709 "oacs": { 00:15:44.709 "security": 0, 00:15:44.709 "format": 0, 00:15:44.709 "firmware": 0, 00:15:44.709 "ns_manage": 0 00:15:44.709 }, 00:15:44.709 "multi_ctrlr": true, 00:15:44.709 "ana_reporting": false 00:15:44.709 }, 00:15:44.709 "vs": { 00:15:44.709 "nvme_version": "1.3" 00:15:44.709 }, 00:15:44.709 "ns_data": { 00:15:44.709 "id": 1, 00:15:44.709 "can_share": true 00:15:44.709 } 00:15:44.709 } 00:15:44.709 ], 00:15:44.709 "mp_policy": "active_passive" 00:15:44.709 } 00:15:44.709 } 00:15:44.709 ] 00:15:44.709 16:24:11 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:44.709 16:24:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3043377 00:15:44.709 16:24:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:15:44.709 Running I/O for 10 seconds... 00:15:45.652 Latency(us) 00:15:45.652 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:45.652 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:45.652 Nvme0n1 : 1.00 17484.00 68.30 0.00 0.00 0.00 0.00 0.00 00:15:45.652 =================================================================================================================== 00:15:45.652 Total : 17484.00 68.30 0.00 0.00 0.00 0.00 0.00 00:15:45.652 00:15:46.595 16:24:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 6405b83f-b82e-4898-8d85-2e7e38330e98 00:15:46.595 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:46.595 Nvme0n1 : 2.00 17618.00 68.82 0.00 0.00 0.00 0.00 0.00 00:15:46.595 =================================================================================================================== 00:15:46.595 Total : 17618.00 68.82 0.00 0.00 0.00 0.00 0.00 00:15:46.595 00:15:46.857 true 00:15:46.857 16:24:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6405b83f-b82e-4898-8d85-2e7e38330e98 00:15:46.857 16:24:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:15:46.857 16:24:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 
00:15:46.857 16:24:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:15:46.857 16:24:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 3043377 00:15:47.800 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:47.800 Nvme0n1 : 3.00 17665.33 69.01 0.00 0.00 0.00 0.00 0.00 00:15:47.800 =================================================================================================================== 00:15:47.800 Total : 17665.33 69.01 0.00 0.00 0.00 0.00 0.00 00:15:47.800 00:15:48.741 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:48.741 Nvme0n1 : 4.00 17705.00 69.16 0.00 0.00 0.00 0.00 0.00 00:15:48.741 =================================================================================================================== 00:15:48.741 Total : 17705.00 69.16 0.00 0.00 0.00 0.00 0.00 00:15:48.741 00:15:49.721 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:49.721 Nvme0n1 : 5.00 17732.00 69.27 0.00 0.00 0.00 0.00 0.00 00:15:49.721 =================================================================================================================== 00:15:49.721 Total : 17732.00 69.27 0.00 0.00 0.00 0.00 0.00 00:15:49.721 00:15:50.663 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:50.663 Nvme0n1 : 6.00 17754.00 69.35 0.00 0.00 0.00 0.00 0.00 00:15:50.663 =================================================================================================================== 00:15:50.663 Total : 17754.00 69.35 0.00 0.00 0.00 0.00 0.00 00:15:50.663 00:15:51.606 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:51.606 Nvme0n1 : 7.00 17772.00 69.42 0.00 0.00 0.00 0.00 0.00 00:15:51.606 =================================================================================================================== 00:15:51.606 Total : 17772.00 69.42 
0.00 0.00 0.00 0.00 0.00 00:15:51.606 00:15:52.991 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:52.991 Nvme0n1 : 8.00 17786.50 69.48 0.00 0.00 0.00 0.00 0.00 00:15:52.991 =================================================================================================================== 00:15:52.991 Total : 17786.50 69.48 0.00 0.00 0.00 0.00 0.00 00:15:52.991 00:15:53.932 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:53.932 Nvme0n1 : 9.00 17796.00 69.52 0.00 0.00 0.00 0.00 0.00 00:15:53.932 =================================================================================================================== 00:15:53.932 Total : 17796.00 69.52 0.00 0.00 0.00 0.00 0.00 00:15:53.932 00:15:54.874 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:54.874 Nvme0n1 : 10.00 17806.00 69.55 0.00 0.00 0.00 0.00 0.00 00:15:54.874 =================================================================================================================== 00:15:54.874 Total : 17806.00 69.55 0.00 0.00 0.00 0.00 0.00 00:15:54.874 00:15:54.874 00:15:54.874 Latency(us) 00:15:54.874 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:54.874 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:54.874 Nvme0n1 : 10.01 17806.38 69.56 0.00 0.00 7183.66 5980.16 16274.77 00:15:54.874 =================================================================================================================== 00:15:54.874 Total : 17806.38 69.56 0.00 0.00 7183.66 5980.16 16274.77 00:15:54.874 0 00:15:54.875 16:24:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3042852 00:15:54.875 16:24:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@949 -- # '[' -z 3042852 ']' 00:15:54.875 16:24:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # kill -0 3042852 00:15:54.875 16:24:21 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # uname 00:15:54.875 16:24:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:15:54.875 16:24:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3042852 00:15:54.875 16:24:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:15:54.875 16:24:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:15:54.875 16:24:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3042852' 00:15:54.875 killing process with pid 3042852 00:15:54.875 16:24:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@968 -- # kill 3042852 00:15:54.875 Received shutdown signal, test time was about 10.000000 seconds 00:15:54.875 00:15:54.875 Latency(us) 00:15:54.875 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:54.875 =================================================================================================================== 00:15:54.875 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:54.875 16:24:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # wait 3042852 00:15:54.875 16:24:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:55.136 16:24:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:15:55.397 16:24:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
6405b83f-b82e-4898-8d85-2e7e38330e98 00:15:55.397 16:24:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:15:55.397 16:24:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:15:55.397 16:24:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:15:55.397 16:24:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 3039027 00:15:55.397 16:24:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 3039027 00:15:55.397 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 3039027 Killed "${NVMF_APP[@]}" "$@" 00:15:55.397 16:24:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:15:55.397 16:24:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:15:55.397 16:24:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:55.397 16:24:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@723 -- # xtrace_disable 00:15:55.397 16:24:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:15:55.397 16:24:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=3045440 00:15:55.397 16:24:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # waitforlisten 3045440 00:15:55.397 16:24:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@830 -- # '[' -z 3045440 ']' 00:15:55.397 16:24:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:15:55.397 16:24:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:55.397 16:24:22 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local max_retries=100 00:15:55.397 16:24:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:55.397 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:55.397 16:24:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # xtrace_disable 00:15:55.397 16:24:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:15:55.658 [2024-06-07 16:24:22.270269] Starting SPDK v24.09-pre git sha1 5a57befde / DPDK 24.03.0 initialization... 00:15:55.658 [2024-06-07 16:24:22.270326] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:55.658 EAL: No free 2048 kB hugepages reported on node 1 00:15:55.658 [2024-06-07 16:24:22.339383] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:55.658 [2024-06-07 16:24:22.409168] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:55.658 [2024-06-07 16:24:22.409205] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:55.658 [2024-06-07 16:24:22.409213] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:55.658 [2024-06-07 16:24:22.409219] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:55.658 [2024-06-07 16:24:22.409225] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:55.658 [2024-06-07 16:24:22.409250] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:15:56.230 16:24:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:15:56.230 16:24:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@863 -- # return 0 00:15:56.230 16:24:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:56.230 16:24:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@729 -- # xtrace_disable 00:15:56.230 16:24:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:15:56.230 16:24:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:56.230 16:24:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:15:56.491 [2024-06-07 16:24:23.214364] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:15:56.491 [2024-06-07 16:24:23.214466] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:15:56.491 [2024-06-07 16:24:23.214496] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:15:56.491 16:24:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:15:56.491 16:24:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 52fd524c-fe9b-448a-9144-a56e4f1d322c 00:15:56.491 16:24:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_name=52fd524c-fe9b-448a-9144-a56e4f1d322c 00:15:56.491 16:24:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:15:56.491 16:24:23 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local i 00:15:56.491 16:24:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:15:56.491 16:24:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:15:56.491 16:24:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:15:56.751 16:24:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 52fd524c-fe9b-448a-9144-a56e4f1d322c -t 2000 00:15:56.752 [ 00:15:56.752 { 00:15:56.752 "name": "52fd524c-fe9b-448a-9144-a56e4f1d322c", 00:15:56.752 "aliases": [ 00:15:56.752 "lvs/lvol" 00:15:56.752 ], 00:15:56.752 "product_name": "Logical Volume", 00:15:56.752 "block_size": 4096, 00:15:56.752 "num_blocks": 38912, 00:15:56.752 "uuid": "52fd524c-fe9b-448a-9144-a56e4f1d322c", 00:15:56.752 "assigned_rate_limits": { 00:15:56.752 "rw_ios_per_sec": 0, 00:15:56.752 "rw_mbytes_per_sec": 0, 00:15:56.752 "r_mbytes_per_sec": 0, 00:15:56.752 "w_mbytes_per_sec": 0 00:15:56.752 }, 00:15:56.752 "claimed": false, 00:15:56.752 "zoned": false, 00:15:56.752 "supported_io_types": { 00:15:56.752 "read": true, 00:15:56.752 "write": true, 00:15:56.752 "unmap": true, 00:15:56.752 "write_zeroes": true, 00:15:56.752 "flush": false, 00:15:56.752 "reset": true, 00:15:56.752 "compare": false, 00:15:56.752 "compare_and_write": false, 00:15:56.752 "abort": false, 00:15:56.752 "nvme_admin": false, 00:15:56.752 "nvme_io": false 00:15:56.752 }, 00:15:56.752 "driver_specific": { 00:15:56.752 "lvol": { 00:15:56.752 "lvol_store_uuid": "6405b83f-b82e-4898-8d85-2e7e38330e98", 00:15:56.752 "base_bdev": "aio_bdev", 00:15:56.752 "thin_provision": false, 00:15:56.752 "num_allocated_clusters": 38, 00:15:56.752 "snapshot": false, 00:15:56.752 
"clone": false, 00:15:56.752 "esnap_clone": false 00:15:56.752 } 00:15:56.752 } 00:15:56.752 } 00:15:56.752 ] 00:15:56.752 16:24:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # return 0 00:15:56.752 16:24:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6405b83f-b82e-4898-8d85-2e7e38330e98 00:15:56.752 16:24:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:15:57.012 16:24:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:15:57.012 16:24:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6405b83f-b82e-4898-8d85-2e7e38330e98 00:15:57.012 16:24:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:15:57.012 16:24:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:15:57.012 16:24:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:15:57.273 [2024-06-07 16:24:23.978259] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:15:57.273 16:24:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6405b83f-b82e-4898-8d85-2e7e38330e98 00:15:57.273 16:24:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@649 -- # local es=0 00:15:57.273 16:24:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_lvol_get_lvstores -u 6405b83f-b82e-4898-8d85-2e7e38330e98 00:15:57.273 16:24:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@637 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:57.273 16:24:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:15:57.273 16:24:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@641 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:57.273 16:24:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:15:57.273 16:24:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@643 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:57.273 16:24:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:15:57.273 16:24:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@643 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:57.273 16:24:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@643 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:15:57.273 16:24:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6405b83f-b82e-4898-8d85-2e7e38330e98 00:15:57.534 request: 00:15:57.534 { 00:15:57.534 "uuid": "6405b83f-b82e-4898-8d85-2e7e38330e98", 00:15:57.534 "method": "bdev_lvol_get_lvstores", 00:15:57.534 "req_id": 1 00:15:57.534 } 00:15:57.534 Got JSON-RPC error response 00:15:57.534 response: 00:15:57.534 { 00:15:57.534 "code": -19, 00:15:57.534 "message": "No such device" 00:15:57.534 } 00:15:57.534 16:24:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # es=1 00:15:57.534 16:24:24 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:15:57.534 16:24:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:15:57.534 16:24:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:15:57.534 16:24:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:15:57.534 aio_bdev 00:15:57.534 16:24:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 52fd524c-fe9b-448a-9144-a56e4f1d322c 00:15:57.534 16:24:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_name=52fd524c-fe9b-448a-9144-a56e4f1d322c 00:15:57.534 16:24:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:15:57.535 16:24:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local i 00:15:57.535 16:24:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:15:57.535 16:24:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:15:57.535 16:24:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:15:57.796 16:24:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 52fd524c-fe9b-448a-9144-a56e4f1d322c -t 2000 00:15:57.796 [ 00:15:57.796 { 00:15:57.796 "name": "52fd524c-fe9b-448a-9144-a56e4f1d322c", 00:15:57.796 "aliases": [ 00:15:57.796 "lvs/lvol" 00:15:57.796 ], 00:15:57.796 "product_name": "Logical Volume", 00:15:57.796 "block_size": 4096, 
00:15:57.796 "num_blocks": 38912, 00:15:57.796 "uuid": "52fd524c-fe9b-448a-9144-a56e4f1d322c", 00:15:57.796 "assigned_rate_limits": { 00:15:57.796 "rw_ios_per_sec": 0, 00:15:57.796 "rw_mbytes_per_sec": 0, 00:15:57.796 "r_mbytes_per_sec": 0, 00:15:57.796 "w_mbytes_per_sec": 0 00:15:57.796 }, 00:15:57.796 "claimed": false, 00:15:57.796 "zoned": false, 00:15:57.796 "supported_io_types": { 00:15:57.796 "read": true, 00:15:57.796 "write": true, 00:15:57.796 "unmap": true, 00:15:57.796 "write_zeroes": true, 00:15:57.796 "flush": false, 00:15:57.796 "reset": true, 00:15:57.796 "compare": false, 00:15:57.796 "compare_and_write": false, 00:15:57.796 "abort": false, 00:15:57.796 "nvme_admin": false, 00:15:57.796 "nvme_io": false 00:15:57.796 }, 00:15:57.796 "driver_specific": { 00:15:57.796 "lvol": { 00:15:57.796 "lvol_store_uuid": "6405b83f-b82e-4898-8d85-2e7e38330e98", 00:15:57.796 "base_bdev": "aio_bdev", 00:15:57.796 "thin_provision": false, 00:15:57.796 "num_allocated_clusters": 38, 00:15:57.796 "snapshot": false, 00:15:57.796 "clone": false, 00:15:57.796 "esnap_clone": false 00:15:57.796 } 00:15:57.796 } 00:15:57.796 } 00:15:57.796 ] 00:15:57.796 16:24:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # return 0 00:15:57.796 16:24:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6405b83f-b82e-4898-8d85-2e7e38330e98 00:15:57.796 16:24:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:15:58.058 16:24:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:15:58.058 16:24:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6405b83f-b82e-4898-8d85-2e7e38330e98 00:15:58.058 16:24:24 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:15:58.318 16:24:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:15:58.318 16:24:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 52fd524c-fe9b-448a-9144-a56e4f1d322c 00:15:58.318 16:24:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 6405b83f-b82e-4898-8d85-2e7e38330e98 00:15:58.580 16:24:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:15:58.580 16:24:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:15:58.841 00:15:58.841 real 0m16.881s 00:15:58.841 user 0m44.214s 00:15:58.841 sys 0m2.906s 00:15:58.841 16:24:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1125 -- # xtrace_disable 00:15:58.841 16:24:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:15:58.841 ************************************ 00:15:58.841 END TEST lvs_grow_dirty 00:15:58.841 ************************************ 00:15:58.841 16:24:25 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:15:58.841 16:24:25 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@807 -- # type=--id 00:15:58.841 16:24:25 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # id=0 00:15:58.841 16:24:25 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@809 -- # '[' --id = --pid ']' 00:15:58.841 16:24:25 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:15:58.842 
16:24:25 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # shm_files=nvmf_trace.0 00:15:58.842 16:24:25 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@815 -- # [[ -z nvmf_trace.0 ]] 00:15:58.842 16:24:25 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@819 -- # for n in $shm_files 00:15:58.842 16:24:25 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:15:58.842 nvmf_trace.0 00:15:58.842 16:24:25 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@822 -- # return 0 00:15:58.842 16:24:25 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:15:58.842 16:24:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:58.842 16:24:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:15:58.842 16:24:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:58.842 16:24:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:15:58.842 16:24:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:58.842 16:24:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:58.842 rmmod nvme_tcp 00:15:58.842 rmmod nvme_fabrics 00:15:58.842 rmmod nvme_keyring 00:15:58.842 16:24:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:58.842 16:24:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:15:58.842 16:24:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:15:58.842 16:24:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 3045440 ']' 00:15:58.842 16:24:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 3045440 00:15:58.842 16:24:25 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@949 -- # '[' -z 3045440 ']' 00:15:58.842 16:24:25 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # kill -0 3045440 00:15:58.842 16:24:25 
nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # uname 00:15:58.842 16:24:25 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:15:58.842 16:24:25 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3045440 00:15:58.842 16:24:25 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:15:58.842 16:24:25 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:15:58.842 16:24:25 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3045440' 00:15:58.842 killing process with pid 3045440 00:15:58.842 16:24:25 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@968 -- # kill 3045440 00:15:58.842 16:24:25 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # wait 3045440 00:15:59.102 16:24:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:59.102 16:24:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:59.102 16:24:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:59.102 16:24:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:59.102 16:24:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:59.102 16:24:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:59.102 16:24:25 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:59.102 16:24:25 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:01.650 16:24:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:01.650 00:16:01.650 real 0m43.156s 00:16:01.650 user 1m5.186s 00:16:01.650 sys 0m10.000s 00:16:01.650 16:24:27 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1125 -- # xtrace_disable 00:16:01.650 16:24:27 
nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:16:01.650 ************************************ 00:16:01.650 END TEST nvmf_lvs_grow 00:16:01.650 ************************************ 00:16:01.650 16:24:27 nvmf_tcp -- nvmf/nvmf.sh@50 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:16:01.650 16:24:27 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:16:01.650 16:24:27 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:16:01.650 16:24:27 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:01.650 ************************************ 00:16:01.650 START TEST nvmf_bdev_io_wait 00:16:01.650 ************************************ 00:16:01.650 16:24:27 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:16:01.650 * Looking for test storage... 
00:16:01.650 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:01.650 16:24:28 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:01.650 16:24:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:16:01.650 16:24:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:01.650 16:24:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:01.650 16:24:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:01.650 16:24:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:01.650 16:24:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:01.650 16:24:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:01.650 16:24:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:01.650 16:24:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:01.650 16:24:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:01.650 16:24:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:01.650 16:24:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:01.650 16:24:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:01.650 16:24:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:01.650 16:24:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:01.650 16:24:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:01.650 16:24:28 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:01.650 16:24:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:01.650 16:24:28 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:01.650 16:24:28 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:01.650 16:24:28 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:01.650 16:24:28 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:01.650 16:24:28 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:01.650 16:24:28 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:01.650 16:24:28 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:16:01.650 16:24:28 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:01.651 16:24:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:16:01.651 16:24:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:01.651 16:24:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:01.651 16:24:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:01.651 16:24:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:01.651 16:24:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:01.651 16:24:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:01.651 16:24:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 
0 -eq 1 ']' 00:16:01.651 16:24:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:01.651 16:24:28 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:01.651 16:24:28 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:01.651 16:24:28 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:16:01.651 16:24:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:01.651 16:24:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:01.651 16:24:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:01.651 16:24:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:01.651 16:24:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:01.651 16:24:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:01.651 16:24:28 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:01.651 16:24:28 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:01.651 16:24:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:01.651 16:24:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:01.651 16:24:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@285 -- # xtrace_disable 00:16:01.651 16:24:28 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:16:08.240 16:24:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:08.240 16:24:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # pci_devs=() 00:16:08.240 16:24:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:08.240 16:24:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # 
pci_net_devs=() 00:16:08.240 16:24:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:08.240 16:24:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:08.240 16:24:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:08.240 16:24:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # net_devs=() 00:16:08.240 16:24:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:08.240 16:24:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # e810=() 00:16:08.240 16:24:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # local -ga e810 00:16:08.240 16:24:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # x722=() 00:16:08.240 16:24:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # local -ga x722 00:16:08.240 16:24:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # mlx=() 00:16:08.240 16:24:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # local -ga mlx 00:16:08.240 16:24:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:08.240 16:24:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:08.240 16:24:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:08.240 16:24:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:08.240 16:24:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:08.240 16:24:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:08.240 16:24:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:08.240 16:24:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:08.240 16:24:34 
nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:08.240 16:24:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:08.240 16:24:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:08.240 16:24:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:08.240 16:24:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:08.240 16:24:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:08.240 16:24:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:08.240 16:24:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:08.240 16:24:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:08.240 16:24:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:08.240 16:24:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:16:08.240 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:16:08.240 16:24:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:08.240 16:24:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:08.240 16:24:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:08.240 16:24:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:08.240 16:24:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:08.240 16:24:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:08.240 16:24:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:16:08.240 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:16:08.240 16:24:34 
nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:08.240 16:24:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:08.240 16:24:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:08.240 16:24:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:08.240 16:24:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:08.240 16:24:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:08.240 16:24:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:08.240 16:24:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:08.240 16:24:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:08.240 16:24:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:08.240 16:24:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:08.240 16:24:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:08.240 16:24:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:08.240 16:24:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:08.240 16:24:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:08.240 16:24:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:16:08.240 Found net devices under 0000:4b:00.0: cvl_0_0 00:16:08.240 16:24:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:08.240 16:24:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:08.240 16:24:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:08.240 16:24:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:08.240 16:24:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:08.240 16:24:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:08.240 16:24:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:08.240 16:24:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:08.240 16:24:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:16:08.240 Found net devices under 0000:4b:00.1: cvl_0_1 00:16:08.240 16:24:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:08.240 16:24:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:08.240 16:24:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # is_hw=yes 00:16:08.240 16:24:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:08.240 16:24:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:08.240 16:24:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:08.240 16:24:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:08.240 16:24:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:08.240 16:24:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:08.240 16:24:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:08.240 16:24:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:08.241 16:24:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:08.241 16:24:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- 
# NVMF_SECOND_TARGET_IP= 00:16:08.241 16:24:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:08.241 16:24:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:08.241 16:24:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:08.241 16:24:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:08.241 16:24:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:08.241 16:24:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:08.502 16:24:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:08.502 16:24:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:08.502 16:24:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:08.502 16:24:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:08.502 16:24:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:08.502 16:24:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:08.502 16:24:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:08.502 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:16:08.502 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.715 ms 00:16:08.502 00:16:08.502 --- 10.0.0.2 ping statistics --- 00:16:08.502 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:08.502 rtt min/avg/max/mdev = 0.715/0.715/0.715/0.000 ms 00:16:08.502 16:24:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:08.502 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:08.502 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.321 ms 00:16:08.502 00:16:08.502 --- 10.0.0.1 ping statistics --- 00:16:08.502 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:08.502 rtt min/avg/max/mdev = 0.321/0.321/0.321/0.000 ms 00:16:08.502 16:24:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:08.502 16:24:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # return 0 00:16:08.502 16:24:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:08.502 16:24:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:08.502 16:24:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:08.502 16:24:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:08.502 16:24:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:08.503 16:24:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:08.503 16:24:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:08.503 16:24:35 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:16:08.503 16:24:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:08.503 16:24:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@723 -- # xtrace_disable 00:16:08.503 16:24:35 nvmf_tcp.nvmf_bdev_io_wait -- 
common/autotest_common.sh@10 -- # set +x 00:16:08.503 16:24:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=3050394 00:16:08.503 16:24:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 3050394 00:16:08.503 16:24:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:16:08.503 16:24:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@830 -- # '[' -z 3050394 ']' 00:16:08.503 16:24:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:08.503 16:24:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # local max_retries=100 00:16:08.503 16:24:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:08.503 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:08.503 16:24:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # xtrace_disable 00:16:08.503 16:24:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:16:08.764 [2024-06-07 16:24:35.378124] Starting SPDK v24.09-pre git sha1 5a57befde / DPDK 24.03.0 initialization... 00:16:08.764 [2024-06-07 16:24:35.378187] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:08.764 EAL: No free 2048 kB hugepages reported on node 1 00:16:08.764 [2024-06-07 16:24:35.450135] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:08.764 [2024-06-07 16:24:35.524992] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:16:08.764 [2024-06-07 16:24:35.525034] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:08.764 [2024-06-07 16:24:35.525042] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:08.764 [2024-06-07 16:24:35.525049] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:08.764 [2024-06-07 16:24:35.525055] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:08.764 [2024-06-07 16:24:35.525193] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:16:08.764 [2024-06-07 16:24:35.525308] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 2 00:16:08.764 [2024-06-07 16:24:35.525464] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:16:08.764 [2024-06-07 16:24:35.525464] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 3 00:16:09.364 16:24:36 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:16:09.364 16:24:36 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@863 -- # return 0 00:16:09.364 16:24:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:09.364 16:24:36 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@729 -- # xtrace_disable 00:16:09.364 16:24:36 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:16:09.364 16:24:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:09.364 16:24:36 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:16:09.364 16:24:36 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:09.364 16:24:36 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:16:09.364 16:24:36 nvmf_tcp.nvmf_bdev_io_wait -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:09.364 16:24:36 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:16:09.364 16:24:36 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:09.364 16:24:36 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:16:09.625 16:24:36 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:09.625 16:24:36 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:09.625 16:24:36 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:09.625 16:24:36 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:16:09.625 [2024-06-07 16:24:36.261453] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:09.625 16:24:36 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:09.625 16:24:36 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:09.625 16:24:36 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:09.625 16:24:36 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:16:09.625 Malloc0 00:16:09.625 16:24:36 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:09.625 16:24:36 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:09.625 16:24:36 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:09.625 16:24:36 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:16:09.625 16:24:36 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:09.625 16:24:36 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:09.625 16:24:36 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:09.625 16:24:36 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:16:09.625 16:24:36 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:09.625 16:24:36 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:09.625 16:24:36 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:09.625 16:24:36 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:16:09.625 [2024-06-07 16:24:36.329805] tcp.c: 982:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:09.625 16:24:36 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:09.625 16:24:36 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=3050523 00:16:09.625 16:24:36 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=3050525 00:16:09.625 16:24:36 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:16:09.625 16:24:36 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:16:09.625 16:24:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:16:09.625 16:24:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:16:09.625 16:24:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:16:09.625 16:24:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:16:09.625 { 00:16:09.625 "params": { 00:16:09.625 "name": "Nvme$subsystem", 00:16:09.625 "trtype": "$TEST_TRANSPORT", 
00:16:09.625 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:09.625 "adrfam": "ipv4", 00:16:09.625 "trsvcid": "$NVMF_PORT", 00:16:09.625 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:09.625 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:09.625 "hdgst": ${hdgst:-false}, 00:16:09.625 "ddgst": ${ddgst:-false} 00:16:09.625 }, 00:16:09.625 "method": "bdev_nvme_attach_controller" 00:16:09.625 } 00:16:09.625 EOF 00:16:09.625 )") 00:16:09.625 16:24:36 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=3050527 00:16:09.625 16:24:36 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:16:09.625 16:24:36 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:16:09.625 16:24:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:16:09.625 16:24:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:16:09.625 16:24:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:16:09.625 16:24:36 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=3050530 00:16:09.625 16:24:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:16:09.625 { 00:16:09.625 "params": { 00:16:09.625 "name": "Nvme$subsystem", 00:16:09.625 "trtype": "$TEST_TRANSPORT", 00:16:09.625 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:09.625 "adrfam": "ipv4", 00:16:09.625 "trsvcid": "$NVMF_PORT", 00:16:09.625 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:09.625 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:09.625 "hdgst": ${hdgst:-false}, 00:16:09.625 "ddgst": ${ddgst:-false} 00:16:09.625 }, 00:16:09.625 "method": "bdev_nvme_attach_controller" 00:16:09.625 } 00:16:09.625 EOF 00:16:09.625 )") 00:16:09.625 16:24:36 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:16:09.625 16:24:36 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:16:09.625 16:24:36 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:16:09.625 16:24:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:16:09.625 16:24:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:16:09.625 16:24:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:16:09.625 16:24:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:16:09.625 16:24:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:16:09.625 { 00:16:09.625 "params": { 00:16:09.625 "name": "Nvme$subsystem", 00:16:09.625 "trtype": "$TEST_TRANSPORT", 00:16:09.625 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:09.625 "adrfam": "ipv4", 00:16:09.625 "trsvcid": "$NVMF_PORT", 00:16:09.625 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:09.625 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:09.625 "hdgst": ${hdgst:-false}, 00:16:09.625 "ddgst": ${ddgst:-false} 00:16:09.625 }, 00:16:09.625 "method": "bdev_nvme_attach_controller" 00:16:09.625 } 00:16:09.625 EOF 00:16:09.625 )") 00:16:09.625 16:24:36 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:16:09.625 16:24:36 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:16:09.625 16:24:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:16:09.625 16:24:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:16:09.625 16:24:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:16:09.625 16:24:36 nvmf_tcp.nvmf_bdev_io_wait 
-- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:16:09.625 16:24:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:16:09.625 { 00:16:09.625 "params": { 00:16:09.625 "name": "Nvme$subsystem", 00:16:09.625 "trtype": "$TEST_TRANSPORT", 00:16:09.625 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:09.625 "adrfam": "ipv4", 00:16:09.625 "trsvcid": "$NVMF_PORT", 00:16:09.625 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:09.625 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:09.625 "hdgst": ${hdgst:-false}, 00:16:09.625 "ddgst": ${ddgst:-false} 00:16:09.625 }, 00:16:09.625 "method": "bdev_nvme_attach_controller" 00:16:09.625 } 00:16:09.625 EOF 00:16:09.625 )") 00:16:09.625 16:24:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:16:09.625 16:24:36 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 3050523 00:16:09.625 16:24:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:16:09.625 16:24:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:16:09.625 16:24:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:16:09.625 16:24:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:16:09.625 16:24:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:16:09.625 16:24:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:16:09.625 "params": { 00:16:09.625 "name": "Nvme1", 00:16:09.625 "trtype": "tcp", 00:16:09.625 "traddr": "10.0.0.2", 00:16:09.625 "adrfam": "ipv4", 00:16:09.625 "trsvcid": "4420", 00:16:09.625 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:09.625 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:09.625 "hdgst": false, 00:16:09.625 "ddgst": false 00:16:09.625 }, 00:16:09.625 "method": "bdev_nvme_attach_controller" 00:16:09.625 }' 00:16:09.625 16:24:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 
00:16:09.625 16:24:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:16:09.625 16:24:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:16:09.625 "params": { 00:16:09.625 "name": "Nvme1", 00:16:09.625 "trtype": "tcp", 00:16:09.625 "traddr": "10.0.0.2", 00:16:09.625 "adrfam": "ipv4", 00:16:09.625 "trsvcid": "4420", 00:16:09.625 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:09.625 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:09.625 "hdgst": false, 00:16:09.625 "ddgst": false 00:16:09.625 }, 00:16:09.625 "method": "bdev_nvme_attach_controller" 00:16:09.625 }' 00:16:09.625 16:24:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:16:09.625 16:24:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:16:09.625 "params": { 00:16:09.625 "name": "Nvme1", 00:16:09.625 "trtype": "tcp", 00:16:09.625 "traddr": "10.0.0.2", 00:16:09.625 "adrfam": "ipv4", 00:16:09.625 "trsvcid": "4420", 00:16:09.625 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:09.625 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:09.626 "hdgst": false, 00:16:09.626 "ddgst": false 00:16:09.626 }, 00:16:09.626 "method": "bdev_nvme_attach_controller" 00:16:09.626 }' 00:16:09.626 16:24:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:16:09.626 16:24:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:16:09.626 "params": { 00:16:09.626 "name": "Nvme1", 00:16:09.626 "trtype": "tcp", 00:16:09.626 "traddr": "10.0.0.2", 00:16:09.626 "adrfam": "ipv4", 00:16:09.626 "trsvcid": "4420", 00:16:09.626 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:09.626 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:09.626 "hdgst": false, 00:16:09.626 "ddgst": false 00:16:09.626 }, 00:16:09.626 "method": "bdev_nvme_attach_controller" 00:16:09.626 }' 00:16:09.626 [2024-06-07 16:24:36.383607] Starting SPDK v24.09-pre git sha1 5a57befde / DPDK 24.03.0 initialization... 
00:16:09.626 [2024-06-07 16:24:36.383661] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:16:09.626 [2024-06-07 16:24:36.385492] Starting SPDK v24.09-pre git sha1 5a57befde / DPDK 24.03.0 initialization... 00:16:09.626 [2024-06-07 16:24:36.385548] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:16:09.626 [2024-06-07 16:24:36.385541] Starting SPDK v24.09-pre git sha1 5a57befde / DPDK 24.03.0 initialization... 00:16:09.626 [2024-06-07 16:24:36.385585] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:16:09.626 [2024-06-07 16:24:36.386952] Starting SPDK v24.09-pre git sha1 5a57befde / DPDK 24.03.0 initialization... 
00:16:09.626 [2024-06-07 16:24:36.386996] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:16:09.626 EAL: No free 2048 kB hugepages reported on node 1 00:16:09.886 EAL: No free 2048 kB hugepages reported on node 1 00:16:09.886 EAL: No free 2048 kB hugepages reported on node 1 00:16:09.886 [2024-06-07 16:24:36.526763] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:09.886 [2024-06-07 16:24:36.568999] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:09.886 EAL: No free 2048 kB hugepages reported on node 1 00:16:09.886 [2024-06-07 16:24:36.578666] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 4 00:16:09.886 [2024-06-07 16:24:36.617939] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:09.886 [2024-06-07 16:24:36.619032] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 6 00:16:09.886 [2024-06-07 16:24:36.670258] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 7 00:16:09.886 [2024-06-07 16:24:36.677190] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:09.886 [2024-06-07 16:24:36.727781] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 5 00:16:10.147 Running I/O for 1 seconds... 00:16:10.147 Running I/O for 1 seconds... 00:16:10.147 Running I/O for 1 seconds... 00:16:10.147 Running I/O for 1 seconds... 
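The trace above shows `nvmf/common.sh`'s `gen_nvmf_target_json` building one `bdev_nvme_attach_controller` JSON block per subsystem via a heredoc, then joining the blocks with `IFS=,` before handing them to bdevperf on `/dev/fd/63`. A minimal standalone sketch of that pattern (defaults here are illustrative, copied from the config printed in this log: tcp / 10.0.0.2 / 4420; this is not the full helper from the SPDK repo):

```shell
#!/usr/bin/env bash
# Sketch of the gen_nvmf_target_json pattern traced in nvmf/common.sh:
# emit one JSON "params" block per subsystem argument, joined with commas.
# All default values below mirror the printf output seen in this log.
TEST_TRANSPORT=${TEST_TRANSPORT:-tcp}
NVMF_FIRST_TARGET_IP=${NVMF_FIRST_TARGET_IP:-10.0.0.2}
NVMF_PORT=${NVMF_PORT:-4420}

gen_nvmf_target_json() {
    local subsystem
    local -a config=()
    # "${@:-1}" defaults to subsystem 1 when no arguments are given.
    for subsystem in "${@:-1}"; do
        config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)")
    done
    # Join the per-subsystem blocks with commas, as the traced IFS=, step does.
    local IFS=,
    printf '%s\n' "${config[*]}"
}

gen_nvmf_target_json 1
```

In the actual test the result is piped through `jq .` and fed to each bdevperf instance as its `--json` config, which is why four near-identical config dumps appear in the trace, one per worker.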
00:16:11.088 00:16:11.088 Latency(us) 00:16:11.088 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:11.088 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:16:11.088 Nvme1n1 : 1.01 13189.41 51.52 0.00 0.00 9674.59 5352.11 16493.23 00:16:11.088 =================================================================================================================== 00:16:11.088 Total : 13189.41 51.52 0.00 0.00 9674.59 5352.11 16493.23 00:16:11.088 00:16:11.088 Latency(us) 00:16:11.088 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:11.088 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:16:11.088 Nvme1n1 : 1.00 187255.96 731.47 0.00 0.00 680.83 271.36 761.17 00:16:11.088 =================================================================================================================== 00:16:11.088 Total : 187255.96 731.47 0.00 0.00 680.83 271.36 761.17 00:16:11.088 00:16:11.088 Latency(us) 00:16:11.088 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:11.088 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:16:11.088 Nvme1n1 : 1.00 18810.59 73.48 0.00 0.00 6789.07 3440.64 19005.44 00:16:11.088 =================================================================================================================== 00:16:11.088 Total : 18810.59 73.48 0.00 0.00 6789.07 3440.64 19005.44 00:16:11.088 00:16:11.088 Latency(us) 00:16:11.088 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:11.088 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:16:11.088 Nvme1n1 : 1.01 11944.14 46.66 0.00 0.00 10679.03 6116.69 23592.96 00:16:11.088 =================================================================================================================== 00:16:11.088 Total : 11944.14 46.66 0.00 0.00 10679.03 6116.69 23592.96 00:16:11.088 16:24:37 nvmf_tcp.nvmf_bdev_io_wait -- 
target/bdev_io_wait.sh@38 -- # wait 3050525 00:16:11.349 16:24:38 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 3050527 00:16:11.349 16:24:38 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 3050530 00:16:11.349 16:24:38 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:11.349 16:24:38 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:11.349 16:24:38 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:16:11.349 16:24:38 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:11.349 16:24:38 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:16:11.349 16:24:38 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:16:11.349 16:24:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:11.349 16:24:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 00:16:11.349 16:24:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:11.349 16:24:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e 00:16:11.349 16:24:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:11.349 16:24:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:11.349 rmmod nvme_tcp 00:16:11.349 rmmod nvme_fabrics 00:16:11.349 rmmod nvme_keyring 00:16:11.349 16:24:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:11.349 16:24:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:16:11.349 16:24:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:16:11.349 16:24:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 3050394 ']' 00:16:11.349 16:24:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 3050394 00:16:11.349 16:24:38 
nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@949 -- # '[' -z 3050394 ']' 00:16:11.349 16:24:38 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # kill -0 3050394 00:16:11.349 16:24:38 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # uname 00:16:11.349 16:24:38 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:16:11.349 16:24:38 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3050394 00:16:11.349 16:24:38 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:16:11.349 16:24:38 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:16:11.349 16:24:38 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3050394' 00:16:11.349 killing process with pid 3050394 00:16:11.349 16:24:38 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@968 -- # kill 3050394 00:16:11.349 16:24:38 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # wait 3050394 00:16:11.615 16:24:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:11.615 16:24:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:11.615 16:24:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:11.615 16:24:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:11.615 16:24:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:11.615 16:24:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:11.615 16:24:38 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:11.615 16:24:38 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:13.529 16:24:40 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:13.529 00:16:13.529 real 0m12.411s 00:16:13.529 user 0m18.248s 00:16:13.529 sys 0m6.752s 00:16:13.529 16:24:40 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1125 -- # xtrace_disable 00:16:13.529 16:24:40 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:16:13.529 ************************************ 00:16:13.529 END TEST nvmf_bdev_io_wait 00:16:13.529 ************************************ 00:16:13.790 16:24:40 nvmf_tcp -- nvmf/nvmf.sh@51 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:16:13.790 16:24:40 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:16:13.790 16:24:40 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:16:13.790 16:24:40 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:13.790 ************************************ 00:16:13.790 START TEST nvmf_queue_depth 00:16:13.790 ************************************ 00:16:13.791 16:24:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:16:13.791 * Looking for test storage... 
00:16:13.791 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:13.791 16:24:40 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:13.791 16:24:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:16:13.791 16:24:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:13.791 16:24:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:13.791 16:24:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:13.791 16:24:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:13.791 16:24:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:13.791 16:24:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:13.791 16:24:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:13.791 16:24:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:13.791 16:24:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:13.791 16:24:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:13.791 16:24:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:13.791 16:24:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:13.791 16:24:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:13.791 16:24:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:13.791 16:24:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:13.791 16:24:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:13.791 16:24:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:13.791 16:24:40 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:13.791 16:24:40 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:13.791 16:24:40 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:13.791 16:24:40 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:13.791 16:24:40 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:13.791 16:24:40 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:13.791 16:24:40 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:16:13.791 16:24:40 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:13.791 16:24:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:16:13.791 16:24:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:13.791 16:24:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:13.791 16:24:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:13.791 16:24:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:13.791 16:24:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:13.791 16:24:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:13.791 16:24:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 
']' 00:16:13.791 16:24:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:13.791 16:24:40 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:16:13.791 16:24:40 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:16:13.791 16:24:40 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:13.791 16:24:40 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:16:13.791 16:24:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:13.791 16:24:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:13.791 16:24:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:13.791 16:24:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:13.791 16:24:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:13.791 16:24:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:13.791 16:24:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:13.791 16:24:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:13.791 16:24:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:13.791 16:24:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:13.791 16:24:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@285 -- # xtrace_disable 00:16:13.791 16:24:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:16:21.936 16:24:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:21.936 16:24:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # pci_devs=() 00:16:21.936 16:24:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # local 
-a pci_devs 00:16:21.936 16:24:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:21.936 16:24:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:21.936 16:24:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:21.936 16:24:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:21.936 16:24:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # net_devs=() 00:16:21.936 16:24:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:21.936 16:24:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # e810=() 00:16:21.936 16:24:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # local -ga e810 00:16:21.936 16:24:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # x722=() 00:16:21.936 16:24:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # local -ga x722 00:16:21.936 16:24:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # mlx=() 00:16:21.936 16:24:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # local -ga mlx 00:16:21.936 16:24:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:21.936 16:24:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:21.936 16:24:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:21.936 16:24:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:21.936 16:24:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:21.936 16:24:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:21.936 16:24:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:21.936 16:24:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@314 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:21.936 16:24:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:21.936 16:24:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:21.936 16:24:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:21.936 16:24:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:21.936 16:24:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:21.936 16:24:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:21.936 16:24:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:21.936 16:24:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:21.936 16:24:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:21.936 16:24:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:21.936 16:24:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:16:21.936 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:16:21.936 16:24:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:21.936 16:24:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:21.936 16:24:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:21.936 16:24:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:21.936 16:24:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:21.936 16:24:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:21.936 16:24:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:16:21.936 Found 0000:4b:00.1 (0x8086 - 
0x159b) 00:16:21.936 16:24:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:21.936 16:24:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:21.936 16:24:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:21.936 16:24:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:21.936 16:24:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:21.936 16:24:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:21.936 16:24:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:21.936 16:24:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:21.936 16:24:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:21.936 16:24:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:21.936 16:24:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:21.936 16:24:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:21.936 16:24:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:21.936 16:24:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:21.936 16:24:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:21.936 16:24:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:16:21.936 Found net devices under 0000:4b:00.0: cvl_0_0 00:16:21.936 16:24:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:21.936 16:24:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:21.936 16:24:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:21.936 16:24:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:21.936 16:24:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:21.936 16:24:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:21.936 16:24:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:21.936 16:24:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:21.936 16:24:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:16:21.936 Found net devices under 0000:4b:00.1: cvl_0_1 00:16:21.937 16:24:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:21.937 16:24:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:21.937 16:24:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # is_hw=yes 00:16:21.937 16:24:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:21.937 16:24:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:21.937 16:24:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:21.937 16:24:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:21.937 16:24:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:21.937 16:24:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:21.937 16:24:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:21.937 16:24:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:21.937 16:24:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:21.937 16:24:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@240 -- # 
NVMF_SECOND_TARGET_IP= 00:16:21.937 16:24:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:21.937 16:24:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:21.937 16:24:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:21.937 16:24:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:21.937 16:24:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:21.937 16:24:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:21.937 16:24:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:21.937 16:24:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:21.937 16:24:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:21.937 16:24:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:21.937 16:24:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:21.937 16:24:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:21.937 16:24:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:21.937 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:16:21.937 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.656 ms 00:16:21.937 00:16:21.937 --- 10.0.0.2 ping statistics --- 00:16:21.937 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:21.937 rtt min/avg/max/mdev = 0.656/0.656/0.656/0.000 ms 00:16:21.937 16:24:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:21.937 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:21.937 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.241 ms 00:16:21.937 00:16:21.937 --- 10.0.0.1 ping statistics --- 00:16:21.937 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:21.937 rtt min/avg/max/mdev = 0.241/0.241/0.241/0.000 ms 00:16:21.937 16:24:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:21.937 16:24:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@422 -- # return 0 00:16:21.937 16:24:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:21.937 16:24:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:21.937 16:24:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:21.937 16:24:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:21.937 16:24:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:21.937 16:24:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:21.937 16:24:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:21.937 16:24:47 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:16:21.937 16:24:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:21.937 16:24:47 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@723 -- # xtrace_disable 00:16:21.937 16:24:47 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # 
set +x 00:16:21.937 16:24:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=3055201 00:16:21.937 16:24:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 3055201 00:16:21.937 16:24:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:16:21.937 16:24:47 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@830 -- # '[' -z 3055201 ']' 00:16:21.937 16:24:47 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:21.937 16:24:47 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local max_retries=100 00:16:21.937 16:24:47 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:21.937 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:21.937 16:24:47 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@839 -- # xtrace_disable 00:16:21.937 16:24:47 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:16:21.937 [2024-06-07 16:24:47.892938] Starting SPDK v24.09-pre git sha1 5a57befde / DPDK 24.03.0 initialization... 00:16:21.937 [2024-06-07 16:24:47.893001] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:21.937 EAL: No free 2048 kB hugepages reported on node 1 00:16:21.937 [2024-06-07 16:24:47.981019] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:21.937 [2024-06-07 16:24:48.074442] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:16:21.937 [2024-06-07 16:24:48.074501] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:21.937 [2024-06-07 16:24:48.074509] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:21.937 [2024-06-07 16:24:48.074516] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:21.937 [2024-06-07 16:24:48.074522] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:21.937 [2024-06-07 16:24:48.074547] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:16:21.937 16:24:48 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:16:21.937 16:24:48 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@863 -- # return 0 00:16:21.937 16:24:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:21.937 16:24:48 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@729 -- # xtrace_disable 00:16:21.937 16:24:48 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:16:21.937 16:24:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:21.937 16:24:48 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:21.937 16:24:48 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:21.937 16:24:48 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:16:21.937 [2024-06-07 16:24:48.721924] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:21.937 16:24:48 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:21.937 16:24:48 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:21.937 16:24:48 
nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:21.937 16:24:48 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:16:21.937 Malloc0 00:16:21.937 16:24:48 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:21.937 16:24:48 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:21.937 16:24:48 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:21.937 16:24:48 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:16:21.937 16:24:48 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:21.937 16:24:48 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:21.937 16:24:48 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:21.937 16:24:48 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:16:21.937 16:24:48 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:21.937 16:24:48 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:21.937 16:24:48 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:21.937 16:24:48 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:16:22.199 [2024-06-07 16:24:48.793745] tcp.c: 982:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:22.199 16:24:48 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:22.199 16:24:48 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=3055244 00:16:22.199 16:24:48 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id 
$NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:22.199 16:24:48 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:16:22.199 16:24:48 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 3055244 /var/tmp/bdevperf.sock 00:16:22.199 16:24:48 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@830 -- # '[' -z 3055244 ']' 00:16:22.199 16:24:48 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:22.199 16:24:48 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local max_retries=100 00:16:22.199 16:24:48 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:22.199 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:22.199 16:24:48 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@839 -- # xtrace_disable 00:16:22.199 16:24:48 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:16:22.199 [2024-06-07 16:24:48.848276] Starting SPDK v24.09-pre git sha1 5a57befde / DPDK 24.03.0 initialization... 
00:16:22.199 [2024-06-07 16:24:48.848328] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3055244 ] 00:16:22.199 EAL: No free 2048 kB hugepages reported on node 1 00:16:22.199 [2024-06-07 16:24:48.910470] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:22.199 [2024-06-07 16:24:48.980803] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:16:22.770 16:24:49 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:16:22.770 16:24:49 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@863 -- # return 0 00:16:22.770 16:24:49 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:16:22.770 16:24:49 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:22.770 16:24:49 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:16:23.031 NVMe0n1 00:16:23.031 16:24:49 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:23.031 16:24:49 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:16:23.031 Running I/O for 10 seconds... 
00:16:33.050
00:16:33.050 Latency(us)
00:16:33.050 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:16:33.050 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096)
00:16:33.050 Verification LBA range: start 0x0 length 0x4000
00:16:33.050 NVMe0n1 : 10.04 11731.67 45.83 0.00 0.00 87012.69 6498.99 59419.31
00:16:33.050 ===================================================================================================================
00:16:33.050 Total : 11731.67 45.83 0.00 0.00 87012.69 6498.99 59419.31
00:16:33.050 0
00:16:33.050 16:24:59 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 3055244
00:16:33.050 16:24:59 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@949 -- # '[' -z 3055244 ']'
00:16:33.050 16:24:59 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # kill -0 3055244
00:16:33.050 16:24:59 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # uname
00:16:33.050 16:24:59 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']'
00:16:33.050 16:24:59 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3055244
00:16:33.311 16:24:59 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@955 -- # process_name=reactor_0
00:16:33.311 16:24:59 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']'
00:16:33.311 16:24:59 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3055244'
00:16:33.311 killing process with pid 3055244
00:16:33.311 16:24:59 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@968 -- # kill 3055244
00:16:33.311 Received shutdown signal, test time was about 10.000000 seconds
00:16:33.311
00:16:33.311 Latency(us)
00:16:33.311 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:16:33.311 ===================================================================================================================
00:16:33.311 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:16:33.311 16:24:59 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@973 -- # wait 3055244
00:16:33.311 16:25:00 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT
00:16:33.311 16:25:00 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini
00:16:33.311 16:25:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup
00:16:33.311 16:25:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync
00:16:33.311 16:25:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:16:33.311 16:25:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e
00:16:33.311 16:25:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20}
00:16:33.311 16:25:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:16:33.311 rmmod nvme_tcp
00:16:33.311 rmmod nvme_fabrics
00:16:33.311 rmmod nvme_keyring
00:16:33.311 16:25:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:16:33.311 16:25:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e
00:16:33.311 16:25:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0
00:16:33.311 16:25:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 3055201 ']'
00:16:33.311 16:25:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 3055201
00:16:33.311 16:25:00 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@949 -- # '[' -z 3055201 ']'
00:16:33.311 16:25:00 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # kill -0 3055201
00:16:33.311 16:25:00 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # uname
00:16:33.311 16:25:00 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']'
00:16:33.311 16:25:00
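As an editorial aside (not part of the captured log), the bdevperf throughput column above can be cross-checked from the IOPS column: with the 4096-byte IO size passed to bdevperf via `-o 4096`, 11731.67 IOPS works out to roughly 45.83 MiB/s, matching the table. A minimal sketch of that arithmetic:

```python
# Sanity-check the bdevperf result row: MiB/s = IOPS * IO size / 2^20.
# Figures are taken from the log above; the 4096-byte IO size comes from
# the bdevperf invocation (-q 1024 -o 4096 -w verify -t 10).
iops = 11731.67
io_size_bytes = 4096

mib_per_s = iops * io_size_bytes / (1024 * 1024)
print(f"{mib_per_s:.2f} MiB/s")  # agrees with the 45.83 MiB/s column
```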
nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3055201 00:16:33.573 16:25:00 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:16:33.573 16:25:00 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:16:33.573 16:25:00 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3055201' 00:16:33.573 killing process with pid 3055201 00:16:33.573 16:25:00 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@968 -- # kill 3055201 00:16:33.573 16:25:00 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@973 -- # wait 3055201 00:16:33.573 16:25:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:33.573 16:25:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:33.573 16:25:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:33.573 16:25:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:33.573 16:25:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:33.573 16:25:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:33.573 16:25:00 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:33.573 16:25:00 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:36.120 16:25:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:36.120 00:16:36.120 real 0m21.911s 00:16:36.120 user 0m25.383s 00:16:36.120 sys 0m6.532s 00:16:36.120 16:25:02 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1125 -- # xtrace_disable 00:16:36.120 16:25:02 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:16:36.120 ************************************ 00:16:36.120 END TEST nvmf_queue_depth 
00:16:36.120 ************************************ 00:16:36.120 16:25:02 nvmf_tcp -- nvmf/nvmf.sh@52 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:16:36.121 16:25:02 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:16:36.121 16:25:02 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:16:36.121 16:25:02 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:36.121 ************************************ 00:16:36.121 START TEST nvmf_target_multipath 00:16:36.121 ************************************ 00:16:36.121 16:25:02 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:16:36.121 * Looking for test storage... 00:16:36.121 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:36.121 16:25:02 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:36.121 16:25:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:16:36.121 16:25:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:36.121 16:25:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:36.121 16:25:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:36.121 16:25:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:36.121 16:25:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:36.121 16:25:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:36.121 16:25:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:36.121 16:25:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@15 -- # 
NVMF_TRANSPORT_OPTS= 00:16:36.121 16:25:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:36.121 16:25:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:36.121 16:25:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:36.121 16:25:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:36.121 16:25:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:36.121 16:25:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:36.121 16:25:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:36.121 16:25:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:36.121 16:25:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:36.121 16:25:02 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:36.121 16:25:02 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:36.121 16:25:02 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:36.121 16:25:02 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:36.121 16:25:02 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:36.121 16:25:02 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:36.121 16:25:02 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:16:36.121 16:25:02 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:36.121 16:25:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:16:36.121 16:25:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:36.121 16:25:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:36.121 16:25:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:36.121 16:25:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:36.121 16:25:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:36.121 16:25:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:36.121 16:25:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:36.121 16:25:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:36.121 16:25:02 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:36.121 16:25:02 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:36.121 16:25:02 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:16:36.121 16:25:02 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:36.121 16:25:02 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@43 -- # 
nvmftestinit 00:16:36.121 16:25:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:36.121 16:25:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:36.121 16:25:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:36.121 16:25:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:36.121 16:25:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:36.121 16:25:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:36.121 16:25:02 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:36.121 16:25:02 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:36.121 16:25:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:36.121 16:25:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:36.121 16:25:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@285 -- # xtrace_disable 00:16:36.121 16:25:02 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:16:42.775 16:25:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:42.775 16:25:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # pci_devs=() 00:16:42.775 16:25:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:42.775 16:25:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:42.775 16:25:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:42.775 16:25:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:42.775 16:25:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:42.775 
16:25:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # net_devs=() 00:16:42.775 16:25:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:42.775 16:25:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # e810=() 00:16:42.775 16:25:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # local -ga e810 00:16:42.775 16:25:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # x722=() 00:16:42.775 16:25:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # local -ga x722 00:16:42.775 16:25:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@298 -- # mlx=() 00:16:42.775 16:25:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@298 -- # local -ga mlx 00:16:42.775 16:25:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:42.775 16:25:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:42.775 16:25:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:42.775 16:25:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:42.775 16:25:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:42.775 16:25:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:42.775 16:25:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:42.775 16:25:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:42.775 16:25:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:42.775 16:25:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:42.775 16:25:09 
nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:42.775 16:25:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:42.775 16:25:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:42.775 16:25:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:42.775 16:25:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:42.775 16:25:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:42.775 16:25:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:42.775 16:25:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:42.776 16:25:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:16:42.776 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:16:42.776 16:25:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:42.776 16:25:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:42.776 16:25:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:42.776 16:25:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:42.776 16:25:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:42.776 16:25:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:42.776 16:25:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:16:42.776 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:16:42.776 16:25:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:42.776 16:25:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:42.776 
16:25:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:42.776 16:25:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:42.776 16:25:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:42.776 16:25:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:42.776 16:25:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:42.776 16:25:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:42.776 16:25:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:42.776 16:25:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:42.776 16:25:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:42.776 16:25:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:42.776 16:25:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:42.776 16:25:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:42.776 16:25:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:42.776 16:25:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:16:42.776 Found net devices under 0000:4b:00.0: cvl_0_0 00:16:42.776 16:25:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:42.776 16:25:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:42.776 16:25:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:42.776 16:25:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 
00:16:42.776 16:25:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:42.776 16:25:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:42.776 16:25:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:42.776 16:25:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:42.776 16:25:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:16:42.776 Found net devices under 0000:4b:00.1: cvl_0_1 00:16:42.776 16:25:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:42.776 16:25:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:42.776 16:25:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # is_hw=yes 00:16:42.776 16:25:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:42.776 16:25:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:42.776 16:25:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:42.776 16:25:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:42.776 16:25:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:42.776 16:25:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:42.776 16:25:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:42.776 16:25:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:42.776 16:25:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:42.776 16:25:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:42.776 16:25:09 
nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:42.776 16:25:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:42.776 16:25:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:42.776 16:25:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:42.776 16:25:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:42.776 16:25:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:42.776 16:25:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:42.776 16:25:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:42.776 16:25:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:42.776 16:25:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:42.776 16:25:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:42.776 16:25:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:43.037 16:25:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:43.037 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:16:43.037 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.535 ms
00:16:43.037
00:16:43.037 --- 10.0.0.2 ping statistics ---
00:16:43.037 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:16:43.037 rtt min/avg/max/mdev = 0.535/0.535/0.535/0.000 ms
00:16:43.037 16:25:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:16:43.037 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:16:43.037 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.451 ms
00:16:43.037
00:16:43.037 --- 10.0.0.1 ping statistics ---
00:16:43.037 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:16:43.037 rtt min/avg/max/mdev = 0.451/0.451/0.451/0.000 ms
00:16:43.037 16:25:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:16:43.037 16:25:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@422 -- # return 0
00:16:43.037 16:25:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:16:43.037 16:25:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:16:43.037 16:25:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:16:43.037 16:25:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:16:43.037 16:25:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:16:43.037 16:25:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:16:43.037 16:25:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:16:43.037 16:25:09 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']'
00:16:43.037 16:25:09 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test'
00:16:43.037 only one NIC for nvmf test
00:16:43.037 16:25:09 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@47 -- #
nvmftestfini 00:16:43.037 16:25:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:43.037 16:25:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:16:43.037 16:25:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:43.037 16:25:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:16:43.037 16:25:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:43.037 16:25:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:43.037 rmmod nvme_tcp 00:16:43.037 rmmod nvme_fabrics 00:16:43.037 rmmod nvme_keyring 00:16:43.037 16:25:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:43.037 16:25:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:16:43.037 16:25:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:16:43.037 16:25:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:16:43.038 16:25:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:43.038 16:25:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:43.038 16:25:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:43.038 16:25:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:43.038 16:25:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:43.038 16:25:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:43.038 16:25:09 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:43.038 16:25:09 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:45.585 16:25:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 
addr flush cvl_0_1 00:16:45.585 16:25:11 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:16:45.585 16:25:11 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:16:45.585 16:25:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:45.585 16:25:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:16:45.585 16:25:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:45.585 16:25:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:16:45.585 16:25:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:45.585 16:25:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:45.585 16:25:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:45.585 16:25:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:16:45.585 16:25:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:16:45.585 16:25:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:16:45.585 16:25:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:45.585 16:25:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:45.585 16:25:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:45.585 16:25:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:45.585 16:25:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:45.585 16:25:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:45.585 16:25:11 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:45.585 16:25:11 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:16:45.585 16:25:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:45.585 00:16:45.585 real 0m9.430s 00:16:45.585 user 0m2.023s 00:16:45.585 sys 0m5.322s 00:16:45.585 16:25:11 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1125 -- # xtrace_disable 00:16:45.585 16:25:11 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:16:45.585 ************************************ 00:16:45.585 END TEST nvmf_target_multipath 00:16:45.585 ************************************ 00:16:45.585 16:25:11 nvmf_tcp -- nvmf/nvmf.sh@53 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:16:45.585 16:25:11 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:16:45.585 16:25:11 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:16:45.585 16:25:11 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:45.585 ************************************ 00:16:45.585 START TEST nvmf_zcopy 00:16:45.585 ************************************ 00:16:45.585 16:25:11 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:16:45.585 * Looking for test storage... 
00:16:45.585 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:45.585 16:25:12 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:45.585 16:25:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:16:45.585 16:25:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:45.585 16:25:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:45.585 16:25:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:45.585 16:25:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:45.585 16:25:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:45.585 16:25:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:45.585 16:25:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:45.585 16:25:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:45.585 16:25:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:45.585 16:25:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:45.585 16:25:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:45.585 16:25:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:45.585 16:25:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:45.585 16:25:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:45.585 16:25:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:45.585 16:25:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:45.585 16:25:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:45.585 16:25:12 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:45.585 16:25:12 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:45.585 16:25:12 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:45.585 16:25:12 nvmf_tcp.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:45.585 16:25:12 nvmf_tcp.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:45.585 16:25:12 nvmf_tcp.nvmf_zcopy -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:45.585 16:25:12 nvmf_tcp.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:16:45.585 16:25:12 nvmf_tcp.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:45.585 16:25:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:16:45.585 16:25:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:45.585 16:25:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:45.585 16:25:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:45.585 16:25:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:45.585 16:25:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:45.585 16:25:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:45.585 16:25:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:45.585 16:25:12 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@51 -- # have_pci_nics=0 00:16:45.585 16:25:12 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:16:45.585 16:25:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:45.585 16:25:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:45.585 16:25:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:45.585 16:25:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:45.585 16:25:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:45.585 16:25:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:45.585 16:25:12 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:45.585 16:25:12 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:45.585 16:25:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:45.585 16:25:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:45.586 16:25:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@285 -- # xtrace_disable 00:16:45.586 16:25:12 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:52.182 16:25:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:52.182 16:25:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@291 -- # pci_devs=() 00:16:52.182 16:25:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:52.182 16:25:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:52.182 16:25:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:52.182 16:25:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:52.182 16:25:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:52.182 16:25:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # net_devs=() 00:16:52.182 16:25:18 
nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:52.182 16:25:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # e810=() 00:16:52.182 16:25:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # local -ga e810 00:16:52.182 16:25:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # x722=() 00:16:52.182 16:25:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # local -ga x722 00:16:52.182 16:25:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # mlx=() 00:16:52.182 16:25:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # local -ga mlx 00:16:52.182 16:25:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:52.182 16:25:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:52.182 16:25:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:52.182 16:25:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:52.182 16:25:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:52.182 16:25:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:52.182 16:25:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:52.182 16:25:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:52.182 16:25:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:52.182 16:25:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:52.182 16:25:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:52.182 16:25:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:52.182 16:25:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:52.182 16:25:18 
nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:52.182 16:25:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:52.182 16:25:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:52.182 16:25:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:52.182 16:25:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:52.182 16:25:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:16:52.182 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:16:52.182 16:25:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:52.182 16:25:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:52.182 16:25:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:52.182 16:25:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:52.182 16:25:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:52.182 16:25:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:52.182 16:25:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:16:52.182 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:16:52.182 16:25:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:52.182 16:25:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:52.182 16:25:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:52.182 16:25:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:52.182 16:25:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:52.182 16:25:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:52.182 16:25:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:52.182 16:25:18 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:52.182 16:25:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:52.182 16:25:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:52.182 16:25:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:52.182 16:25:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:52.182 16:25:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:52.182 16:25:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:52.182 16:25:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:52.182 16:25:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:16:52.182 Found net devices under 0000:4b:00.0: cvl_0_0 00:16:52.182 16:25:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:52.182 16:25:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:52.182 16:25:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:52.182 16:25:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:52.182 16:25:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:52.182 16:25:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:52.182 16:25:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:52.182 16:25:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:52.182 16:25:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:16:52.182 Found net devices under 0000:4b:00.1: cvl_0_1 00:16:52.182 16:25:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:52.182 16:25:18 
nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:52.182 16:25:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # is_hw=yes 00:16:52.182 16:25:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:52.182 16:25:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:52.182 16:25:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:52.182 16:25:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:52.182 16:25:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:52.182 16:25:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:52.182 16:25:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:52.182 16:25:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:52.182 16:25:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:52.182 16:25:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:52.182 16:25:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:52.182 16:25:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:52.182 16:25:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:52.182 16:25:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:52.182 16:25:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:52.182 16:25:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:52.182 16:25:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:52.182 16:25:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:52.182 16:25:18 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:52.182 16:25:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:52.445 16:25:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:52.445 16:25:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:52.445 16:25:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:52.445 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:52.445 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.705 ms 00:16:52.445 00:16:52.445 --- 10.0.0.2 ping statistics --- 00:16:52.445 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:52.445 rtt min/avg/max/mdev = 0.705/0.705/0.705/0.000 ms 00:16:52.445 16:25:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:52.445 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:52.445 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.329 ms 00:16:52.445 00:16:52.445 --- 10.0.0.1 ping statistics --- 00:16:52.445 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:52.445 rtt min/avg/max/mdev = 0.329/0.329/0.329/0.000 ms 00:16:52.445 16:25:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:52.445 16:25:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@422 -- # return 0 00:16:52.445 16:25:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:52.445 16:25:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:52.445 16:25:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:52.445 16:25:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:52.445 16:25:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:52.445 16:25:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:52.445 16:25:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:52.445 16:25:19 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:16:52.445 16:25:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:52.445 16:25:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@723 -- # xtrace_disable 00:16:52.445 16:25:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:52.445 16:25:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=3065809 00:16:52.445 16:25:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 3065809 00:16:52.445 16:25:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:16:52.445 16:25:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@830 -- # '[' -z 3065809 ']' 00:16:52.445 16:25:19 nvmf_tcp.nvmf_zcopy -- 
common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:52.445 16:25:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@835 -- # local max_retries=100 00:16:52.445 16:25:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:52.445 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:52.445 16:25:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@839 -- # xtrace_disable 00:16:52.445 16:25:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:52.445 [2024-06-07 16:25:19.229740] Starting SPDK v24.09-pre git sha1 5a57befde / DPDK 24.03.0 initialization... 00:16:52.445 [2024-06-07 16:25:19.229807] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:52.445 EAL: No free 2048 kB hugepages reported on node 1 00:16:52.707 [2024-06-07 16:25:19.318279] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:52.707 [2024-06-07 16:25:19.410188] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:52.707 [2024-06-07 16:25:19.410250] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:52.707 [2024-06-07 16:25:19.410258] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:52.707 [2024-06-07 16:25:19.410267] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:52.707 [2024-06-07 16:25:19.410273] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:16:52.707 [2024-06-07 16:25:19.410301] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:16:53.280 16:25:20 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:16:53.280 16:25:20 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@863 -- # return 0 00:16:53.280 16:25:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:53.280 16:25:20 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@729 -- # xtrace_disable 00:16:53.280 16:25:20 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:53.280 16:25:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:53.280 16:25:20 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:16:53.280 16:25:20 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:16:53.280 16:25:20 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:53.280 16:25:20 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:53.280 [2024-06-07 16:25:20.065970] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:53.280 16:25:20 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:53.280 16:25:20 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:16:53.280 16:25:20 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:53.280 16:25:20 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:53.280 16:25:20 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:53.280 16:25:20 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:53.280 16:25:20 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@560 -- # xtrace_disable 
00:16:53.280 16:25:20 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:53.280 [2024-06-07 16:25:20.090232] tcp.c: 982:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:53.280 16:25:20 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:53.280 16:25:20 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:53.280 16:25:20 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:53.280 16:25:20 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:53.280 16:25:20 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:53.280 16:25:20 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:16:53.280 16:25:20 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:53.280 16:25:20 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:53.280 malloc0 00:16:53.280 16:25:20 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:53.280 16:25:20 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:16:53.280 16:25:20 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:53.280 16:25:20 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:53.542 16:25:20 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:53.542 16:25:20 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:16:53.542 16:25:20 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:16:53.542 16:25:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:16:53.542 16:25:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem 
config 00:16:53.542 16:25:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:16:53.542 16:25:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:16:53.542 { 00:16:53.542 "params": { 00:16:53.542 "name": "Nvme$subsystem", 00:16:53.542 "trtype": "$TEST_TRANSPORT", 00:16:53.542 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:53.542 "adrfam": "ipv4", 00:16:53.542 "trsvcid": "$NVMF_PORT", 00:16:53.542 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:53.542 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:53.542 "hdgst": ${hdgst:-false}, 00:16:53.542 "ddgst": ${ddgst:-false} 00:16:53.542 }, 00:16:53.542 "method": "bdev_nvme_attach_controller" 00:16:53.542 } 00:16:53.542 EOF 00:16:53.542 )") 00:16:53.542 16:25:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:16:53.542 16:25:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 00:16:53.542 16:25:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:16:53.542 16:25:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:16:53.542 "params": { 00:16:53.542 "name": "Nvme1", 00:16:53.542 "trtype": "tcp", 00:16:53.542 "traddr": "10.0.0.2", 00:16:53.542 "adrfam": "ipv4", 00:16:53.542 "trsvcid": "4420", 00:16:53.542 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:53.542 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:53.542 "hdgst": false, 00:16:53.542 "ddgst": false 00:16:53.542 }, 00:16:53.542 "method": "bdev_nvme_attach_controller" 00:16:53.542 }' 00:16:53.542 [2024-06-07 16:25:20.190583] Starting SPDK v24.09-pre git sha1 5a57befde / DPDK 24.03.0 initialization... 
00:16:53.542 [2024-06-07 16:25:20.190646] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3065899 ] 00:16:53.542 EAL: No free 2048 kB hugepages reported on node 1 00:16:53.542 [2024-06-07 16:25:20.253716] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:53.542 [2024-06-07 16:25:20.327962] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:16:53.802 Running I/O for 10 seconds... 00:17:03.807 00:17:03.807 Latency(us) 00:17:03.807 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:03.807 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:17:03.807 Verification LBA range: start 0x0 length 0x1000 00:17:03.807 Nvme1n1 : 10.01 8693.30 67.92 0.00 0.00 14672.47 1774.93 29709.65 00:17:03.807 =================================================================================================================== 00:17:03.807 Total : 8693.30 67.92 0.00 0.00 14672.47 1774.93 29709.65 00:17:04.069 16:25:30 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=3067915 00:17:04.069 16:25:30 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:17:04.069 16:25:30 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:04.069 16:25:30 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:17:04.069 16:25:30 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:17:04.069 16:25:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:17:04.069 16:25:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:17:04.069 16:25:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:04.069 16:25:30 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:04.069 { 00:17:04.069 "params": { 00:17:04.069 "name": "Nvme$subsystem", 00:17:04.069 "trtype": "$TEST_TRANSPORT", 00:17:04.069 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:04.069 "adrfam": "ipv4", 00:17:04.069 "trsvcid": "$NVMF_PORT", 00:17:04.069 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:04.069 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:04.069 "hdgst": ${hdgst:-false}, 00:17:04.069 "ddgst": ${ddgst:-false} 00:17:04.069 }, 00:17:04.069 "method": "bdev_nvme_attach_controller" 00:17:04.069 } 00:17:04.069 EOF 00:17:04.069 )") 00:17:04.069 [2024-06-07 16:25:30.682690] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:04.069 [2024-06-07 16:25:30.682719] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:04.069 16:25:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:17:04.069 16:25:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 00:17:04.069 16:25:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:17:04.069 16:25:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:17:04.069 "params": { 00:17:04.069 "name": "Nvme1", 00:17:04.069 "trtype": "tcp", 00:17:04.069 "traddr": "10.0.0.2", 00:17:04.069 "adrfam": "ipv4", 00:17:04.069 "trsvcid": "4420", 00:17:04.069 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:04.069 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:04.069 "hdgst": false, 00:17:04.069 "ddgst": false 00:17:04.069 }, 00:17:04.069 "method": "bdev_nvme_attach_controller" 00:17:04.069 }' 00:17:04.069 [2024-06-07 16:25:30.694688] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:04.069 [2024-06-07 16:25:30.694698] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:04.069 [2024-06-07 16:25:30.706718] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:04.069 [2024-06-07 
16:25:30.706727] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:04.069 [2024-06-07 16:25:30.718749] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:04.069 [2024-06-07 16:25:30.718757] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:04.069 [2024-06-07 16:25:30.730781] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:04.069 [2024-06-07 16:25:30.730790] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:04.069 [2024-06-07 16:25:30.733590] Starting SPDK v24.09-pre git sha1 5a57befde / DPDK 24.03.0 initialization... 00:17:04.069 [2024-06-07 16:25:30.733664] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3067915 ] 00:17:04.069 [2024-06-07 16:25:30.742814] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:04.069 [2024-06-07 16:25:30.742823] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:04.069 [2024-06-07 16:25:30.754844] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:04.069 [2024-06-07 16:25:30.754853] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:04.069 EAL: No free 2048 kB hugepages reported on node 1 00:17:04.069 [2024-06-07 16:25:30.766875] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:04.069 [2024-06-07 16:25:30.766883] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:04.069 [2024-06-07 16:25:30.778905] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:04.069 [2024-06-07 16:25:30.778913] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:17:04.069 [2024-06-07 16:25:30.790937] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:04.069 [2024-06-07 16:25:30.790945] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:04.069 [2024-06-07 16:25:30.793977] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:04.069 [2024-06-07 16:25:30.802969] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:04.069 [2024-06-07 16:25:30.802978] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:04.069 [2024-06-07 16:25:30.815001] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:04.069 [2024-06-07 16:25:30.815009] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:04.069 [2024-06-07 16:25:30.827031] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:04.069 [2024-06-07 16:25:30.827042] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:04.069 [2024-06-07 16:25:30.839061] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:04.069 [2024-06-07 16:25:30.839072] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:04.069 [2024-06-07 16:25:30.851093] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:04.069 [2024-06-07 16:25:30.851102] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:04.069 [2024-06-07 16:25:30.857913] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:17:04.069 [2024-06-07 16:25:30.863125] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:04.069 [2024-06-07 16:25:30.863134] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:04.069 [2024-06-07 16:25:30.875161] 
subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:04.069 [2024-06-07 16:25:30.875173] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:04.069 [2024-06-07 16:25:30.887190] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:04.069 [2024-06-07 16:25:30.887201] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:04.069 [2024-06-07 16:25:30.899217] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:04.069 [2024-06-07 16:25:30.899226] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:04.069 [2024-06-07 16:25:30.911249] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:04.069 [2024-06-07 16:25:30.911258] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:04.330 [2024-06-07 16:25:30.923279] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:04.330 [2024-06-07 16:25:30.923288] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:04.330 [2024-06-07 16:25:30.935322] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:04.330 [2024-06-07 16:25:30.935340] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:04.330 [2024-06-07 16:25:30.947344] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:04.330 [2024-06-07 16:25:30.947356] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:04.330 [2024-06-07 16:25:30.959429] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:04.330 [2024-06-07 16:25:30.959439] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:04.330 [2024-06-07 16:25:30.971408] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:17:04.330 [2024-06-07 16:25:30.971418] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:04.330 [2024-06-07 16:25:30.983441] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:04.330 [2024-06-07 16:25:30.983449] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:04.330 [2024-06-07 16:25:30.995468] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:04.330 [2024-06-07 16:25:30.995476] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:04.330 [2024-06-07 16:25:31.007500] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:04.330 [2024-06-07 16:25:31.007508] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:04.330 [2024-06-07 16:25:31.019534] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:04.330 [2024-06-07 16:25:31.019543] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:04.330 [2024-06-07 16:25:31.031567] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:04.330 [2024-06-07 16:25:31.031575] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:04.330 [2024-06-07 16:25:31.043598] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:04.330 [2024-06-07 16:25:31.043607] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:04.330 [2024-06-07 16:25:31.055630] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:04.330 [2024-06-07 16:25:31.055640] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:04.330 [2024-06-07 16:25:31.067673] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:04.330 
[2024-06-07 16:25:31.067683] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:04.330 [2024-06-07 16:25:31.079691] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:04.330 [2024-06-07 16:25:31.079699] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:04.330 [2024-06-07 16:25:31.091722] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:04.330 [2024-06-07 16:25:31.091730] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:04.330 [2024-06-07 16:25:31.103755] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:04.330 [2024-06-07 16:25:31.103764] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:04.330 [2024-06-07 16:25:31.115803] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:04.330 [2024-06-07 16:25:31.115819] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:04.330 Running I/O for 5 seconds... 
00:17:04.330 [2024-06-07 16:25:31.127821] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:04.330 [2024-06-07 16:25:31.127830] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:04.330 [2024-06-07 16:25:31.143693] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:04.330 [2024-06-07 16:25:31.143710] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:04.330 [2024-06-07 16:25:31.156879] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:04.330 [2024-06-07 16:25:31.156895] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:04.330 [2024-06-07 16:25:31.170607] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:04.330 [2024-06-07 16:25:31.170626] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:04.591 [2024-06-07 16:25:31.183748] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:04.591 [2024-06-07 16:25:31.183765] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:04.591 [2024-06-07 16:25:31.196433] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:04.591 [2024-06-07 16:25:31.196448] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:04.591 [2024-06-07 16:25:31.208843] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:04.591 [2024-06-07 16:25:31.208858] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:04.591 [2024-06-07 16:25:31.222425] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:04.591 [2024-06-07 16:25:31.222440] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:04.591 [2024-06-07 16:25:31.235898] 
subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:04.591 [2024-06-07 16:25:31.235913] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:04.591 [2024-06-07 16:25:31.248788] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:04.591 [2024-06-07 16:25:31.248804] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:04.591 [2024-06-07 16:25:31.261551] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:04.591 [2024-06-07 16:25:31.261566] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:04.592 [2024-06-07 16:25:31.275032] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:04.592 [2024-06-07 16:25:31.275047] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:04.592 [2024-06-07 16:25:31.287804] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:04.592 [2024-06-07 16:25:31.287820] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:04.592 [2024-06-07 16:25:31.300793] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:04.592 [2024-06-07 16:25:31.300808] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:04.592 [2024-06-07 16:25:31.313203] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:04.592 [2024-06-07 16:25:31.313218] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:04.592 [2024-06-07 16:25:31.326338] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:04.592 [2024-06-07 16:25:31.326353] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:04.592 [2024-06-07 16:25:31.339255] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:17:04.592 [2024-06-07 16:25:31.339271] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:04.592 [2024-06-07 16:25:31.352447] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:04.592 [2024-06-07 16:25:31.352463] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:04.592 [2024-06-07 16:25:31.365599] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:04.592 [2024-06-07 16:25:31.365615] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:04.592 [2024-06-07 16:25:31.379065] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:04.592 [2024-06-07 16:25:31.379081] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:04.592 [2024-06-07 16:25:31.391947] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:04.592 [2024-06-07 16:25:31.391963] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:04.592 [2024-06-07 16:25:31.405196] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:04.592 [2024-06-07 16:25:31.405210] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:04.592 [2024-06-07 16:25:31.418946] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:04.592 [2024-06-07 16:25:31.418962] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:04.592 [2024-06-07 16:25:31.431578] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:04.592 [2024-06-07 16:25:31.431592] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:04.852 [2024-06-07 16:25:31.445021] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:04.852 
[2024-06-07 16:25:31.445037] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:04.852 [2024-06-07 16:25:31.457984] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:04.852 [2024-06-07 16:25:31.457999] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:04.852 [2024-06-07 16:25:31.471351] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:04.852 [2024-06-07 16:25:31.471366] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:04.852 [2024-06-07 16:25:31.484533] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:04.852 [2024-06-07 16:25:31.484548] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:04.852 [2024-06-07 16:25:31.497747] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:04.852 [2024-06-07 16:25:31.497762] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:04.852 [2024-06-07 16:25:31.511300] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:04.852 [2024-06-07 16:25:31.511315] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:04.852 [2024-06-07 16:25:31.524263] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:04.852 [2024-06-07 16:25:31.524278] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:04.852 [2024-06-07 16:25:31.537021] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:04.852 [2024-06-07 16:25:31.537036] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:04.852 [2024-06-07 16:25:31.550022] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:04.852 [2024-06-07 16:25:31.550038] 
nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:04.852 [2024-06-07 16:25:31.563088] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:04.852 [2024-06-07 16:25:31.563103] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:04.852 [2024-06-07 16:25:31.576457] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:04.852 [2024-06-07 16:25:31.576473] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:04.852 [2024-06-07 16:25:31.589707] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:04.852 [2024-06-07 16:25:31.589722] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:04.852 [2024-06-07 16:25:31.602605] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:04.852 [2024-06-07 16:25:31.602620] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:04.852 [2024-06-07 16:25:31.615158] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:04.852 [2024-06-07 16:25:31.615173] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:04.852 [2024-06-07 16:25:31.628468] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:04.852 [2024-06-07 16:25:31.628483] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:04.852 [2024-06-07 16:25:31.641417] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:04.852 [2024-06-07 16:25:31.641432] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:04.852 [2024-06-07 16:25:31.653992] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:04.852 [2024-06-07 16:25:31.654007] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:17:04.852 [2024-06-07 16:25:31.666430] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:04.852 [2024-06-07 16:25:31.666445] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:04.852 [2024-06-07 16:25:31.678843] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:04.852 [2024-06-07 16:25:31.678859] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:04.852 [2024-06-07 16:25:31.692349] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:04.852 [2024-06-07 16:25:31.692364] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:04.852 [2024-06-07 16:25:31.705307] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:04.852 [2024-06-07 16:25:31.705322] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:05.113 [2024-06-07 16:25:31.718622] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:05.113 [2024-06-07 16:25:31.718637] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:05.113 [2024-06-07 16:25:31.731491] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:05.113 [2024-06-07 16:25:31.731506] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:05.113 [2024-06-07 16:25:31.744649] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:05.113 [2024-06-07 16:25:31.744665] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:05.113 [2024-06-07 16:25:31.757485] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:05.113 [2024-06-07 16:25:31.757500] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:05.113 [2024-06-07 16:25:31.770533] 
subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:05.113 [2024-06-07 16:25:31.770548] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:05.113 [2024-06-07 16:25:31.783910] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:05.113 [2024-06-07 16:25:31.783924] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:05.113 [2024-06-07 16:25:31.797607] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:05.113 [2024-06-07 16:25:31.797622] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:05.113 [2024-06-07 16:25:31.810472] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:05.113 [2024-06-07 16:25:31.810487] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:05.113 [2024-06-07 16:25:31.822712] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:05.113 [2024-06-07 16:25:31.822727] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:05.113 [2024-06-07 16:25:31.836243] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:05.113 [2024-06-07 16:25:31.836257] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:05.113 [2024-06-07 16:25:31.849527] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:05.113 [2024-06-07 16:25:31.849541] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:05.113 [2024-06-07 16:25:31.862291] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:05.113 [2024-06-07 16:25:31.862306] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:05.113 [2024-06-07 16:25:31.875248] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:17:05.113 [2024-06-07 16:25:31.875262] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:05.113 [2024-06-07 16:25:31.887999] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:05.113 [2024-06-07 16:25:31.888014] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:05.113 [2024-06-07 16:25:31.901184] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:05.113 [2024-06-07 16:25:31.901200] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:05.113 [2024-06-07 16:25:31.914192] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:05.113 [2024-06-07 16:25:31.914206] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:05.113 [2024-06-07 16:25:31.927461] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:05.113 [2024-06-07 16:25:31.927476] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:05.113 [2024-06-07 16:25:31.940435] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:05.113 [2024-06-07 16:25:31.940450] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:05.114 [2024-06-07 16:25:31.953543] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:05.114 [2024-06-07 16:25:31.953558] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:05.375 [2024-06-07 16:25:31.966837] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:05.375 [2024-06-07 16:25:31.966852] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:05.375 [2024-06-07 16:25:31.979332] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:05.375 
[2024-06-07 16:25:31.979347] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:05.375 [2024-06-07 16:25:31.992268] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:05.375 [2024-06-07 16:25:31.992283] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:05.375 [2024-06-07 16:25:32.005041] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:05.375 [2024-06-07 16:25:32.005056] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:05.375 [2024-06-07 16:25:32.018463] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:05.375 [2024-06-07 16:25:32.018477] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:05.375 [2024-06-07 16:25:32.031950] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:05.375 [2024-06-07 16:25:32.031966] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:05.375 [2024-06-07 16:25:32.045100] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:05.375 [2024-06-07 16:25:32.045115] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:05.375 [2024-06-07 16:25:32.058416] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:05.375 [2024-06-07 16:25:32.058431] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:05.375 [2024-06-07 16:25:32.070780] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:05.375 [2024-06-07 16:25:32.070795] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:05.375 [2024-06-07 16:25:32.083634] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:05.375 [2024-06-07 16:25:32.083649] 
nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:05.375 [2024-06-07 16:25:32.096596] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:05.375 [2024-06-07 16:25:32.096611] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:05.375 [2024-06-07 16:25:32.110008] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:05.375 [2024-06-07 16:25:32.110022] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:05.375 [2024-06-07 16:25:32.122583] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:05.375 [2024-06-07 16:25:32.122598] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:05.375 [2024-06-07 16:25:32.135117] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:05.375 [2024-06-07 16:25:32.135133] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:05.375 [2024-06-07 16:25:32.148320] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:05.375 [2024-06-07 16:25:32.148339] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:05.375 [2024-06-07 16:25:32.161636] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:05.375 [2024-06-07 16:25:32.161651] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:05.375 [2024-06-07 16:25:32.174245] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:05.375 [2024-06-07 16:25:32.174260] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:05.375 [2024-06-07 16:25:32.187230] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:05.375 [2024-06-07 16:25:32.187245] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:17:05.375 [2024-06-07 16:25:32.200079] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:05.375 [2024-06-07 16:25:32.200094] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same two-line error pair (subsystem.c:2039 "Requested NSID 1 already in use" followed by nvmf_rpc.c:1548 "Unable to add namespace") repeats roughly every 13 ms from 16:25:32.213 through 16:25:34.323 while the test loops on re-adding NSID 1; repeated entries elided ...]
00:17:07.506 [2024-06-07 16:25:34.336144] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:07.506 [2024-06-07 16:25:34.336160] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:17:07.506 [2024-06-07 16:25:34.349066] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:07.506 [2024-06-07 16:25:34.349081] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:07.767 [2024-06-07 16:25:34.362109] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:07.767 [2024-06-07 16:25:34.362125] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:07.767 [2024-06-07 16:25:34.375719] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:07.767 [2024-06-07 16:25:34.375738] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:07.767 [2024-06-07 16:25:34.388885] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:07.767 [2024-06-07 16:25:34.388901] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:07.767 [2024-06-07 16:25:34.402100] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:07.767 [2024-06-07 16:25:34.402116] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:07.767 [2024-06-07 16:25:34.415069] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:07.767 [2024-06-07 16:25:34.415084] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:07.767 [2024-06-07 16:25:34.428351] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:07.767 [2024-06-07 16:25:34.428366] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:07.767 [2024-06-07 16:25:34.441948] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:07.767 [2024-06-07 16:25:34.441963] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:07.767 [2024-06-07 16:25:34.455292] 
subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:07.767 [2024-06-07 16:25:34.455307] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:07.767 [2024-06-07 16:25:34.468513] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:07.767 [2024-06-07 16:25:34.468528] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:07.767 [2024-06-07 16:25:34.481792] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:07.767 [2024-06-07 16:25:34.481806] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:07.767 [2024-06-07 16:25:34.495336] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:07.767 [2024-06-07 16:25:34.495352] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:07.767 [2024-06-07 16:25:34.508490] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:07.767 [2024-06-07 16:25:34.508505] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:07.767 [2024-06-07 16:25:34.521806] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:07.767 [2024-06-07 16:25:34.521820] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:07.767 [2024-06-07 16:25:34.534925] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:07.767 [2024-06-07 16:25:34.534941] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:07.767 [2024-06-07 16:25:34.547771] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:07.767 [2024-06-07 16:25:34.547786] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:07.767 [2024-06-07 16:25:34.560837] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:17:07.767 [2024-06-07 16:25:34.560852] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:07.767 [2024-06-07 16:25:34.574000] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:07.767 [2024-06-07 16:25:34.574015] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:07.767 [2024-06-07 16:25:34.587030] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:07.767 [2024-06-07 16:25:34.587045] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:07.767 [2024-06-07 16:25:34.599940] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:07.767 [2024-06-07 16:25:34.599955] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:07.767 [2024-06-07 16:25:34.612721] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:07.767 [2024-06-07 16:25:34.612737] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:08.028 [2024-06-07 16:25:34.625776] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:08.028 [2024-06-07 16:25:34.625799] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:08.028 [2024-06-07 16:25:34.638044] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:08.028 [2024-06-07 16:25:34.638060] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:08.028 [2024-06-07 16:25:34.650775] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:08.028 [2024-06-07 16:25:34.650791] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:08.028 [2024-06-07 16:25:34.664139] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:08.028 
[2024-06-07 16:25:34.664155] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:08.028 [2024-06-07 16:25:34.677688] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:08.028 [2024-06-07 16:25:34.677704] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:08.028 [2024-06-07 16:25:34.690912] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:08.028 [2024-06-07 16:25:34.690927] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:08.028 [2024-06-07 16:25:34.703798] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:08.028 [2024-06-07 16:25:34.703813] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:08.028 [2024-06-07 16:25:34.716561] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:08.028 [2024-06-07 16:25:34.716576] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:08.028 [2024-06-07 16:25:34.729180] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:08.028 [2024-06-07 16:25:34.729194] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:08.028 [2024-06-07 16:25:34.742649] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:08.028 [2024-06-07 16:25:34.742664] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:08.028 [2024-06-07 16:25:34.755244] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:08.028 [2024-06-07 16:25:34.755258] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:08.028 [2024-06-07 16:25:34.768495] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:08.028 [2024-06-07 16:25:34.768511] 
nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:08.028 [2024-06-07 16:25:34.781769] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:08.028 [2024-06-07 16:25:34.781785] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:08.028 [2024-06-07 16:25:34.795274] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:08.028 [2024-06-07 16:25:34.795289] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:08.028 [2024-06-07 16:25:34.808624] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:08.028 [2024-06-07 16:25:34.808639] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:08.028 [2024-06-07 16:25:34.822190] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:08.028 [2024-06-07 16:25:34.822206] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:08.028 [2024-06-07 16:25:34.835581] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:08.028 [2024-06-07 16:25:34.835596] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:08.028 [2024-06-07 16:25:34.849060] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:08.028 [2024-06-07 16:25:34.849075] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:08.028 [2024-06-07 16:25:34.861868] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:08.028 [2024-06-07 16:25:34.861883] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:08.029 [2024-06-07 16:25:34.874798] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:08.029 [2024-06-07 16:25:34.874817] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:17:08.290 [2024-06-07 16:25:34.888214] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:08.290 [2024-06-07 16:25:34.888230] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:08.290 [2024-06-07 16:25:34.901764] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:08.290 [2024-06-07 16:25:34.901780] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:08.290 [2024-06-07 16:25:34.915107] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:08.290 [2024-06-07 16:25:34.915122] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:08.290 [2024-06-07 16:25:34.928707] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:08.290 [2024-06-07 16:25:34.928722] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:08.290 [2024-06-07 16:25:34.942089] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:08.290 [2024-06-07 16:25:34.942104] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:08.290 [2024-06-07 16:25:34.955440] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:08.290 [2024-06-07 16:25:34.955455] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:08.290 [2024-06-07 16:25:34.967765] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:08.290 [2024-06-07 16:25:34.967780] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:08.290 [2024-06-07 16:25:34.981044] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:08.290 [2024-06-07 16:25:34.981059] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:08.290 [2024-06-07 16:25:34.994345] 
subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:08.290 [2024-06-07 16:25:34.994360] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:08.290 [2024-06-07 16:25:35.007601] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:08.290 [2024-06-07 16:25:35.007617] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:08.290 [2024-06-07 16:25:35.020448] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:08.290 [2024-06-07 16:25:35.020463] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:08.290 [2024-06-07 16:25:35.033397] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:08.290 [2024-06-07 16:25:35.033416] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:08.290 [2024-06-07 16:25:35.046317] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:08.290 [2024-06-07 16:25:35.046331] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:08.290 [2024-06-07 16:25:35.058830] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:08.290 [2024-06-07 16:25:35.058845] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:08.290 [2024-06-07 16:25:35.071947] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:08.290 [2024-06-07 16:25:35.071962] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:08.290 [2024-06-07 16:25:35.084477] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:08.290 [2024-06-07 16:25:35.084493] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:08.290 [2024-06-07 16:25:35.097836] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:17:08.290 [2024-06-07 16:25:35.097850] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:08.290 [2024-06-07 16:25:35.110263] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:08.290 [2024-06-07 16:25:35.110278] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:08.290 [2024-06-07 16:25:35.123573] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:08.290 [2024-06-07 16:25:35.123593] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:08.290 [2024-06-07 16:25:35.137244] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:08.290 [2024-06-07 16:25:35.137259] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:08.551 [2024-06-07 16:25:35.149359] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:08.551 [2024-06-07 16:25:35.149374] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:08.551 [2024-06-07 16:25:35.162051] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:08.551 [2024-06-07 16:25:35.162065] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:08.551 [2024-06-07 16:25:35.174390] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:08.551 [2024-06-07 16:25:35.174409] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:08.551 [2024-06-07 16:25:35.187702] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:08.551 [2024-06-07 16:25:35.187717] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:08.551 [2024-06-07 16:25:35.200986] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:08.551 
[2024-06-07 16:25:35.201001] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:08.551 [2024-06-07 16:25:35.214260] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:08.551 [2024-06-07 16:25:35.214274] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:08.551 [2024-06-07 16:25:35.227579] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:08.551 [2024-06-07 16:25:35.227594] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:08.551 [2024-06-07 16:25:35.240107] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:08.551 [2024-06-07 16:25:35.240121] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:08.552 [2024-06-07 16:25:35.253617] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:08.552 [2024-06-07 16:25:35.253631] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:08.552 [2024-06-07 16:25:35.266708] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:08.552 [2024-06-07 16:25:35.266723] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:08.552 [2024-06-07 16:25:35.280341] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:08.552 [2024-06-07 16:25:35.280356] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:08.552 [2024-06-07 16:25:35.293735] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:08.552 [2024-06-07 16:25:35.293750] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:08.552 [2024-06-07 16:25:35.307103] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:08.552 [2024-06-07 16:25:35.307117] 
nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:08.552 [2024-06-07 16:25:35.320127] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:08.552 [2024-06-07 16:25:35.320141] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:08.552 [2024-06-07 16:25:35.333560] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:08.552 [2024-06-07 16:25:35.333574] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:08.552 [2024-06-07 16:25:35.346997] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:08.552 [2024-06-07 16:25:35.347012] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:08.552 [2024-06-07 16:25:35.360176] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:08.552 [2024-06-07 16:25:35.360190] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:08.552 [2024-06-07 16:25:35.373748] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:08.552 [2024-06-07 16:25:35.373763] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:08.552 [2024-06-07 16:25:35.386998] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:08.552 [2024-06-07 16:25:35.387013] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:08.552 [2024-06-07 16:25:35.400563] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:08.552 [2024-06-07 16:25:35.400578] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:08.813 [2024-06-07 16:25:35.413748] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:08.813 [2024-06-07 16:25:35.413764] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:17:08.813 [2024-06-07 16:25:35.426916] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:08.813 [2024-06-07 16:25:35.426931] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:08.813 [2024-06-07 16:25:35.439761] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:08.813 [2024-06-07 16:25:35.439776] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:08.813 [2024-06-07 16:25:35.453254] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:08.813 [2024-06-07 16:25:35.453268] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:08.813 [2024-06-07 16:25:35.466818] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:08.813 [2024-06-07 16:25:35.466832] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:08.813 [2024-06-07 16:25:35.479319] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:08.813 [2024-06-07 16:25:35.479334] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:08.813 [2024-06-07 16:25:35.491803] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:08.813 [2024-06-07 16:25:35.491818] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:08.813 [2024-06-07 16:25:35.504747] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:08.813 [2024-06-07 16:25:35.504762] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:08.813 [2024-06-07 16:25:35.517987] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:08.813 [2024-06-07 16:25:35.518002] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:08.813 [2024-06-07 16:25:35.530688] 
subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:08.813 [2024-06-07 16:25:35.530703] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:08.813 [2024-06-07 16:25:35.544295] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:08.813 [2024-06-07 16:25:35.544311] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:08.813 [2024-06-07 16:25:35.557805] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:08.813 [2024-06-07 16:25:35.557820] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:08.813 [2024-06-07 16:25:35.570469] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:08.813 [2024-06-07 16:25:35.570484] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:08.813 [2024-06-07 16:25:35.583410] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:08.813 [2024-06-07 16:25:35.583425] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:08.813 [2024-06-07 16:25:35.596548] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:08.813 [2024-06-07 16:25:35.596562] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:08.813 [2024-06-07 16:25:35.609761] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:08.813 [2024-06-07 16:25:35.609776] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:08.813 [2024-06-07 16:25:35.622836] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:08.813 [2024-06-07 16:25:35.622850] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:08.813 [2024-06-07 16:25:35.635946] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:17:08.813 [2024-06-07 16:25:35.635962] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:08.813 [2024-06-07 16:25:35.649466] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:08.813 [2024-06-07 16:25:35.649481] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:08.813 [2024-06-07 16:25:35.663124] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:08.813 [2024-06-07 16:25:35.663139] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:09.074 [2024-06-07 16:25:35.676726] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:09.074 [2024-06-07 16:25:35.676741] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:09.074 [2024-06-07 16:25:35.689949] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:09.074 [2024-06-07 16:25:35.689963] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:09.074 [2024-06-07 16:25:35.703243] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:09.074 [2024-06-07 16:25:35.703258] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:09.074 [2024-06-07 16:25:35.715934] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:09.074 [2024-06-07 16:25:35.715949] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:09.074 [2024-06-07 16:25:35.729013] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:09.074 [2024-06-07 16:25:35.729027] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:09.074 [2024-06-07 16:25:35.741815] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:09.074 
[2024-06-07 16:25:35.741830] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:09.074 [2024-06-07 16:25:35.754177] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:09.074 [2024-06-07 16:25:35.754192] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:09.074 [2024-06-07 16:25:35.767310] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:09.074 [2024-06-07 16:25:35.767324] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:09.074 [2024-06-07 16:25:35.780566] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:09.074 [2024-06-07 16:25:35.780581] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:09.074 [2024-06-07 16:25:35.794194] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:09.075 [2024-06-07 16:25:35.794208] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:09.075 [2024-06-07 16:25:35.807470] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:09.075 [2024-06-07 16:25:35.807485] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:09.075 [2024-06-07 16:25:35.820832] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:09.075 [2024-06-07 16:25:35.820847] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:09.075 [2024-06-07 16:25:35.833424] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:09.075 [2024-06-07 16:25:35.833440] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:09.075 [2024-06-07 16:25:35.846467] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:09.075 [2024-06-07 16:25:35.846482] 
nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:09.075 [2024-06-07 16:25:35.859848] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:09.075 [2024-06-07 16:25:35.859864] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:09.075 [2024-06-07 16:25:35.873042] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:09.075 [2024-06-07 16:25:35.873058] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:09.075 [2024-06-07 16:25:35.886791] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:09.075 [2024-06-07 16:25:35.886806] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:09.075 [2024-06-07 16:25:35.899851] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:09.075 [2024-06-07 16:25:35.899867] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:09.075 [2024-06-07 16:25:35.913049] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:09.075 [2024-06-07 16:25:35.913064] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:09.075 [2024-06-07 16:25:35.926766] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:09.075 [2024-06-07 16:25:35.926781] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:09.336 [2024-06-07 16:25:35.939447] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:09.336 [2024-06-07 16:25:35.939462] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:09.336 [2024-06-07 16:25:35.952993] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:09.336 [2024-06-07 16:25:35.953009] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:17:09.336 [2024-06-07 16:25:35.966075] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:09.336 [2024-06-07 16:25:35.966090] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:09.336 [2024-06-07 16:25:35.978773] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:09.336 [2024-06-07 16:25:35.978788] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:09.336 [2024-06-07 16:25:35.992131] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:09.336 [2024-06-07 16:25:35.992146] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:09.336 [2024-06-07 16:25:36.005778] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:09.336 [2024-06-07 16:25:36.005794] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:09.336 [2024-06-07 16:25:36.019298] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:09.336 [2024-06-07 16:25:36.019314] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:09.336 [2024-06-07 16:25:36.032564] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:09.336 [2024-06-07 16:25:36.032579] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:09.336 [2024-06-07 16:25:36.045856] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:09.336 [2024-06-07 16:25:36.045872] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:09.336 [2024-06-07 16:25:36.059374] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:09.336 [2024-06-07 16:25:36.059389] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:09.336 [2024-06-07 16:25:36.072376] 
subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:09.336 [2024-06-07 16:25:36.072391] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:09.336 [2024-06-07 16:25:36.085702] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:09.336 [2024-06-07 16:25:36.085717] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:09.336 [2024-06-07 16:25:36.098700] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:09.336 [2024-06-07 16:25:36.098715] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:09.336 [2024-06-07 16:25:36.112207] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:09.336 [2024-06-07 16:25:36.112223] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:09.336 [2024-06-07 16:25:36.125306] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:09.336 [2024-06-07 16:25:36.125321] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:09.336 [2024-06-07 16:25:36.137978] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:09.336 [2024-06-07 16:25:36.137993] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:09.336 00:17:09.336 Latency(us) 00:17:09.336 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:09.336 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:17:09.336 Nvme1n1 : 5.01 19516.68 152.47 0.00 0.00 6552.09 2498.56 16602.45 00:17:09.336 =================================================================================================================== 00:17:09.336 Total : 19516.68 152.47 0.00 0.00 6552.09 2498.56 16602.45 00:17:09.336 [2024-06-07 16:25:36.147195] 
subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:09.336 [2024-06-07 16:25:36.147210] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:09.336 [2024-06-07 16:25:36.159223] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:09.336 [2024-06-07 16:25:36.159235] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:09.336 [2024-06-07 16:25:36.171258] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:09.336 [2024-06-07 16:25:36.171269] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:09.336 [2024-06-07 16:25:36.183287] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:09.336 [2024-06-07 16:25:36.183300] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:09.597 [2024-06-07 16:25:36.195317] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:09.597 [2024-06-07 16:25:36.195330] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:09.597 [2024-06-07 16:25:36.207346] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:09.597 [2024-06-07 16:25:36.207356] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:09.597 [2024-06-07 16:25:36.219374] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:09.597 [2024-06-07 16:25:36.219383] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:09.597 [2024-06-07 16:25:36.231409] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:09.597 [2024-06-07 16:25:36.231419] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:09.597 [2024-06-07 16:25:36.243439] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:17:09.597 [2024-06-07 16:25:36.243450] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:09.597 [2024-06-07 16:25:36.255468] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:09.597 [2024-06-07 16:25:36.255481] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:09.597 [2024-06-07 16:25:36.267495] subsystem.c:2039:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:09.597 [2024-06-07 16:25:36.267504] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:09.597 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (3067915) - No such process 00:17:09.597 16:25:36 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 3067915 00:17:09.597 16:25:36 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:09.597 16:25:36 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:09.597 16:25:36 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:09.597 16:25:36 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:09.597 16:25:36 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:17:09.597 16:25:36 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:09.597 16:25:36 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:09.597 delay0 00:17:09.597 16:25:36 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:09.597 16:25:36 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:17:09.597 16:25:36 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:09.597 16:25:36 nvmf_tcp.nvmf_zcopy -- 
common/autotest_common.sh@10 -- # set +x 00:17:09.597 16:25:36 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:09.597 16:25:36 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:17:09.597 EAL: No free 2048 kB hugepages reported on node 1 00:17:09.597 [2024-06-07 16:25:36.406606] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:17:17.733 Initializing NVMe Controllers 00:17:17.733 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:17:17.733 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:17:17.733 Initialization complete. Launching workers. 00:17:17.733 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 292, failed: 8923 00:17:17.733 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 9144, failed to submit 71 00:17:17.733 success 9034, unsuccess 110, failed 0 00:17:17.733 16:25:43 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:17:17.733 16:25:43 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:17:17.733 16:25:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:17.733 16:25:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@117 -- # sync 00:17:17.733 16:25:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:17.733 16:25:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e 00:17:17.733 16:25:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:17.733 16:25:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:17.733 rmmod nvme_tcp 00:17:17.733 rmmod nvme_fabrics 00:17:17.733 rmmod nvme_keyring 00:17:17.733 16:25:43 
nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:17.733 16:25:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e 00:17:17.733 16:25:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0 00:17:17.733 16:25:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 3065809 ']' 00:17:17.733 16:25:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 3065809 00:17:17.733 16:25:43 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@949 -- # '[' -z 3065809 ']' 00:17:17.733 16:25:43 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # kill -0 3065809 00:17:17.733 16:25:43 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # uname 00:17:17.733 16:25:43 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:17:17.733 16:25:43 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3065809 00:17:17.733 16:25:43 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:17:17.733 16:25:43 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:17:17.733 16:25:43 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3065809' 00:17:17.733 killing process with pid 3065809 00:17:17.733 16:25:43 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@968 -- # kill 3065809 00:17:17.733 16:25:43 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@973 -- # wait 3065809 00:17:17.733 16:25:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:17.734 16:25:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:17.734 16:25:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:17.734 16:25:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:17.734 16:25:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:17.734 16:25:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:17:17.734 16:25:43 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:17.734 16:25:43 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:18.673 16:25:45 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:18.673 00:17:18.673 real 0m33.440s 00:17:18.673 user 0m44.811s 00:17:18.673 sys 0m10.820s 00:17:18.673 16:25:45 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1125 -- # xtrace_disable 00:17:18.673 16:25:45 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:18.673 ************************************ 00:17:18.673 END TEST nvmf_zcopy 00:17:18.673 ************************************ 00:17:18.673 16:25:45 nvmf_tcp -- nvmf/nvmf.sh@54 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:17:18.673 16:25:45 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:17:18.673 16:25:45 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:17:18.673 16:25:45 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:18.673 ************************************ 00:17:18.673 START TEST nvmf_nmic 00:17:18.673 ************************************ 00:17:18.673 16:25:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:17:18.673 * Looking for test storage... 
00:17:18.934 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:18.934 16:25:45 nvmf_tcp.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:18.934 16:25:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:17:18.934 16:25:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:18.934 16:25:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:18.934 16:25:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:18.934 16:25:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:18.934 16:25:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:18.934 16:25:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:18.934 16:25:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:18.934 16:25:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:18.934 16:25:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:18.934 16:25:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:18.934 16:25:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:18.934 16:25:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:18.934 16:25:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:18.934 16:25:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:18.934 16:25:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:18.934 16:25:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:18.934 16:25:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:18.934 16:25:45 nvmf_tcp.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:18.934 16:25:45 nvmf_tcp.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:18.934 16:25:45 nvmf_tcp.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:18.934 16:25:45 nvmf_tcp.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:18.934 16:25:45 nvmf_tcp.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:18.934 16:25:45 nvmf_tcp.nvmf_nmic -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:18.934 16:25:45 nvmf_tcp.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:17:18.934 16:25:45 nvmf_tcp.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:18.934 16:25:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@47 -- # : 0 00:17:18.934 16:25:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:18.934 16:25:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:18.934 16:25:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:18.934 16:25:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:18.934 16:25:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:18.934 16:25:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:18.934 16:25:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:18.934 16:25:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@51 -- # 
have_pci_nics=0 00:17:18.934 16:25:45 nvmf_tcp.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:18.934 16:25:45 nvmf_tcp.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:18.934 16:25:45 nvmf_tcp.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:17:18.934 16:25:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:18.934 16:25:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:18.934 16:25:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:18.934 16:25:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:18.934 16:25:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:18.934 16:25:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:18.934 16:25:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:18.935 16:25:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:18.935 16:25:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:18.935 16:25:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:18.935 16:25:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@285 -- # xtrace_disable 00:17:18.935 16:25:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:17:25.516 16:25:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:25.516 16:25:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # pci_devs=() 00:17:25.516 16:25:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:25.516 16:25:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:25.516 16:25:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:25.516 16:25:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:25.516 16:25:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # local -A 
pci_drivers 00:17:25.516 16:25:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # net_devs=() 00:17:25.516 16:25:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:25.516 16:25:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # e810=() 00:17:25.516 16:25:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # local -ga e810 00:17:25.516 16:25:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # x722=() 00:17:25.516 16:25:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # local -ga x722 00:17:25.516 16:25:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # mlx=() 00:17:25.516 16:25:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # local -ga mlx 00:17:25.516 16:25:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:25.516 16:25:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:25.516 16:25:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:25.516 16:25:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:25.516 16:25:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:25.516 16:25:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:25.516 16:25:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:25.516 16:25:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:25.516 16:25:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:25.516 16:25:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:25.516 16:25:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:25.516 16:25:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:25.516 16:25:52 
nvmf_tcp.nvmf_nmic -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:25.516 16:25:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:25.516 16:25:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:25.516 16:25:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:25.516 16:25:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:25.516 16:25:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:25.516 16:25:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:17:25.516 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:17:25.516 16:25:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:25.516 16:25:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:25.516 16:25:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:25.516 16:25:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:25.516 16:25:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:25.516 16:25:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:25.516 16:25:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:17:25.516 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:17:25.516 16:25:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:25.516 16:25:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:25.516 16:25:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:25.516 16:25:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:25.516 16:25:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:25.516 16:25:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:25.516 16:25:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ e810 
== e810 ]] 00:17:25.516 16:25:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:25.516 16:25:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:25.516 16:25:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:25.516 16:25:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:25.516 16:25:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:25.516 16:25:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:25.516 16:25:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:25.516 16:25:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:25.516 16:25:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:17:25.516 Found net devices under 0000:4b:00.0: cvl_0_0 00:17:25.516 16:25:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:25.516 16:25:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:25.516 16:25:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:25.516 16:25:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:25.516 16:25:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:25.516 16:25:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:25.516 16:25:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:25.516 16:25:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:25.516 16:25:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:17:25.516 Found net devices under 0000:4b:00.1: cvl_0_1 00:17:25.516 16:25:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:17:25.516 16:25:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:25.516 16:25:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # is_hw=yes 00:17:25.516 16:25:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:25.516 16:25:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:25.516 16:25:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:25.516 16:25:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:25.516 16:25:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:25.516 16:25:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:25.516 16:25:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:25.516 16:25:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:25.516 16:25:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:25.516 16:25:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:25.516 16:25:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:25.516 16:25:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:25.516 16:25:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:25.516 16:25:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:25.516 16:25:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:25.516 16:25:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:25.516 16:25:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:25.516 16:25:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:25.516 16:25:52 
nvmf_tcp.nvmf_nmic -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:25.776 16:25:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:25.776 16:25:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:25.776 16:25:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:25.776 16:25:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:25.776 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:25.776 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.650 ms 00:17:25.776 00:17:25.776 --- 10.0.0.2 ping statistics --- 00:17:25.776 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:25.776 rtt min/avg/max/mdev = 0.650/0.650/0.650/0.000 ms 00:17:25.776 16:25:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:25.776 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:25.776 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.366 ms 00:17:25.776 00:17:25.776 --- 10.0.0.1 ping statistics --- 00:17:25.776 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:25.776 rtt min/avg/max/mdev = 0.366/0.366/0.366/0.000 ms 00:17:25.776 16:25:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:25.776 16:25:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@422 -- # return 0 00:17:25.776 16:25:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:25.776 16:25:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:25.776 16:25:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:25.776 16:25:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:25.776 16:25:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:25.776 16:25:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:25.776 16:25:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:25.776 16:25:52 nvmf_tcp.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:17:25.776 16:25:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:25.776 16:25:52 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@723 -- # xtrace_disable 00:17:25.776 16:25:52 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:17:25.776 16:25:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=3074567 00:17:25.776 16:25:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 3074567 00:17:25.776 16:25:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:25.776 16:25:52 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@830 -- # '[' -z 3074567 ']' 00:17:25.776 16:25:52 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@834 -- # 
local rpc_addr=/var/tmp/spdk.sock 00:17:25.776 16:25:52 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@835 -- # local max_retries=100 00:17:25.776 16:25:52 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:25.776 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:25.776 16:25:52 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@839 -- # xtrace_disable 00:17:25.776 16:25:52 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:17:25.776 [2024-06-07 16:25:52.609135] Starting SPDK v24.09-pre git sha1 5a57befde / DPDK 24.03.0 initialization... 00:17:25.776 [2024-06-07 16:25:52.609198] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:26.036 EAL: No free 2048 kB hugepages reported on node 1 00:17:26.036 [2024-06-07 16:25:52.680307] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:26.036 [2024-06-07 16:25:52.756095] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:26.036 [2024-06-07 16:25:52.756134] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:26.036 [2024-06-07 16:25:52.756142] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:26.036 [2024-06-07 16:25:52.756148] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:26.036 [2024-06-07 16:25:52.756154] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:26.036 [2024-06-07 16:25:52.756296] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:17:26.036 [2024-06-07 16:25:52.756425] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 2 00:17:26.036 [2024-06-07 16:25:52.756530] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:17:26.036 [2024-06-07 16:25:52.756531] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 3 00:17:26.606 16:25:53 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:17:26.606 16:25:53 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@863 -- # return 0 00:17:26.606 16:25:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:26.606 16:25:53 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@729 -- # xtrace_disable 00:17:26.606 16:25:53 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:17:26.606 16:25:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:26.606 16:25:53 nvmf_tcp.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:26.606 16:25:53 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:26.606 16:25:53 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:17:26.606 [2024-06-07 16:25:53.433958] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:26.606 16:25:53 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:26.606 16:25:53 nvmf_tcp.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:26.606 16:25:53 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:26.606 16:25:53 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:17:26.869 Malloc0 00:17:26.869 16:25:53 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:26.869 16:25:53 nvmf_tcp.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:17:26.869 16:25:53 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:26.869 16:25:53 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:17:26.869 16:25:53 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:26.869 16:25:53 nvmf_tcp.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:26.869 16:25:53 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:26.869 16:25:53 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:17:26.869 16:25:53 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:26.869 16:25:53 nvmf_tcp.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:26.869 16:25:53 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:26.869 16:25:53 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:17:26.869 [2024-06-07 16:25:53.493390] tcp.c: 982:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:26.869 16:25:53 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:26.869 16:25:53 nvmf_tcp.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:17:26.869 test case1: single bdev can't be used in multiple subsystems 00:17:26.869 16:25:53 nvmf_tcp.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:17:26.869 16:25:53 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:26.869 16:25:53 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:17:26.869 16:25:53 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:26.869 16:25:53 nvmf_tcp.nvmf_nmic -- target/nmic.sh@27 -- # 
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:17:26.869 16:25:53 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:26.869 16:25:53 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:17:26.869 16:25:53 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:26.869 16:25:53 nvmf_tcp.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:17:26.869 16:25:53 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:17:26.869 16:25:53 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:26.869 16:25:53 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:17:26.869 [2024-06-07 16:25:53.529357] bdev.c:8035:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:17:26.869 [2024-06-07 16:25:53.529379] subsystem.c:2068:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:17:26.869 [2024-06-07 16:25:53.529386] nvmf_rpc.c:1548:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:26.869 request: 00:17:26.869 { 00:17:26.869 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:17:26.869 "namespace": { 00:17:26.869 "bdev_name": "Malloc0", 00:17:26.869 "no_auto_visible": false 00:17:26.869 }, 00:17:26.869 "method": "nvmf_subsystem_add_ns", 00:17:26.869 "req_id": 1 00:17:26.869 } 00:17:26.869 Got JSON-RPC error response 00:17:26.869 response: 00:17:26.869 { 00:17:26.869 "code": -32602, 00:17:26.869 "message": "Invalid parameters" 00:17:26.870 } 00:17:26.870 16:25:53 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:17:26.870 16:25:53 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:17:26.870 16:25:53 nvmf_tcp.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:17:26.870 16:25:53 nvmf_tcp.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding 
namespace failed - expected result.' 00:17:26.870 Adding namespace failed - expected result. 00:17:26.870 16:25:53 nvmf_tcp.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:17:26.870 test case2: host connect to nvmf target in multiple paths 00:17:26.870 16:25:53 nvmf_tcp.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:17:26.870 16:25:53 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:26.870 16:25:53 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:17:26.870 [2024-06-07 16:25:53.541482] tcp.c: 982:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:17:26.870 16:25:53 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:26.870 16:25:53 nvmf_tcp.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:28.252 16:25:55 nvmf_tcp.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:17:30.164 16:25:56 nvmf_tcp.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:17:30.164 16:25:56 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1197 -- # local i=0 00:17:30.164 16:25:56 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:17:30.164 16:25:56 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:17:30.164 16:25:56 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1204 -- # sleep 2 00:17:32.159 16:25:58 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:17:32.159 16:25:58 
nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:17:32.159 16:25:58 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:17:32.159 16:25:58 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:17:32.159 16:25:58 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:17:32.159 16:25:58 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # return 0 00:17:32.159 16:25:58 nvmf_tcp.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:17:32.159 [global] 00:17:32.159 thread=1 00:17:32.159 invalidate=1 00:17:32.159 rw=write 00:17:32.159 time_based=1 00:17:32.159 runtime=1 00:17:32.159 ioengine=libaio 00:17:32.159 direct=1 00:17:32.159 bs=4096 00:17:32.159 iodepth=1 00:17:32.159 norandommap=0 00:17:32.159 numjobs=1 00:17:32.159 00:17:32.159 verify_dump=1 00:17:32.159 verify_backlog=512 00:17:32.159 verify_state_save=0 00:17:32.159 do_verify=1 00:17:32.159 verify=crc32c-intel 00:17:32.159 [job0] 00:17:32.159 filename=/dev/nvme0n1 00:17:32.159 Could not set queue depth (nvme0n1) 00:17:32.421 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:32.421 fio-3.35 00:17:32.421 Starting 1 thread 00:17:33.805 00:17:33.805 job0: (groupid=0, jobs=1): err= 0: pid=3075954: Fri Jun 7 16:26:00 2024 00:17:33.805 read: IOPS=496, BW=1986KiB/s (2034kB/s)(1988KiB/1001msec) 00:17:33.805 slat (nsec): min=6308, max=64403, avg=26485.62, stdev=4052.13 00:17:33.805 clat (usec): min=947, max=1846, avg=1186.66, stdev=72.16 00:17:33.805 lat (usec): min=973, max=1881, avg=1213.15, stdev=72.38 00:17:33.805 clat percentiles (usec): 00:17:33.805 | 1.00th=[ 1004], 5.00th=[ 1074], 10.00th=[ 1106], 20.00th=[ 1139], 00:17:33.805 | 30.00th=[ 1156], 40.00th=[ 1172], 50.00th=[ 1188], 60.00th=[ 1205], 00:17:33.805 | 
70.00th=[ 1221], 80.00th=[ 1237], 90.00th=[ 1254], 95.00th=[ 1270], 00:17:33.805 | 99.00th=[ 1336], 99.50th=[ 1483], 99.90th=[ 1844], 99.95th=[ 1844], 00:17:33.805 | 99.99th=[ 1844] 00:17:33.805 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:17:33.805 slat (nsec): min=9067, max=73001, avg=29958.32, stdev=9580.12 00:17:33.805 clat (usec): min=358, max=949, avg=729.46, stdev=90.77 00:17:33.805 lat (usec): min=369, max=981, avg=759.42, stdev=95.68 00:17:33.805 clat percentiles (usec): 00:17:33.805 | 1.00th=[ 482], 5.00th=[ 562], 10.00th=[ 611], 20.00th=[ 660], 00:17:33.805 | 30.00th=[ 693], 40.00th=[ 709], 50.00th=[ 734], 60.00th=[ 758], 00:17:33.805 | 70.00th=[ 783], 80.00th=[ 807], 90.00th=[ 840], 95.00th=[ 865], 00:17:33.805 | 99.00th=[ 898], 99.50th=[ 914], 99.90th=[ 947], 99.95th=[ 947], 00:17:33.805 | 99.99th=[ 947] 00:17:33.805 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:17:33.805 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:17:33.805 lat (usec) : 500=0.69%, 750=28.15%, 1000=22.30% 00:17:33.805 lat (msec) : 2=48.86% 00:17:33.805 cpu : usr=2.20%, sys=3.80%, ctx=1009, majf=0, minf=1 00:17:33.805 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:33.805 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:33.805 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:33.805 issued rwts: total=497,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:33.805 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:33.805 00:17:33.805 Run status group 0 (all jobs): 00:17:33.805 READ: bw=1986KiB/s (2034kB/s), 1986KiB/s-1986KiB/s (2034kB/s-2034kB/s), io=1988KiB (2036kB), run=1001-1001msec 00:17:33.805 WRITE: bw=2046KiB/s (2095kB/s), 2046KiB/s-2046KiB/s (2095kB/s-2095kB/s), io=2048KiB (2097kB), run=1001-1001msec 00:17:33.805 00:17:33.805 Disk stats (read/write): 00:17:33.805 nvme0n1: ios=464/512, 
merge=0/0, ticks=510/316, in_queue=826, util=93.39% 00:17:33.805 16:26:00 nvmf_tcp.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:33.805 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:17:33.805 16:26:00 nvmf_tcp.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:33.805 16:26:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1218 -- # local i=0 00:17:33.805 16:26:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:17:33.805 16:26:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1219 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:33.805 16:26:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:17:33.805 16:26:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1226 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:33.805 16:26:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1230 -- # return 0 00:17:33.805 16:26:00 nvmf_tcp.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:17:33.805 16:26:00 nvmf_tcp.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:17:33.805 16:26:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:33.805 16:26:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@117 -- # sync 00:17:33.805 16:26:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:33.805 16:26:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:17:33.805 16:26:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:33.806 16:26:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:33.806 rmmod nvme_tcp 00:17:33.806 rmmod nvme_fabrics 00:17:33.806 rmmod nvme_keyring 00:17:33.806 16:26:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:33.806 16:26:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:17:33.806 16:26:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:17:33.806 16:26:00 
nvmf_tcp.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 3074567 ']' 00:17:33.806 16:26:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 3074567 00:17:33.806 16:26:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@949 -- # '[' -z 3074567 ']' 00:17:33.806 16:26:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # kill -0 3074567 00:17:33.806 16:26:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # uname 00:17:33.806 16:26:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:17:33.806 16:26:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3074567 00:17:33.806 16:26:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:17:33.806 16:26:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:17:33.806 16:26:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3074567' 00:17:33.806 killing process with pid 3074567 00:17:33.806 16:26:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@968 -- # kill 3074567 00:17:33.806 16:26:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@973 -- # wait 3074567 00:17:34.067 16:26:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:34.067 16:26:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:34.067 16:26:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:34.067 16:26:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:34.067 16:26:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:34.067 16:26:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:34.067 16:26:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:34.067 16:26:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:35.981 16:26:02 
nvmf_tcp.nvmf_nmic -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:35.981 00:17:35.981 real 0m17.317s 00:17:35.981 user 0m51.066s 00:17:35.981 sys 0m6.014s 00:17:35.981 16:26:02 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1125 -- # xtrace_disable 00:17:35.981 16:26:02 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:17:35.981 ************************************ 00:17:35.981 END TEST nvmf_nmic 00:17:35.981 ************************************ 00:17:35.981 16:26:02 nvmf_tcp -- nvmf/nvmf.sh@55 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:17:35.981 16:26:02 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:17:35.981 16:26:02 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:17:35.981 16:26:02 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:36.254 ************************************ 00:17:36.254 START TEST nvmf_fio_target 00:17:36.254 ************************************ 00:17:36.254 16:26:02 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:17:36.254 * Looking for test storage... 
00:17:36.254 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:36.254 16:26:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:36.254 16:26:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:17:36.254 16:26:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:36.254 16:26:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:36.254 16:26:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:36.254 16:26:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:36.254 16:26:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:36.254 16:26:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:36.254 16:26:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:36.254 16:26:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:36.254 16:26:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:36.254 16:26:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:36.254 16:26:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:36.254 16:26:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:36.254 16:26:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:36.254 16:26:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:36.254 16:26:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:36.254 16:26:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:36.254 16:26:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:36.254 16:26:02 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:36.254 16:26:02 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:36.254 16:26:02 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:36.254 16:26:02 nvmf_tcp.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:36.254 16:26:02 nvmf_tcp.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:36.254 16:26:02 nvmf_tcp.nvmf_fio_target -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:36.254 16:26:02 nvmf_tcp.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:17:36.254 16:26:02 nvmf_tcp.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:36.254 16:26:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:17:36.254 16:26:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:36.254 16:26:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:36.254 16:26:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:36.254 16:26:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:36.254 16:26:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:36.254 16:26:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:36.254 16:26:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 
00:17:36.254 16:26:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:36.254 16:26:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:36.254 16:26:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:36.254 16:26:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:36.254 16:26:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:17:36.254 16:26:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:36.254 16:26:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:36.254 16:26:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:36.254 16:26:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:36.254 16:26:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:36.255 16:26:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:36.255 16:26:02 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:36.255 16:26:02 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:36.255 16:26:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:36.255 16:26:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:36.255 16:26:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@285 -- # xtrace_disable 00:17:36.255 16:26:02 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.843 16:26:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:42.843 16:26:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # pci_devs=() 00:17:42.843 16:26:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:42.843 
16:26:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:42.843 16:26:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:42.843 16:26:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:42.843 16:26:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:42.843 16:26:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # net_devs=() 00:17:42.843 16:26:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:42.843 16:26:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # e810=() 00:17:42.843 16:26:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # local -ga e810 00:17:42.843 16:26:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # x722=() 00:17:42.843 16:26:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # local -ga x722 00:17:42.843 16:26:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # mlx=() 00:17:42.843 16:26:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # local -ga mlx 00:17:42.843 16:26:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:42.843 16:26:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:42.843 16:26:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:42.843 16:26:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:42.843 16:26:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:42.843 16:26:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:42.843 16:26:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:42.843 16:26:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:42.843 
16:26:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:42.843 16:26:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:42.843 16:26:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:42.843 16:26:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:42.843 16:26:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:42.843 16:26:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:42.843 16:26:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:42.843 16:26:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:42.843 16:26:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:42.843 16:26:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:42.843 16:26:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:17:42.843 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:17:42.843 16:26:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:42.843 16:26:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:42.843 16:26:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:42.843 16:26:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:42.843 16:26:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:42.843 16:26:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:42.843 16:26:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:17:42.843 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:17:42.843 16:26:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 
-- # [[ ice == unknown ]] 00:17:42.843 16:26:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:42.843 16:26:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:42.843 16:26:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:42.843 16:26:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:42.843 16:26:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:42.843 16:26:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:42.843 16:26:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:42.843 16:26:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:42.843 16:26:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:42.843 16:26:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:42.843 16:26:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:42.843 16:26:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:42.843 16:26:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:42.843 16:26:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:42.843 16:26:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:17:42.843 Found net devices under 0000:4b:00.0: cvl_0_0 00:17:42.843 16:26:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:42.843 16:26:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:42.843 16:26:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:42.843 16:26:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # 
[[ tcp == tcp ]] 00:17:42.843 16:26:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:42.843 16:26:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:42.843 16:26:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:42.843 16:26:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:42.843 16:26:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:17:42.843 Found net devices under 0000:4b:00.1: cvl_0_1 00:17:42.843 16:26:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:42.843 16:26:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:42.843 16:26:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # is_hw=yes 00:17:42.843 16:26:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:42.843 16:26:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:42.843 16:26:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:42.843 16:26:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:42.843 16:26:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:42.843 16:26:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:42.843 16:26:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:42.843 16:26:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:42.843 16:26:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:42.843 16:26:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:42.843 16:26:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:42.843 16:26:09 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:42.843 16:26:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:42.843 16:26:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:42.843 16:26:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:42.843 16:26:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:43.104 16:26:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:43.104 16:26:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:43.104 16:26:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:43.104 16:26:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:43.104 16:26:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:43.104 16:26:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:43.104 16:26:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:43.365 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:43.365 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.742 ms 00:17:43.365 00:17:43.365 --- 10.0.0.2 ping statistics --- 00:17:43.365 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:43.365 rtt min/avg/max/mdev = 0.742/0.742/0.742/0.000 ms 00:17:43.365 16:26:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:43.365 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:43.365 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.336 ms 00:17:43.365 00:17:43.365 --- 10.0.0.1 ping statistics --- 00:17:43.365 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:43.365 rtt min/avg/max/mdev = 0.336/0.336/0.336/0.000 ms 00:17:43.365 16:26:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:43.365 16:26:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@422 -- # return 0 00:17:43.365 16:26:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:43.365 16:26:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:43.365 16:26:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:43.365 16:26:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:43.365 16:26:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:43.365 16:26:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:43.365 16:26:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:43.365 16:26:10 nvmf_tcp.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:17:43.365 16:26:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:43.365 16:26:10 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@723 -- # xtrace_disable 00:17:43.366 16:26:10 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:17:43.366 16:26:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:43.366 16:26:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=3080438 00:17:43.366 16:26:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 3080438 00:17:43.366 16:26:10 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@830 
-- # '[' -z 3080438 ']' 00:17:43.366 16:26:10 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:43.366 16:26:10 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@835 -- # local max_retries=100 00:17:43.366 16:26:10 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:43.366 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:43.366 16:26:10 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@839 -- # xtrace_disable 00:17:43.366 16:26:10 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:17:43.366 [2024-06-07 16:26:10.065595] Starting SPDK v24.09-pre git sha1 5a57befde / DPDK 24.03.0 initialization... 00:17:43.366 [2024-06-07 16:26:10.065657] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:43.366 EAL: No free 2048 kB hugepages reported on node 1 00:17:43.366 [2024-06-07 16:26:10.133880] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:43.366 [2024-06-07 16:26:10.200054] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:43.366 [2024-06-07 16:26:10.200092] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:43.366 [2024-06-07 16:26:10.200099] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:43.366 [2024-06-07 16:26:10.200106] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:43.366 [2024-06-07 16:26:10.200111] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:43.366 [2024-06-07 16:26:10.200289] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:17:43.366 [2024-06-07 16:26:10.200412] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 2 00:17:43.366 [2024-06-07 16:26:10.200520] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:17:43.366 [2024-06-07 16:26:10.200521] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 3 00:17:44.308 16:26:10 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:17:44.308 16:26:10 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@863 -- # return 0 00:17:44.308 16:26:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:44.308 16:26:10 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@729 -- # xtrace_disable 00:17:44.308 16:26:10 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:17:44.308 16:26:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:44.308 16:26:10 nvmf_tcp.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:17:44.308 [2024-06-07 16:26:11.036467] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:44.308 16:26:11 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:44.568 16:26:11 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:17:44.568 16:26:11 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:44.829 16:26:11 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:17:44.829 16:26:11 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_malloc_create 64 512 00:17:44.829 16:26:11 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:17:44.829 16:26:11 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:45.090 16:26:11 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:17:45.090 16:26:11 nvmf_tcp.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:17:45.350 16:26:11 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:45.350 16:26:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:17:45.350 16:26:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:45.611 16:26:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:17:45.611 16:26:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:45.871 16:26:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:17:45.871 16:26:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:17:45.871 16:26:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:17:46.133 16:26:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:17:46.133 16:26:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:46.394 16:26:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:17:46.394 16:26:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:46.394 16:26:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:46.655 [2024-06-07 16:26:13.297731] tcp.c: 982:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:46.655 16:26:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:17:46.655 16:26:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:17:46.915 16:26:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:48.828 16:26:15 nvmf_tcp.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:17:48.828 16:26:15 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1197 -- # local i=0 00:17:48.828 16:26:15 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:17:48.828 16:26:15 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1199 -- # [[ -n 4 ]] 00:17:48.828 16:26:15 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1200 -- # nvme_device_counter=4 00:17:48.828 16:26:15 
nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1204 -- # sleep 2 00:17:50.772 16:26:17 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:17:50.772 16:26:17 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:17:50.772 16:26:17 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:17:50.772 16:26:17 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1206 -- # nvme_devices=4 00:17:50.772 16:26:17 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:17:50.772 16:26:17 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # return 0 00:17:50.772 16:26:17 nvmf_tcp.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:17:50.772 [global] 00:17:50.772 thread=1 00:17:50.772 invalidate=1 00:17:50.772 rw=write 00:17:50.772 time_based=1 00:17:50.772 runtime=1 00:17:50.772 ioengine=libaio 00:17:50.772 direct=1 00:17:50.772 bs=4096 00:17:50.772 iodepth=1 00:17:50.772 norandommap=0 00:17:50.772 numjobs=1 00:17:50.772 00:17:50.772 verify_dump=1 00:17:50.772 verify_backlog=512 00:17:50.772 verify_state_save=0 00:17:50.772 do_verify=1 00:17:50.772 verify=crc32c-intel 00:17:50.772 [job0] 00:17:50.772 filename=/dev/nvme0n1 00:17:50.772 [job1] 00:17:50.772 filename=/dev/nvme0n2 00:17:50.772 [job2] 00:17:50.772 filename=/dev/nvme0n3 00:17:50.772 [job3] 00:17:50.772 filename=/dev/nvme0n4 00:17:50.772 Could not set queue depth (nvme0n1) 00:17:50.772 Could not set queue depth (nvme0n2) 00:17:50.772 Could not set queue depth (nvme0n3) 00:17:50.772 Could not set queue depth (nvme0n4) 00:17:51.039 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:51.039 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, 
iodepth=1 00:17:51.039 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:51.039 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:51.039 fio-3.35 00:17:51.039 Starting 4 threads 00:17:52.447 00:17:52.447 job0: (groupid=0, jobs=1): err= 0: pid=3082035: Fri Jun 7 16:26:18 2024 00:17:52.447 read: IOPS=66, BW=266KiB/s (272kB/s)(272KiB/1024msec) 00:17:52.447 slat (nsec): min=24442, max=44255, avg=26152.62, stdev=3142.68 00:17:52.447 clat (usec): min=905, max=42129, avg=9509.05, stdev=16555.29 00:17:52.447 lat (usec): min=931, max=42153, avg=9535.20, stdev=16554.83 00:17:52.447 clat percentiles (usec): 00:17:52.447 | 1.00th=[ 906], 5.00th=[ 947], 10.00th=[ 1029], 20.00th=[ 1106], 00:17:52.447 | 30.00th=[ 1139], 40.00th=[ 1156], 50.00th=[ 1172], 60.00th=[ 1188], 00:17:52.447 | 70.00th=[ 1254], 80.00th=[41157], 90.00th=[41681], 95.00th=[42206], 00:17:52.447 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:17:52.447 | 99.99th=[42206] 00:17:52.447 write: IOPS=500, BW=2000KiB/s (2048kB/s)(2048KiB/1024msec); 0 zone resets 00:17:52.447 slat (usec): min=9, max=1661, avg=36.15, stdev=76.39 00:17:52.447 clat (usec): min=251, max=1111, avg=688.24, stdev=130.56 00:17:52.447 lat (usec): min=263, max=2659, avg=724.39, stdev=161.37 00:17:52.447 clat percentiles (usec): 00:17:52.447 | 1.00th=[ 379], 5.00th=[ 490], 10.00th=[ 529], 20.00th=[ 586], 00:17:52.447 | 30.00th=[ 611], 40.00th=[ 652], 50.00th=[ 685], 60.00th=[ 725], 00:17:52.447 | 70.00th=[ 758], 80.00th=[ 799], 90.00th=[ 857], 95.00th=[ 906], 00:17:52.447 | 99.00th=[ 996], 99.50th=[ 1020], 99.90th=[ 1106], 99.95th=[ 1106], 00:17:52.447 | 99.99th=[ 1106] 00:17:52.447 bw ( KiB/s): min= 4096, max= 4096, per=50.83%, avg=4096.00, stdev= 0.00, samples=1 00:17:52.447 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:17:52.447 lat (usec) : 500=5.17%, 750=54.31%, 1000=29.14% 
00:17:52.447 lat (msec) : 2=8.97%, 50=2.41% 00:17:52.447 cpu : usr=0.49%, sys=2.05%, ctx=583, majf=0, minf=1 00:17:52.447 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:52.447 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:52.447 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:52.447 issued rwts: total=68,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:52.447 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:52.447 job1: (groupid=0, jobs=1): err= 0: pid=3082036: Fri Jun 7 16:26:18 2024 00:17:52.447 read: IOPS=302, BW=1209KiB/s (1238kB/s)(1228KiB/1016msec) 00:17:52.447 slat (nsec): min=15478, max=57214, avg=25827.21, stdev=3233.02 00:17:52.447 clat (usec): min=537, max=42114, avg=2032.91, stdev=6078.16 00:17:52.447 lat (usec): min=563, max=42139, avg=2058.74, stdev=6078.05 00:17:52.447 clat percentiles (usec): 00:17:52.447 | 1.00th=[ 832], 5.00th=[ 906], 10.00th=[ 979], 20.00th=[ 1045], 00:17:52.447 | 30.00th=[ 1074], 40.00th=[ 1106], 50.00th=[ 1123], 60.00th=[ 1156], 00:17:52.447 | 70.00th=[ 1172], 80.00th=[ 1188], 90.00th=[ 1221], 95.00th=[ 1254], 00:17:52.447 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:17:52.447 | 99.99th=[42206] 00:17:52.447 write: IOPS=503, BW=2016KiB/s (2064kB/s)(2048KiB/1016msec); 0 zone resets 00:17:52.447 slat (nsec): min=9658, max=64792, avg=29834.42, stdev=9053.11 00:17:52.447 clat (usec): min=355, max=1029, avg=705.76, stdev=117.69 00:17:52.447 lat (usec): min=371, max=1062, avg=735.59, stdev=121.17 00:17:52.447 clat percentiles (usec): 00:17:52.447 | 1.00th=[ 371], 5.00th=[ 494], 10.00th=[ 562], 20.00th=[ 619], 00:17:52.447 | 30.00th=[ 652], 40.00th=[ 685], 50.00th=[ 717], 60.00th=[ 742], 00:17:52.447 | 70.00th=[ 766], 80.00th=[ 807], 90.00th=[ 857], 95.00th=[ 898], 00:17:52.447 | 99.00th=[ 938], 99.50th=[ 955], 99.90th=[ 1029], 99.95th=[ 1029], 00:17:52.447 | 99.99th=[ 1029] 00:17:52.447 bw ( KiB/s): 
min= 4096, max= 4096, per=50.83%, avg=4096.00, stdev= 0.00, samples=1 00:17:52.447 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:17:52.447 lat (usec) : 500=3.42%, 750=37.36%, 1000=26.01% 00:17:52.447 lat (msec) : 2=32.36%, 50=0.85% 00:17:52.447 cpu : usr=1.08%, sys=2.46%, ctx=820, majf=0, minf=1 00:17:52.447 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:52.447 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:52.447 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:52.447 issued rwts: total=307,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:52.447 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:52.447 job2: (groupid=0, jobs=1): err= 0: pid=3082038: Fri Jun 7 16:26:18 2024 00:17:52.447 read: IOPS=506, BW=2026KiB/s (2075kB/s)(2028KiB/1001msec) 00:17:52.447 slat (nsec): min=7626, max=57606, avg=26068.52, stdev=2987.30 00:17:52.447 clat (usec): min=849, max=1568, avg=1128.49, stdev=74.80 00:17:52.447 lat (usec): min=875, max=1593, avg=1154.56, stdev=74.69 00:17:52.447 clat percentiles (usec): 00:17:52.447 | 1.00th=[ 898], 5.00th=[ 979], 10.00th=[ 1045], 20.00th=[ 1090], 00:17:52.447 | 30.00th=[ 1106], 40.00th=[ 1123], 50.00th=[ 1139], 60.00th=[ 1156], 00:17:52.447 | 70.00th=[ 1156], 80.00th=[ 1188], 90.00th=[ 1205], 95.00th=[ 1221], 00:17:52.447 | 99.00th=[ 1270], 99.50th=[ 1303], 99.90th=[ 1565], 99.95th=[ 1565], 00:17:52.447 | 99.99th=[ 1565] 00:17:52.447 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:17:52.447 slat (nsec): min=10209, max=53615, avg=31435.42, stdev=9256.38 00:17:52.447 clat (usec): min=338, max=1005, avg=762.79, stdev=93.85 00:17:52.447 lat (usec): min=351, max=1055, avg=794.23, stdev=97.98 00:17:52.447 clat percentiles (usec): 00:17:52.447 | 1.00th=[ 486], 5.00th=[ 603], 10.00th=[ 635], 20.00th=[ 701], 00:17:52.447 | 30.00th=[ 717], 40.00th=[ 742], 50.00th=[ 766], 60.00th=[ 799], 00:17:52.447 | 
70.00th=[ 824], 80.00th=[ 840], 90.00th=[ 865], 95.00th=[ 889], 00:17:52.447 | 99.00th=[ 947], 99.50th=[ 947], 99.90th=[ 1004], 99.95th=[ 1004], 00:17:52.448 | 99.99th=[ 1004] 00:17:52.448 bw ( KiB/s): min= 4096, max= 4096, per=50.83%, avg=4096.00, stdev= 0.00, samples=1 00:17:52.448 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:17:52.448 lat (usec) : 500=0.88%, 750=20.80%, 1000=31.50% 00:17:52.448 lat (msec) : 2=46.81% 00:17:52.448 cpu : usr=1.80%, sys=2.80%, ctx=1020, majf=0, minf=1 00:17:52.448 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:52.448 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:52.448 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:52.448 issued rwts: total=507,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:52.448 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:52.448 job3: (groupid=0, jobs=1): err= 0: pid=3082039: Fri Jun 7 16:26:18 2024 00:17:52.448 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:17:52.448 slat (nsec): min=7091, max=57500, avg=26449.39, stdev=3117.44 00:17:52.448 clat (usec): min=878, max=1301, avg=1106.18, stdev=66.18 00:17:52.448 lat (usec): min=904, max=1327, avg=1132.63, stdev=66.53 00:17:52.448 clat percentiles (usec): 00:17:52.448 | 1.00th=[ 906], 5.00th=[ 988], 10.00th=[ 1020], 20.00th=[ 1057], 00:17:52.448 | 30.00th=[ 1074], 40.00th=[ 1090], 50.00th=[ 1106], 60.00th=[ 1123], 00:17:52.448 | 70.00th=[ 1139], 80.00th=[ 1156], 90.00th=[ 1188], 95.00th=[ 1205], 00:17:52.448 | 99.00th=[ 1254], 99.50th=[ 1254], 99.90th=[ 1303], 99.95th=[ 1303], 00:17:52.448 | 99.99th=[ 1303] 00:17:52.448 write: IOPS=526, BW=2106KiB/s (2156kB/s)(2108KiB/1001msec); 0 zone resets 00:17:52.448 slat (nsec): min=9368, max=57106, avg=31077.98, stdev=9383.17 00:17:52.448 clat (usec): min=406, max=1212, avg=749.75, stdev=112.10 00:17:52.448 lat (usec): min=418, max=1248, avg=780.82, stdev=115.39 00:17:52.448 
clat percentiles (usec): 00:17:52.448 | 1.00th=[ 486], 5.00th=[ 570], 10.00th=[ 611], 20.00th=[ 668], 00:17:52.448 | 30.00th=[ 693], 40.00th=[ 725], 50.00th=[ 750], 60.00th=[ 783], 00:17:52.448 | 70.00th=[ 807], 80.00th=[ 840], 90.00th=[ 881], 95.00th=[ 914], 00:17:52.448 | 99.00th=[ 1020], 99.50th=[ 1106], 99.90th=[ 1221], 99.95th=[ 1221], 00:17:52.448 | 99.99th=[ 1221] 00:17:52.448 bw ( KiB/s): min= 4096, max= 4096, per=50.83%, avg=4096.00, stdev= 0.00, samples=1 00:17:52.448 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:17:52.448 lat (usec) : 500=1.06%, 750=24.35%, 1000=28.30% 00:17:52.448 lat (msec) : 2=46.29% 00:17:52.448 cpu : usr=2.40%, sys=3.80%, ctx=1042, majf=0, minf=1 00:17:52.448 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:52.448 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:52.448 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:52.448 issued rwts: total=512,527,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:52.448 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:52.448 00:17:52.448 Run status group 0 (all jobs): 00:17:52.448 READ: bw=5445KiB/s (5576kB/s), 266KiB/s-2046KiB/s (272kB/s-2095kB/s), io=5576KiB (5710kB), run=1001-1024msec 00:17:52.448 WRITE: bw=8059KiB/s (8252kB/s), 2000KiB/s-2106KiB/s (2048kB/s-2156kB/s), io=8252KiB (8450kB), run=1001-1024msec 00:17:52.448 00:17:52.448 Disk stats (read/write): 00:17:52.448 nvme0n1: ios=107/512, merge=0/0, ticks=490/340, in_queue=830, util=86.77% 00:17:52.448 nvme0n2: ios=204/512, merge=0/0, ticks=1321/356, in_queue=1677, util=88.06% 00:17:52.448 nvme0n3: ios=388/512, merge=0/0, ticks=1271/367, in_queue=1638, util=92.19% 00:17:52.448 nvme0n4: ios=400/512, merge=0/0, ticks=1259/319, in_queue=1578, util=94.23% 00:17:52.448 16:26:18 nvmf_tcp.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite 
-r 1 -v 00:17:52.448 [global] 00:17:52.448 thread=1 00:17:52.448 invalidate=1 00:17:52.448 rw=randwrite 00:17:52.448 time_based=1 00:17:52.448 runtime=1 00:17:52.448 ioengine=libaio 00:17:52.448 direct=1 00:17:52.448 bs=4096 00:17:52.448 iodepth=1 00:17:52.448 norandommap=0 00:17:52.448 numjobs=1 00:17:52.448 00:17:52.448 verify_dump=1 00:17:52.448 verify_backlog=512 00:17:52.448 verify_state_save=0 00:17:52.448 do_verify=1 00:17:52.448 verify=crc32c-intel 00:17:52.448 [job0] 00:17:52.448 filename=/dev/nvme0n1 00:17:52.448 [job1] 00:17:52.448 filename=/dev/nvme0n2 00:17:52.448 [job2] 00:17:52.448 filename=/dev/nvme0n3 00:17:52.448 [job3] 00:17:52.448 filename=/dev/nvme0n4 00:17:52.448 Could not set queue depth (nvme0n1) 00:17:52.448 Could not set queue depth (nvme0n2) 00:17:52.448 Could not set queue depth (nvme0n3) 00:17:52.448 Could not set queue depth (nvme0n4) 00:17:52.761 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:52.761 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:52.761 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:52.761 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:52.761 fio-3.35 00:17:52.761 Starting 4 threads 00:17:54.188 00:17:54.188 job0: (groupid=0, jobs=1): err= 0: pid=3082557: Fri Jun 7 16:26:20 2024 00:17:54.188 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:17:54.189 slat (nsec): min=6235, max=61680, avg=26165.08, stdev=3669.13 00:17:54.189 clat (usec): min=779, max=2844, avg=1165.70, stdev=149.18 00:17:54.189 lat (usec): min=805, max=2874, avg=1191.87, stdev=149.11 00:17:54.189 clat percentiles (usec): 00:17:54.189 | 1.00th=[ 881], 5.00th=[ 963], 10.00th=[ 1037], 20.00th=[ 1106], 00:17:54.189 | 30.00th=[ 1123], 40.00th=[ 1156], 50.00th=[ 1172], 60.00th=[ 1188], 
00:17:54.189 | 70.00th=[ 1205], 80.00th=[ 1221], 90.00th=[ 1254], 95.00th=[ 1287], 00:17:54.189 | 99.00th=[ 1811], 99.50th=[ 2245], 99.90th=[ 2835], 99.95th=[ 2835], 00:17:54.189 | 99.99th=[ 2835] 00:17:54.189 write: IOPS=523, BW=2094KiB/s (2144kB/s)(2096KiB/1001msec); 0 zone resets 00:17:54.189 slat (nsec): min=8495, max=51155, avg=28761.50, stdev=8912.96 00:17:54.189 clat (usec): min=358, max=1003, avg=699.47, stdev=120.93 00:17:54.189 lat (usec): min=390, max=1053, avg=728.23, stdev=123.65 00:17:54.189 clat percentiles (usec): 00:17:54.189 | 1.00th=[ 396], 5.00th=[ 498], 10.00th=[ 537], 20.00th=[ 594], 00:17:54.189 | 30.00th=[ 635], 40.00th=[ 668], 50.00th=[ 693], 60.00th=[ 725], 00:17:54.189 | 70.00th=[ 766], 80.00th=[ 807], 90.00th=[ 865], 95.00th=[ 889], 00:17:54.189 | 99.00th=[ 947], 99.50th=[ 971], 99.90th=[ 1004], 99.95th=[ 1004], 00:17:54.189 | 99.99th=[ 1004] 00:17:54.189 bw ( KiB/s): min= 4096, max= 4096, per=50.85%, avg=4096.00, stdev= 0.00, samples=1 00:17:54.189 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:17:54.189 lat (usec) : 500=2.61%, 750=30.50%, 1000=21.33% 00:17:54.189 lat (msec) : 2=45.27%, 4=0.29% 00:17:54.189 cpu : usr=2.00%, sys=4.00%, ctx=1036, majf=0, minf=1 00:17:54.189 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:54.189 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:54.189 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:54.189 issued rwts: total=512,524,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:54.189 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:54.189 job1: (groupid=0, jobs=1): err= 0: pid=3082558: Fri Jun 7 16:26:20 2024 00:17:54.189 read: IOPS=245, BW=983KiB/s (1007kB/s)(984KiB/1001msec) 00:17:54.189 slat (nsec): min=9357, max=44837, avg=26590.26, stdev=2893.40 00:17:54.189 clat (usec): min=877, max=42176, avg=2542.93, stdev=7217.73 00:17:54.189 lat (usec): min=907, max=42202, avg=2569.53, 
stdev=7217.65 00:17:54.189 clat percentiles (usec): 00:17:54.189 | 1.00th=[ 996], 5.00th=[ 1090], 10.00th=[ 1139], 20.00th=[ 1188], 00:17:54.189 | 30.00th=[ 1205], 40.00th=[ 1221], 50.00th=[ 1221], 60.00th=[ 1237], 00:17:54.189 | 70.00th=[ 1254], 80.00th=[ 1287], 90.00th=[ 1319], 95.00th=[ 1401], 00:17:54.189 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:17:54.189 | 99.99th=[42206] 00:17:54.189 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:17:54.189 slat (usec): min=9, max=210, avg=32.07, stdev=11.40 00:17:54.189 clat (usec): min=325, max=1763, avg=674.27, stdev=140.70 00:17:54.189 lat (usec): min=336, max=1800, avg=706.34, stdev=142.89 00:17:54.189 clat percentiles (usec): 00:17:54.189 | 1.00th=[ 379], 5.00th=[ 461], 10.00th=[ 502], 20.00th=[ 562], 00:17:54.189 | 30.00th=[ 603], 40.00th=[ 635], 50.00th=[ 668], 60.00th=[ 709], 00:17:54.189 | 70.00th=[ 742], 80.00th=[ 783], 90.00th=[ 832], 95.00th=[ 898], 00:17:54.189 | 99.00th=[ 1012], 99.50th=[ 1074], 99.90th=[ 1762], 99.95th=[ 1762], 00:17:54.189 | 99.99th=[ 1762] 00:17:54.189 bw ( KiB/s): min= 4096, max= 4096, per=50.85%, avg=4096.00, stdev= 0.00, samples=1 00:17:54.189 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:17:54.189 lat (usec) : 500=6.46%, 750=41.82%, 1000=18.73% 00:17:54.189 lat (msec) : 2=31.93%, 50=1.06% 00:17:54.189 cpu : usr=2.30%, sys=2.40%, ctx=759, majf=0, minf=1 00:17:54.189 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:54.189 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:54.189 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:54.189 issued rwts: total=246,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:54.189 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:54.189 job2: (groupid=0, jobs=1): err= 0: pid=3082559: Fri Jun 7 16:26:20 2024 00:17:54.189 read: IOPS=18, BW=74.3KiB/s (76.1kB/s)(76.0KiB/1023msec) 
00:17:54.189 slat (nsec): min=25605, max=27155, avg=26117.42, stdev=461.09 00:17:54.189 clat (usec): min=585, max=42837, avg=33354.63, stdev=17244.35 00:17:54.189 lat (usec): min=612, max=42863, avg=33380.75, stdev=17244.05 00:17:54.189 clat percentiles (usec): 00:17:54.189 | 1.00th=[ 586], 5.00th=[ 586], 10.00th=[ 848], 20.00th=[ 1037], 00:17:54.189 | 30.00th=[41681], 40.00th=[41681], 50.00th=[42206], 60.00th=[42206], 00:17:54.189 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42730], 95.00th=[42730], 00:17:54.189 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:17:54.189 | 99.99th=[42730] 00:17:54.189 write: IOPS=500, BW=2002KiB/s (2050kB/s)(2048KiB/1023msec); 0 zone resets 00:17:54.189 slat (nsec): min=8609, max=53029, avg=29588.39, stdev=8710.93 00:17:54.189 clat (usec): min=295, max=1069, avg=720.90, stdev=116.34 00:17:54.189 lat (usec): min=329, max=1101, avg=750.49, stdev=118.88 00:17:54.189 clat percentiles (usec): 00:17:54.189 | 1.00th=[ 429], 5.00th=[ 510], 10.00th=[ 578], 20.00th=[ 635], 00:17:54.189 | 30.00th=[ 676], 40.00th=[ 693], 50.00th=[ 725], 60.00th=[ 750], 00:17:54.189 | 70.00th=[ 783], 80.00th=[ 816], 90.00th=[ 873], 95.00th=[ 898], 00:17:54.189 | 99.00th=[ 979], 99.50th=[ 1012], 99.90th=[ 1074], 99.95th=[ 1074], 00:17:54.189 | 99.99th=[ 1074] 00:17:54.189 bw ( KiB/s): min= 4096, max= 4096, per=50.85%, avg=4096.00, stdev= 0.00, samples=1 00:17:54.189 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:17:54.189 lat (usec) : 500=4.52%, 750=53.11%, 1000=38.79% 00:17:54.189 lat (msec) : 2=0.75%, 50=2.82% 00:17:54.189 cpu : usr=1.47%, sys=1.57%, ctx=531, majf=0, minf=1 00:17:54.189 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:54.189 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:54.189 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:54.189 issued rwts: total=19,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:54.189 
latency : target=0, window=0, percentile=100.00%, depth=1 00:17:54.189 job3: (groupid=0, jobs=1): err= 0: pid=3082562: Fri Jun 7 16:26:20 2024 00:17:54.189 read: IOPS=14, BW=59.4KiB/s (60.8kB/s)(60.0KiB/1010msec) 00:17:54.189 slat (nsec): min=26604, max=27865, avg=26940.40, stdev=406.47 00:17:54.189 clat (usec): min=1117, max=42076, avg=39238.12, stdev=10546.05 00:17:54.189 lat (usec): min=1144, max=42103, avg=39265.06, stdev=10546.03 00:17:54.189 clat percentiles (usec): 00:17:54.189 | 1.00th=[ 1123], 5.00th=[ 1123], 10.00th=[41681], 20.00th=[41681], 00:17:54.189 | 30.00th=[41681], 40.00th=[41681], 50.00th=[42206], 60.00th=[42206], 00:17:54.189 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:17:54.189 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:17:54.189 | 99.99th=[42206] 00:17:54.189 write: IOPS=506, BW=2028KiB/s (2076kB/s)(2048KiB/1010msec); 0 zone resets 00:17:54.189 slat (nsec): min=9106, max=68213, avg=31708.37, stdev=8033.42 00:17:54.189 clat (usec): min=381, max=1209, avg=781.44, stdev=131.31 00:17:54.189 lat (usec): min=393, max=1220, avg=813.15, stdev=133.68 00:17:54.189 clat percentiles (usec): 00:17:54.189 | 1.00th=[ 416], 5.00th=[ 529], 10.00th=[ 619], 20.00th=[ 676], 00:17:54.189 | 30.00th=[ 725], 40.00th=[ 766], 50.00th=[ 791], 60.00th=[ 824], 00:17:54.189 | 70.00th=[ 857], 80.00th=[ 889], 90.00th=[ 938], 95.00th=[ 988], 00:17:54.189 | 99.00th=[ 1029], 99.50th=[ 1057], 99.90th=[ 1205], 99.95th=[ 1205], 00:17:54.189 | 99.99th=[ 1205] 00:17:54.189 bw ( KiB/s): min= 4096, max= 4096, per=50.85%, avg=4096.00, stdev= 0.00, samples=1 00:17:54.189 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:17:54.189 lat (usec) : 500=2.47%, 750=33.40%, 1000=58.06% 00:17:54.189 lat (msec) : 2=3.42%, 50=2.66% 00:17:54.189 cpu : usr=1.19%, sys=1.98%, ctx=531, majf=0, minf=1 00:17:54.189 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:54.189 submit : 0=0.0%, 4=100.0%, 
8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:54.189 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:54.189 issued rwts: total=15,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:54.189 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:54.189 00:17:54.189 Run status group 0 (all jobs): 00:17:54.189 READ: bw=3097KiB/s (3171kB/s), 59.4KiB/s-2046KiB/s (60.8kB/s-2095kB/s), io=3168KiB (3244kB), run=1001-1023msec 00:17:54.189 WRITE: bw=8055KiB/s (8248kB/s), 2002KiB/s-2094KiB/s (2050kB/s-2144kB/s), io=8240KiB (8438kB), run=1001-1023msec 00:17:54.189 00:17:54.189 Disk stats (read/write): 00:17:54.189 nvme0n1: ios=388/512, merge=0/0, ticks=422/303, in_queue=725, util=82.46% 00:17:54.189 nvme0n2: ios=194/512, merge=0/0, ticks=655/311, in_queue=966, util=98.76% 00:17:54.189 nvme0n3: ios=13/512, merge=0/0, ticks=383/304, in_queue=687, util=86.63% 00:17:54.189 nvme0n4: ios=30/512, merge=0/0, ticks=1218/326, in_queue=1544, util=100.00% 00:17:54.189 16:26:20 nvmf_tcp.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:17:54.189 [global] 00:17:54.189 thread=1 00:17:54.189 invalidate=1 00:17:54.189 rw=write 00:17:54.189 time_based=1 00:17:54.189 runtime=1 00:17:54.189 ioengine=libaio 00:17:54.189 direct=1 00:17:54.189 bs=4096 00:17:54.189 iodepth=128 00:17:54.189 norandommap=0 00:17:54.189 numjobs=1 00:17:54.189 00:17:54.189 verify_dump=1 00:17:54.189 verify_backlog=512 00:17:54.189 verify_state_save=0 00:17:54.189 do_verify=1 00:17:54.189 verify=crc32c-intel 00:17:54.189 [job0] 00:17:54.189 filename=/dev/nvme0n1 00:17:54.189 [job1] 00:17:54.190 filename=/dev/nvme0n2 00:17:54.190 [job2] 00:17:54.190 filename=/dev/nvme0n3 00:17:54.190 [job3] 00:17:54.190 filename=/dev/nvme0n4 00:17:54.190 Could not set queue depth (nvme0n1) 00:17:54.190 Could not set queue depth (nvme0n2) 00:17:54.190 Could not set queue depth (nvme0n3) 00:17:54.190 
Could not set queue depth (nvme0n4) 00:17:54.450 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:54.450 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:54.450 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:54.450 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:54.450 fio-3.35 00:17:54.450 Starting 4 threads 00:17:55.855 00:17:55.855 job0: (groupid=0, jobs=1): err= 0: pid=3083086: Fri Jun 7 16:26:22 2024 00:17:55.855 read: IOPS=3510, BW=13.7MiB/s (14.4MB/s)(13.7MiB/1002msec) 00:17:55.855 slat (nsec): min=906, max=23515k, avg=159973.23, stdev=1086768.13 00:17:55.855 clat (usec): min=1243, max=73034, avg=20249.38, stdev=11064.69 00:17:55.855 lat (usec): min=4559, max=73057, avg=20409.35, stdev=11154.88 00:17:55.855 clat percentiles (usec): 00:17:55.855 | 1.00th=[ 6456], 5.00th=[ 9765], 10.00th=[10552], 20.00th=[11731], 00:17:55.855 | 30.00th=[13304], 40.00th=[15270], 50.00th=[16057], 60.00th=[18220], 00:17:55.855 | 70.00th=[21890], 80.00th=[27919], 90.00th=[36439], 95.00th=[39060], 00:17:55.855 | 99.00th=[62129], 99.50th=[62129], 99.90th=[66323], 99.95th=[66323], 00:17:55.855 | 99.99th=[72877] 00:17:55.855 write: IOPS=3576, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1002msec); 0 zone resets 00:17:55.855 slat (nsec): min=1646, max=20980k, avg=116279.70, stdev=831573.63 00:17:55.855 clat (usec): min=5795, max=50229, avg=15411.02, stdev=7606.96 00:17:55.855 lat (usec): min=5803, max=50259, avg=15527.30, stdev=7669.05 00:17:55.855 clat percentiles (usec): 00:17:55.855 | 1.00th=[ 7635], 5.00th=[ 8291], 10.00th=[ 9503], 20.00th=[10421], 00:17:55.855 | 30.00th=[11207], 40.00th=[11863], 50.00th=[12518], 60.00th=[14353], 00:17:55.855 | 70.00th=[15795], 80.00th=[18220], 90.00th=[27395], 95.00th=[34341], 00:17:55.855 | 99.00th=[40109], 
99.50th=[40109], 99.90th=[42206], 99.95th=[48497], 00:17:55.855 | 99.99th=[50070] 00:17:55.855 bw ( KiB/s): min=13736, max=14906, per=16.70%, avg=14321.00, stdev=827.31, samples=2 00:17:55.855 iops : min= 3434, max= 3726, avg=3580.00, stdev=206.48, samples=2 00:17:55.855 lat (msec) : 2=0.01%, 10=10.10%, 20=64.43%, 50=24.08%, 100=1.38% 00:17:55.855 cpu : usr=3.20%, sys=3.30%, ctx=279, majf=0, minf=1 00:17:55.855 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:17:55.855 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:55.855 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:55.855 issued rwts: total=3518,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:55.855 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:55.855 job1: (groupid=0, jobs=1): err= 0: pid=3083090: Fri Jun 7 16:26:22 2024 00:17:55.855 read: IOPS=5272, BW=20.6MiB/s (21.6MB/s)(20.7MiB/1003msec) 00:17:55.855 slat (nsec): min=907, max=8523.5k, avg=90352.60, stdev=476478.96 00:17:55.855 clat (usec): min=1423, max=26906, avg=11270.17, stdev=2826.54 00:17:55.855 lat (usec): min=6312, max=26910, avg=11360.52, stdev=2847.81 00:17:55.855 clat percentiles (usec): 00:17:55.855 | 1.00th=[ 7111], 5.00th=[ 8586], 10.00th=[ 9110], 20.00th=[ 9503], 00:17:55.855 | 30.00th=[ 9765], 40.00th=[10159], 50.00th=[10421], 60.00th=[10683], 00:17:55.855 | 70.00th=[11076], 80.00th=[11994], 90.00th=[16057], 95.00th=[17171], 00:17:55.855 | 99.00th=[20317], 99.50th=[22414], 99.90th=[25822], 99.95th=[25822], 00:17:55.855 | 99.99th=[26870] 00:17:55.855 write: IOPS=5615, BW=21.9MiB/s (23.0MB/s)(22.0MiB/1003msec); 0 zone resets 00:17:55.855 slat (nsec): min=1580, max=9074.5k, avg=89251.78, stdev=445756.97 00:17:55.855 clat (usec): min=5251, max=48153, avg=11889.44, stdev=7039.81 00:17:55.855 lat (usec): min=5515, max=48171, avg=11978.69, stdev=7080.94 00:17:55.855 clat percentiles (usec): 00:17:55.855 | 1.00th=[ 6194], 5.00th=[ 7046], 
10.00th=[ 7308], 20.00th=[ 7701], 00:17:55.855 | 30.00th=[ 8291], 40.00th=[ 8717], 50.00th=[ 9241], 60.00th=[ 9372], 00:17:55.855 | 70.00th=[10552], 80.00th=[15795], 90.00th=[20317], 95.00th=[25822], 00:17:55.855 | 99.00th=[46400], 99.50th=[46924], 99.90th=[47973], 99.95th=[47973], 00:17:55.855 | 99.99th=[47973] 00:17:55.855 bw ( KiB/s): min=16384, max=28672, per=26.27%, avg=22528.00, stdev=8688.93, samples=2 00:17:55.855 iops : min= 4096, max= 7168, avg=5632.00, stdev=2172.23, samples=2 00:17:55.855 lat (msec) : 2=0.01%, 10=52.89%, 20=41.00%, 50=6.10% 00:17:55.855 cpu : usr=3.89%, sys=3.19%, ctx=535, majf=0, minf=1 00:17:55.855 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:17:55.855 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:55.855 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:55.855 issued rwts: total=5288,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:55.855 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:55.855 job2: (groupid=0, jobs=1): err= 0: pid=3083091: Fri Jun 7 16:26:22 2024 00:17:55.855 read: IOPS=5424, BW=21.2MiB/s (22.2MB/s)(21.3MiB/1003msec) 00:17:55.855 slat (nsec): min=907, max=17968k, avg=95361.80, stdev=675454.01 00:17:55.855 clat (usec): min=1219, max=37918, avg=12204.73, stdev=3768.20 00:17:55.855 lat (usec): min=1746, max=37944, avg=12300.09, stdev=3797.89 00:17:55.855 clat percentiles (usec): 00:17:55.855 | 1.00th=[ 6128], 5.00th=[ 7898], 10.00th=[ 8979], 20.00th=[ 9896], 00:17:55.855 | 30.00th=[10421], 40.00th=[10945], 50.00th=[11076], 60.00th=[11469], 00:17:55.855 | 70.00th=[12518], 80.00th=[14484], 90.00th=[16712], 95.00th=[20579], 00:17:55.855 | 99.00th=[28443], 99.50th=[28443], 99.90th=[28443], 99.95th=[28443], 00:17:55.855 | 99.99th=[38011] 00:17:55.855 write: IOPS=5615, BW=21.9MiB/s (23.0MB/s)(22.0MiB/1003msec); 0 zone resets 00:17:55.855 slat (nsec): min=1614, max=11329k, avg=77492.93, stdev=513593.79 00:17:55.855 clat 
(usec): min=1217, max=33343, avg=10754.02, stdev=4151.60 00:17:55.855 lat (usec): min=1226, max=33366, avg=10831.51, stdev=4172.99 00:17:55.855 clat percentiles (usec): 00:17:55.855 | 1.00th=[ 2245], 5.00th=[ 5276], 10.00th=[ 6128], 20.00th=[ 8586], 00:17:55.855 | 30.00th=[ 9241], 40.00th=[ 9503], 50.00th=[ 9765], 60.00th=[10159], 00:17:55.855 | 70.00th=[10945], 80.00th=[13042], 90.00th=[16319], 95.00th=[19792], 00:17:55.855 | 99.00th=[25035], 99.50th=[25297], 99.90th=[26608], 99.95th=[26608], 00:17:55.855 | 99.99th=[33424] 00:17:55.855 bw ( KiB/s): min=20592, max=24464, per=26.27%, avg=22528.00, stdev=2737.92, samples=2 00:17:55.855 iops : min= 5148, max= 6116, avg=5632.00, stdev=684.48, samples=2 00:17:55.855 lat (msec) : 2=0.50%, 4=0.59%, 10=37.93%, 20=55.79%, 50=5.19% 00:17:55.855 cpu : usr=3.39%, sys=4.99%, ctx=404, majf=0, minf=1 00:17:55.855 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:17:55.855 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:55.855 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:55.855 issued rwts: total=5441,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:55.855 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:55.855 job3: (groupid=0, jobs=1): err= 0: pid=3083092: Fri Jun 7 16:26:22 2024 00:17:55.855 read: IOPS=6547, BW=25.6MiB/s (26.8MB/s)(25.7MiB/1003msec) 00:17:55.855 slat (nsec): min=933, max=12789k, avg=69932.45, stdev=532898.12 00:17:55.855 clat (usec): min=1009, max=54391, avg=10834.96, stdev=4482.81 00:17:55.855 lat (usec): min=1708, max=54393, avg=10904.89, stdev=4495.66 00:17:55.855 clat percentiles (usec): 00:17:55.855 | 1.00th=[ 3720], 5.00th=[ 5866], 10.00th=[ 6980], 20.00th=[ 8029], 00:17:55.855 | 30.00th=[ 8717], 40.00th=[ 9372], 50.00th=[ 9896], 60.00th=[10552], 00:17:55.855 | 70.00th=[11863], 80.00th=[13173], 90.00th=[14877], 95.00th=[17957], 00:17:55.855 | 99.00th=[26608], 99.50th=[31327], 99.90th=[51119], 
99.95th=[51643], 00:17:55.855 | 99.99th=[54264] 00:17:55.855 write: IOPS=6636, BW=25.9MiB/s (27.2MB/s)(26.0MiB/1003msec); 0 zone resets 00:17:55.855 slat (nsec): min=1604, max=12306k, avg=56162.15, stdev=449680.05 00:17:55.855 clat (usec): min=675, max=28142, avg=8432.55, stdev=4308.59 00:17:55.855 lat (usec): min=708, max=28151, avg=8488.71, stdev=4330.15 00:17:55.855 clat percentiles (usec): 00:17:55.855 | 1.00th=[ 1237], 5.00th=[ 2147], 10.00th=[ 3261], 20.00th=[ 5145], 00:17:55.855 | 30.00th=[ 6325], 40.00th=[ 7242], 50.00th=[ 8029], 60.00th=[ 8717], 00:17:55.855 | 70.00th=[ 9896], 80.00th=[10814], 90.00th=[13435], 95.00th=[16057], 00:17:55.855 | 99.00th=[24249], 99.50th=[26084], 99.90th=[27657], 99.95th=[28181], 00:17:55.855 | 99.99th=[28181] 00:17:55.855 bw ( KiB/s): min=24576, max=28672, per=31.05%, avg=26624.00, stdev=2896.31, samples=2 00:17:55.855 iops : min= 6144, max= 7168, avg=6656.00, stdev=724.08, samples=2 00:17:55.855 lat (usec) : 750=0.02%, 1000=0.04% 00:17:55.855 lat (msec) : 2=2.60%, 4=4.67%, 10=53.50%, 20=36.53%, 50=2.50% 00:17:55.855 lat (msec) : 100=0.14% 00:17:55.855 cpu : usr=4.89%, sys=7.09%, ctx=517, majf=0, minf=1 00:17:55.855 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:17:55.855 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:55.855 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:55.855 issued rwts: total=6567,6656,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:55.855 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:55.855 00:17:55.855 Run status group 0 (all jobs): 00:17:55.855 READ: bw=81.1MiB/s (85.0MB/s), 13.7MiB/s-25.6MiB/s (14.4MB/s-26.8MB/s), io=81.3MiB (85.3MB), run=1002-1003msec 00:17:55.855 WRITE: bw=83.7MiB/s (87.8MB/s), 14.0MiB/s-25.9MiB/s (14.7MB/s-27.2MB/s), io=84.0MiB (88.1MB), run=1002-1003msec 00:17:55.855 00:17:55.855 Disk stats (read/write): 00:17:55.855 nvme0n1: ios=2610/3072, merge=0/0, ticks=23316/18884, 
in_queue=42200, util=96.49% 00:17:55.855 nvme0n2: ios=4646/5106, merge=0/0, ticks=13739/15383, in_queue=29122, util=100.00% 00:17:55.855 nvme0n3: ios=4627/4704, merge=0/0, ticks=35970/29968, in_queue=65938, util=98.21% 00:17:55.855 nvme0n4: ios=5435/5632, merge=0/0, ticks=53034/41235, in_queue=94269, util=97.33% 00:17:55.856 16:26:22 nvmf_tcp.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:17:55.856 [global] 00:17:55.856 thread=1 00:17:55.856 invalidate=1 00:17:55.856 rw=randwrite 00:17:55.856 time_based=1 00:17:55.856 runtime=1 00:17:55.856 ioengine=libaio 00:17:55.856 direct=1 00:17:55.856 bs=4096 00:17:55.856 iodepth=128 00:17:55.856 norandommap=0 00:17:55.856 numjobs=1 00:17:55.856 00:17:55.856 verify_dump=1 00:17:55.856 verify_backlog=512 00:17:55.856 verify_state_save=0 00:17:55.856 do_verify=1 00:17:55.856 verify=crc32c-intel 00:17:55.856 [job0] 00:17:55.856 filename=/dev/nvme0n1 00:17:55.856 [job1] 00:17:55.856 filename=/dev/nvme0n2 00:17:55.856 [job2] 00:17:55.856 filename=/dev/nvme0n3 00:17:55.856 [job3] 00:17:55.856 filename=/dev/nvme0n4 00:17:55.856 Could not set queue depth (nvme0n1) 00:17:55.856 Could not set queue depth (nvme0n2) 00:17:55.856 Could not set queue depth (nvme0n3) 00:17:55.856 Could not set queue depth (nvme0n4) 00:17:56.115 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:56.115 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:56.115 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:56.115 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:56.115 fio-3.35 00:17:56.115 Starting 4 threads 00:17:57.509 00:17:57.509 job0: (groupid=0, jobs=1): err= 0: pid=3083610: Fri Jun 
7 16:26:23 2024 00:17:57.509 read: IOPS=8167, BW=31.9MiB/s (33.5MB/s)(32.0MiB/1003msec) 00:17:57.509 slat (nsec): min=849, max=4275.2k, avg=63266.84, stdev=401545.24 00:17:57.509 clat (usec): min=4722, max=12832, avg=7994.40, stdev=958.63 00:17:57.509 lat (usec): min=4724, max=12847, avg=8057.67, stdev=1020.61 00:17:57.509 clat percentiles (usec): 00:17:57.509 | 1.00th=[ 5604], 5.00th=[ 6718], 10.00th=[ 6915], 20.00th=[ 7177], 00:17:57.509 | 30.00th=[ 7373], 40.00th=[ 7701], 50.00th=[ 8094], 60.00th=[ 8291], 00:17:57.509 | 70.00th=[ 8455], 80.00th=[ 8586], 90.00th=[ 8979], 95.00th=[ 9372], 00:17:57.509 | 99.00th=[10945], 99.50th=[11600], 99.90th=[12256], 99.95th=[12387], 00:17:57.509 | 99.99th=[12780] 00:17:57.509 write: IOPS=8349, BW=32.6MiB/s (34.2MB/s)(32.7MiB/1003msec); 0 zone resets 00:17:57.509 slat (nsec): min=1447, max=4147.6k, avg=54177.17, stdev=264434.36 00:17:57.509 clat (usec): min=2489, max=13013, avg=7356.03, stdev=1003.04 00:17:57.509 lat (usec): min=3101, max=13031, avg=7410.20, stdev=1023.08 00:17:57.509 clat percentiles (usec): 00:17:57.509 | 1.00th=[ 4621], 5.00th=[ 5735], 10.00th=[ 6259], 20.00th=[ 6652], 00:17:57.509 | 30.00th=[ 6915], 40.00th=[ 7177], 50.00th=[ 7439], 60.00th=[ 7570], 00:17:57.509 | 70.00th=[ 7767], 80.00th=[ 7963], 90.00th=[ 8455], 95.00th=[ 8848], 00:17:57.509 | 99.00th=[10421], 99.50th=[10814], 99.90th=[11207], 99.95th=[11994], 00:17:57.509 | 99.99th=[13042] 00:17:57.509 bw ( KiB/s): min=31320, max=34664, per=30.92%, avg=32992.00, stdev=2364.57, samples=2 00:17:57.509 iops : min= 7830, max= 8666, avg=8248.00, stdev=591.14, samples=2 00:17:57.509 lat (msec) : 4=0.12%, 10=97.49%, 20=2.39% 00:17:57.509 cpu : usr=4.69%, sys=4.79%, ctx=1020, majf=0, minf=1 00:17:57.509 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:17:57.509 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:57.509 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:57.509 
issued rwts: total=8192,8375,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:57.509 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:57.509 job1: (groupid=0, jobs=1): err= 0: pid=3083611: Fri Jun 7 16:26:23 2024 00:17:57.509 read: IOPS=8401, BW=32.8MiB/s (34.4MB/s)(33.0MiB/1005msec) 00:17:57.509 slat (nsec): min=868, max=5805.1k, avg=60960.09, stdev=393719.78 00:17:57.509 clat (usec): min=1224, max=14800, avg=7755.19, stdev=1378.22 00:17:57.509 lat (usec): min=3519, max=16133, avg=7816.15, stdev=1409.90 00:17:57.509 clat percentiles (usec): 00:17:57.509 | 1.00th=[ 4490], 5.00th=[ 5735], 10.00th=[ 6063], 20.00th=[ 6718], 00:17:57.509 | 30.00th=[ 7111], 40.00th=[ 7373], 50.00th=[ 7767], 60.00th=[ 8094], 00:17:57.509 | 70.00th=[ 8356], 80.00th=[ 8586], 90.00th=[ 9241], 95.00th=[10028], 00:17:57.509 | 99.00th=[11994], 99.50th=[12387], 99.90th=[14746], 99.95th=[14746], 00:17:57.509 | 99.99th=[14746] 00:17:57.509 write: IOPS=8660, BW=33.8MiB/s (35.5MB/s)(34.0MiB/1005msec); 0 zone resets 00:17:57.509 slat (nsec): min=1468, max=6545.3k, avg=50516.37, stdev=284958.17 00:17:57.509 clat (usec): min=988, max=14797, avg=7120.86, stdev=1491.86 00:17:57.509 lat (usec): min=999, max=14805, avg=7171.37, stdev=1506.23 00:17:57.509 clat percentiles (usec): 00:17:57.509 | 1.00th=[ 2638], 5.00th=[ 3949], 10.00th=[ 5342], 20.00th=[ 6259], 00:17:57.509 | 30.00th=[ 6718], 40.00th=[ 7046], 50.00th=[ 7373], 60.00th=[ 7570], 00:17:57.509 | 70.00th=[ 7767], 80.00th=[ 8029], 90.00th=[ 8586], 95.00th=[ 8979], 00:17:57.509 | 99.00th=[10814], 99.50th=[12125], 99.90th=[12911], 99.95th=[12911], 00:17:57.509 | 99.99th=[14746] 00:17:57.509 bw ( KiB/s): min=32768, max=36864, per=32.63%, avg=34816.00, stdev=2896.31, samples=2 00:17:57.509 iops : min= 8192, max= 9216, avg=8704.00, stdev=724.08, samples=2 00:17:57.509 lat (usec) : 1000=0.02% 00:17:57.509 lat (msec) : 2=0.16%, 4=2.60%, 10=93.51%, 20=3.71% 00:17:57.509 cpu : usr=5.18%, sys=6.77%, ctx=861, majf=0, minf=1 00:17:57.509 IO depths : 
1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:17:57.509 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:57.509 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:57.509 issued rwts: total=8444,8704,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:57.509 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:57.509 job2: (groupid=0, jobs=1): err= 0: pid=3083613: Fri Jun 7 16:26:23 2024 00:17:57.509 read: IOPS=4035, BW=15.8MiB/s (16.5MB/s)(15.8MiB/1003msec) 00:17:57.509 slat (nsec): min=948, max=12808k, avg=99526.80, stdev=712291.41 00:17:57.509 clat (msec): min=2, max=108, avg=12.20, stdev=11.77 00:17:57.509 lat (msec): min=3, max=109, avg=12.30, stdev=11.88 00:17:57.509 clat percentiles (msec): 00:17:57.509 | 1.00th=[ 4], 5.00th=[ 5], 10.00th=[ 7], 20.00th=[ 8], 00:17:57.509 | 30.00th=[ 9], 40.00th=[ 10], 50.00th=[ 11], 60.00th=[ 11], 00:17:57.509 | 70.00th=[ 12], 80.00th=[ 14], 90.00th=[ 16], 95.00th=[ 22], 00:17:57.509 | 99.00th=[ 84], 99.50th=[ 99], 99.90th=[ 109], 99.95th=[ 109], 00:17:57.509 | 99.99th=[ 109] 00:17:57.509 write: IOPS=4594, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1003msec); 0 zone resets 00:17:57.509 slat (nsec): min=1526, max=10699k, avg=115486.18, stdev=740045.33 00:17:57.509 clat (usec): min=841, max=111131, avg=16896.20, stdev=24970.62 00:17:57.509 lat (usec): min=849, max=111138, avg=17011.69, stdev=25103.25 00:17:57.509 clat percentiles (usec): 00:17:57.509 | 1.00th=[ 1614], 5.00th=[ 3195], 10.00th=[ 4621], 20.00th=[ 6194], 00:17:57.509 | 30.00th=[ 6915], 40.00th=[ 7767], 50.00th=[ 8291], 60.00th=[ 8848], 00:17:57.509 | 70.00th=[ 9110], 80.00th=[ 11731], 90.00th=[ 52691], 95.00th=[ 92799], 00:17:57.509 | 99.00th=[104334], 99.50th=[109577], 99.90th=[110625], 99.95th=[110625], 00:17:57.509 | 99.99th=[110625] 00:17:57.509 bw ( KiB/s): min=11096, max=25768, per=17.28%, avg=18432.00, stdev=10374.67, samples=2 00:17:57.509 iops : min= 2774, max= 6442, avg=4608.00, 
stdev=2593.67, samples=2 00:17:57.509 lat (usec) : 1000=0.03% 00:17:57.509 lat (msec) : 2=0.96%, 4=4.27%, 10=57.69%, 20=25.49%, 50=4.96% 00:17:57.509 lat (msec) : 100=5.21%, 250=1.39% 00:17:57.509 cpu : usr=3.39%, sys=4.39%, ctx=401, majf=0, minf=1 00:17:57.509 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:17:57.509 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:57.509 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:57.509 issued rwts: total=4048,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:57.509 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:57.509 job3: (groupid=0, jobs=1): err= 0: pid=3083614: Fri Jun 7 16:26:23 2024 00:17:57.509 read: IOPS=4710, BW=18.4MiB/s (19.3MB/s)(18.5MiB/1005msec) 00:17:57.509 slat (nsec): min=946, max=21179k, avg=108626.39, stdev=827878.27 00:17:57.509 clat (usec): min=1738, max=37186, avg=13658.98, stdev=3485.75 00:17:57.509 lat (usec): min=5452, max=37193, avg=13767.61, stdev=3548.35 00:17:57.509 clat percentiles (usec): 00:17:57.509 | 1.00th=[ 7963], 5.00th=[10552], 10.00th=[10945], 20.00th=[11600], 00:17:57.509 | 30.00th=[11731], 40.00th=[11994], 50.00th=[12256], 60.00th=[12649], 00:17:57.509 | 70.00th=[14222], 80.00th=[16188], 90.00th=[19006], 95.00th=[21103], 00:17:57.509 | 99.00th=[22938], 99.50th=[22938], 99.90th=[36963], 99.95th=[36963], 00:17:57.509 | 99.99th=[36963] 00:17:57.509 write: IOPS=5094, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1005msec); 0 zone resets 00:17:57.509 slat (nsec): min=1551, max=9034.4k, avg=90374.72, stdev=558418.68 00:17:57.509 clat (usec): min=1133, max=58034, avg=12270.99, stdev=7061.54 00:17:57.509 lat (usec): min=1144, max=58043, avg=12361.36, stdev=7104.60 00:17:57.509 clat percentiles (usec): 00:17:57.509 | 1.00th=[ 4228], 5.00th=[ 6652], 10.00th=[ 7308], 20.00th=[ 8160], 00:17:57.509 | 30.00th=[10028], 40.00th=[10945], 50.00th=[11863], 60.00th=[12256], 00:17:57.509 | 70.00th=[12387], 
80.00th=[12649], 90.00th=[14615], 95.00th=[20841], 00:17:57.509 | 99.00th=[50070], 99.50th=[56886], 99.90th=[57934], 99.95th=[57934], 00:17:57.509 | 99.99th=[57934] 00:17:57.509 bw ( KiB/s): min=20032, max=20912, per=19.19%, avg=20472.00, stdev=622.25, samples=2 00:17:57.509 iops : min= 5008, max= 5228, avg=5118.00, stdev=155.56, samples=2 00:17:57.509 lat (msec) : 2=0.03%, 4=0.41%, 10=16.96%, 20=76.09%, 50=5.97% 00:17:57.509 lat (msec) : 100=0.55% 00:17:57.509 cpu : usr=3.29%, sys=5.38%, ctx=448, majf=0, minf=1 00:17:57.509 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:17:57.509 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:57.509 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:57.509 issued rwts: total=4734,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:57.509 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:57.509 00:17:57.509 Run status group 0 (all jobs): 00:17:57.509 READ: bw=98.8MiB/s (104MB/s), 15.8MiB/s-32.8MiB/s (16.5MB/s-34.4MB/s), io=99.3MiB (104MB), run=1003-1005msec 00:17:57.509 WRITE: bw=104MiB/s (109MB/s), 17.9MiB/s-33.8MiB/s (18.8MB/s-35.5MB/s), io=105MiB (110MB), run=1003-1005msec 00:17:57.509 00:17:57.509 Disk stats (read/write): 00:17:57.509 nvme0n1: ios=6894/7168, merge=0/0, ticks=26749/24792, in_queue=51541, util=87.58% 00:17:57.509 nvme0n2: ios=7211/7351, merge=0/0, ticks=34669/30885, in_queue=65554, util=96.74% 00:17:57.509 nvme0n3: ios=2735/3584, merge=0/0, ticks=34935/67567, in_queue=102502, util=91.99% 00:17:57.509 nvme0n4: ios=3961/4096, merge=0/0, ticks=52225/49705, in_queue=101930, util=89.45% 00:17:57.509 16:26:23 nvmf_tcp.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:17:57.509 16:26:23 nvmf_tcp.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=3083945 00:17:57.510 16:26:23 nvmf_tcp.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:17:57.510 16:26:23 nvmf_tcp.nvmf_fio_target -- target/fio.sh@58 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:17:57.510 [global] 00:17:57.510 thread=1 00:17:57.510 invalidate=1 00:17:57.510 rw=read 00:17:57.510 time_based=1 00:17:57.510 runtime=10 00:17:57.510 ioengine=libaio 00:17:57.510 direct=1 00:17:57.510 bs=4096 00:17:57.510 iodepth=1 00:17:57.510 norandommap=1 00:17:57.510 numjobs=1 00:17:57.510 00:17:57.510 [job0] 00:17:57.510 filename=/dev/nvme0n1 00:17:57.510 [job1] 00:17:57.510 filename=/dev/nvme0n2 00:17:57.510 [job2] 00:17:57.510 filename=/dev/nvme0n3 00:17:57.510 [job3] 00:17:57.510 filename=/dev/nvme0n4 00:17:57.510 Could not set queue depth (nvme0n1) 00:17:57.510 Could not set queue depth (nvme0n2) 00:17:57.510 Could not set queue depth (nvme0n3) 00:17:57.510 Could not set queue depth (nvme0n4) 00:17:57.769 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:57.769 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:57.769 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:57.769 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:57.769 fio-3.35 00:17:57.769 Starting 4 threads 00:18:00.314 16:26:26 nvmf_tcp.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:18:00.314 16:26:27 nvmf_tcp.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:18:00.314 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=9338880, buflen=4096 00:18:00.314 fio: pid=3084137, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:18:00.575 16:26:27 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs 
$concat_malloc_bdevs 00:18:00.575 16:26:27 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:18:00.575 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=274432, buflen=4096 00:18:00.575 fio: pid=3084136, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:18:00.836 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=290816, buflen=4096 00:18:00.836 fio: pid=3084133, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:18:00.836 16:26:27 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:18:00.836 16:26:27 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:18:00.836 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=10436608, buflen=4096 00:18:00.836 fio: pid=3084135, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:18:00.836 16:26:27 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:18:00.836 16:26:27 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:18:01.097 00:18:01.097 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=3084133: Fri Jun 7 16:26:27 2024 00:18:01.097 read: IOPS=24, BW=96.7KiB/s (99.0kB/s)(284KiB/2938msec) 00:18:01.097 slat (usec): min=7, max=14519, avg=227.11, stdev=1708.11 00:18:01.097 clat (usec): min=1057, max=46913, avg=40836.79, stdev=6842.97 00:18:01.097 lat (usec): min=1089, max=56048, avg=41066.73, stdev=7075.62 00:18:01.097 clat percentiles (usec): 00:18:01.097 | 1.00th=[ 1057], 5.00th=[41157], 10.00th=[41681], 20.00th=[41681], 00:18:01.097 | 30.00th=[41681], 
40.00th=[41681], 50.00th=[41681], 60.00th=[42206], 00:18:01.097 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:18:01.097 | 99.00th=[46924], 99.50th=[46924], 99.90th=[46924], 99.95th=[46924], 00:18:01.097 | 99.99th=[46924] 00:18:01.097 bw ( KiB/s): min= 96, max= 104, per=1.52%, avg=97.60, stdev= 3.58, samples=5 00:18:01.097 iops : min= 24, max= 26, avg=24.40, stdev= 0.89, samples=5 00:18:01.097 lat (msec) : 2=2.78%, 50=95.83% 00:18:01.097 cpu : usr=0.14%, sys=0.00%, ctx=73, majf=0, minf=1 00:18:01.097 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:01.097 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:01.097 complete : 0=1.4%, 4=98.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:01.097 issued rwts: total=72,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:01.097 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:01.097 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=3084135: Fri Jun 7 16:26:27 2024 00:18:01.097 read: IOPS=821, BW=3285KiB/s (3363kB/s)(9.95MiB/3103msec) 00:18:01.097 slat (usec): min=6, max=24398, avg=52.42, stdev=614.40 00:18:01.097 clat (usec): min=585, max=2409, avg=1150.47, stdev=105.22 00:18:01.097 lat (usec): min=611, max=25508, avg=1202.90, stdev=623.88 00:18:01.097 clat percentiles (usec): 00:18:01.097 | 1.00th=[ 865], 5.00th=[ 963], 10.00th=[ 1012], 20.00th=[ 1074], 00:18:01.097 | 30.00th=[ 1106], 40.00th=[ 1139], 50.00th=[ 1172], 60.00th=[ 1188], 00:18:01.097 | 70.00th=[ 1205], 80.00th=[ 1221], 90.00th=[ 1254], 95.00th=[ 1287], 00:18:01.097 | 99.00th=[ 1369], 99.50th=[ 1385], 99.90th=[ 1549], 99.95th=[ 1762], 00:18:01.097 | 99.99th=[ 2409] 00:18:01.097 bw ( KiB/s): min= 2994, max= 3544, per=51.74%, avg=3312.33, stdev=194.52, samples=6 00:18:01.097 iops : min= 748, max= 886, avg=828.00, stdev=48.79, samples=6 00:18:01.097 lat (usec) : 750=0.20%, 1000=8.63% 00:18:01.097 lat (msec) : 2=91.09%, 
4=0.04% 00:18:01.097 cpu : usr=1.32%, sys=3.42%, ctx=2555, majf=0, minf=1 00:18:01.097 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:01.097 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:01.097 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:01.097 issued rwts: total=2549,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:01.097 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:01.097 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=3084136: Fri Jun 7 16:26:27 2024 00:18:01.097 read: IOPS=24, BW=95.7KiB/s (98.0kB/s)(268KiB/2799msec) 00:18:01.097 slat (usec): min=14, max=5597, avg=107.78, stdev=675.72 00:18:01.097 clat (usec): min=1132, max=43043, avg=41336.89, stdev=4991.93 00:18:01.097 lat (usec): min=1167, max=46925, avg=41445.90, stdev=5036.87 00:18:01.097 clat percentiles (usec): 00:18:01.097 | 1.00th=[ 1139], 5.00th=[41157], 10.00th=[41681], 20.00th=[41681], 00:18:01.097 | 30.00th=[41681], 40.00th=[41681], 50.00th=[42206], 60.00th=[42206], 00:18:01.097 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:18:01.097 | 99.00th=[43254], 99.50th=[43254], 99.90th=[43254], 99.95th=[43254], 00:18:01.097 | 99.99th=[43254] 00:18:01.097 bw ( KiB/s): min= 96, max= 96, per=1.50%, avg=96.00, stdev= 0.00, samples=5 00:18:01.097 iops : min= 24, max= 24, avg=24.00, stdev= 0.00, samples=5 00:18:01.097 lat (msec) : 2=1.47%, 50=97.06% 00:18:01.097 cpu : usr=0.14%, sys=0.00%, ctx=69, majf=0, minf=1 00:18:01.097 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:01.097 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:01.098 complete : 0=1.4%, 4=98.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:01.098 issued rwts: total=68,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:01.098 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:01.098 job3: 
(groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=3084137: Fri Jun 7 16:26:27 2024 00:18:01.098 read: IOPS=874, BW=3497KiB/s (3581kB/s)(9120KiB/2608msec) 00:18:01.098 slat (nsec): min=7244, max=60050, avg=24866.47, stdev=3323.72 00:18:01.098 clat (usec): min=519, max=42013, avg=1103.14, stdev=1483.49 00:18:01.098 lat (usec): min=544, max=42037, avg=1128.01, stdev=1483.48 00:18:01.098 clat percentiles (usec): 00:18:01.098 | 1.00th=[ 709], 5.00th=[ 824], 10.00th=[ 881], 20.00th=[ 955], 00:18:01.098 | 30.00th=[ 1004], 40.00th=[ 1045], 50.00th=[ 1074], 60.00th=[ 1106], 00:18:01.098 | 70.00th=[ 1123], 80.00th=[ 1156], 90.00th=[ 1188], 95.00th=[ 1205], 00:18:01.098 | 99.00th=[ 1254], 99.50th=[ 1287], 99.90th=[41681], 99.95th=[42206], 00:18:01.098 | 99.99th=[42206] 00:18:01.098 bw ( KiB/s): min= 3208, max= 3800, per=56.92%, avg=3644.80, stdev=245.97, samples=5 00:18:01.098 iops : min= 802, max= 950, avg=911.20, stdev=61.49, samples=5 00:18:01.098 lat (usec) : 750=1.93%, 1000=27.71% 00:18:01.098 lat (msec) : 2=70.19%, 50=0.13% 00:18:01.098 cpu : usr=0.84%, sys=2.69%, ctx=2281, majf=0, minf=2 00:18:01.098 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:01.098 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:01.098 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:01.098 issued rwts: total=2281,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:01.098 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:01.098 00:18:01.098 Run status group 0 (all jobs): 00:18:01.098 READ: bw=6402KiB/s (6555kB/s), 95.7KiB/s-3497KiB/s (98.0kB/s-3581kB/s), io=19.4MiB (20.3MB), run=2608-3103msec 00:18:01.098 00:18:01.098 Disk stats (read/write): 00:18:01.098 nvme0n1: ios=69/0, merge=0/0, ticks=2812/0, in_queue=2812, util=94.36% 00:18:01.098 nvme0n2: ios=2548/0, merge=0/0, ticks=2654/0, in_queue=2654, util=93.74% 00:18:01.098 nvme0n3: ios=62/0, merge=0/0, 
ticks=2561/0, in_queue=2561, util=96.03% 00:18:01.098 nvme0n4: ios=2280/0, merge=0/0, ticks=2412/0, in_queue=2412, util=96.42% 00:18:01.098 16:26:27 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:18:01.098 16:26:27 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:18:01.358 16:26:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:18:01.358 16:26:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:18:01.358 16:26:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:18:01.358 16:26:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:18:01.619 16:26:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:18:01.619 16:26:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:18:01.880 16:26:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:18:01.880 16:26:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # wait 3083945 00:18:01.880 16:26:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:18:01.880 16:26:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:01.880 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:01.880 16:26:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:01.880 16:26:28 nvmf_tcp.nvmf_fio_target -- 
common/autotest_common.sh@1218 -- # local i=0 00:18:01.880 16:26:28 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:18:01.880 16:26:28 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1219 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:01.880 16:26:28 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:18:01.880 16:26:28 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1226 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:01.880 16:26:28 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1230 -- # return 0 00:18:01.880 16:26:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:18:01.880 16:26:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:18:01.880 nvmf hotplug test: fio failed as expected 00:18:01.880 16:26:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:02.141 16:26:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:18:02.141 16:26:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:18:02.141 16:26:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:18:02.141 16:26:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:18:02.141 16:26:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:18:02.141 16:26:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:02.141 16:26:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:18:02.141 16:26:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:02.141 16:26:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:18:02.141 16:26:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:02.141 16:26:28 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:02.141 rmmod nvme_tcp 00:18:02.141 rmmod nvme_fabrics 00:18:02.141 rmmod nvme_keyring 00:18:02.141 16:26:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:02.141 16:26:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:18:02.141 16:26:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0 00:18:02.141 16:26:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 3080438 ']' 00:18:02.141 16:26:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 3080438 00:18:02.141 16:26:28 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@949 -- # '[' -z 3080438 ']' 00:18:02.141 16:26:28 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # kill -0 3080438 00:18:02.141 16:26:28 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # uname 00:18:02.141 16:26:28 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:18:02.141 16:26:28 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3080438 00:18:02.141 16:26:28 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:18:02.141 16:26:28 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:18:02.141 16:26:28 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3080438' 00:18:02.141 killing process with pid 3080438 00:18:02.141 16:26:28 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@968 -- # kill 3080438 00:18:02.141 16:26:28 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@973 -- # wait 3080438 00:18:02.401 16:26:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:02.401 16:26:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:02.401 16:26:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@496 -- # 
nvmf_tcp_fini 00:18:02.401 16:26:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:02.401 16:26:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:02.401 16:26:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:02.401 16:26:29 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:02.401 16:26:29 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:04.312 16:26:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:04.312 00:18:04.312 real 0m28.295s 00:18:04.312 user 2m36.738s 00:18:04.312 sys 0m9.022s 00:18:04.312 16:26:31 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1125 -- # xtrace_disable 00:18:04.312 16:26:31 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.312 ************************************ 00:18:04.312 END TEST nvmf_fio_target 00:18:04.312 ************************************ 00:18:04.573 16:26:31 nvmf_tcp -- nvmf/nvmf.sh@56 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:18:04.573 16:26:31 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:18:04.573 16:26:31 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:18:04.573 16:26:31 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:04.573 ************************************ 00:18:04.573 START TEST nvmf_bdevio 00:18:04.573 ************************************ 00:18:04.574 16:26:31 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:18:04.574 * Looking for test storage... 
00:18:04.574 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:04.574 16:26:31 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:04.574 16:26:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:18:04.574 16:26:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:04.574 16:26:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:04.574 16:26:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:04.574 16:26:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:04.574 16:26:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:04.574 16:26:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:04.574 16:26:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:04.574 16:26:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:04.574 16:26:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:04.574 16:26:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:04.574 16:26:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:04.574 16:26:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:04.574 16:26:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:04.574 16:26:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:04.574 16:26:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:04.574 16:26:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:04.574 16:26:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@45 
-- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:04.574 16:26:31 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:04.574 16:26:31 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:04.574 16:26:31 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:04.574 16:26:31 nvmf_tcp.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:04.574 16:26:31 nvmf_tcp.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:04.574 16:26:31 nvmf_tcp.nvmf_bdevio -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:04.574 16:26:31 nvmf_tcp.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:18:04.574 16:26:31 nvmf_tcp.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:04.574 16:26:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:18:04.574 16:26:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:04.574 16:26:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:04.574 16:26:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:04.574 16:26:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:04.574 16:26:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:04.574 16:26:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:04.574 16:26:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:04.574 16:26:31 nvmf_tcp.nvmf_bdevio -- 
nvmf/common.sh@51 -- # have_pci_nics=0 00:18:04.574 16:26:31 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:04.574 16:26:31 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:04.574 16:26:31 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:18:04.574 16:26:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:04.574 16:26:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:04.574 16:26:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:04.574 16:26:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:04.574 16:26:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:04.574 16:26:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:04.574 16:26:31 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:04.574 16:26:31 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:04.574 16:26:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:04.574 16:26:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:04.574 16:26:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@285 -- # xtrace_disable 00:18:04.574 16:26:31 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:18:11.161 16:26:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:11.161 16:26:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # pci_devs=() 00:18:11.161 16:26:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:11.161 16:26:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:11.161 16:26:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:11.161 16:26:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # pci_drivers=() 
00:18:11.161 16:26:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:11.161 16:26:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # net_devs=() 00:18:11.161 16:26:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:11.161 16:26:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # e810=() 00:18:11.161 16:26:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # local -ga e810 00:18:11.161 16:26:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # x722=() 00:18:11.161 16:26:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # local -ga x722 00:18:11.161 16:26:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # mlx=() 00:18:11.161 16:26:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # local -ga mlx 00:18:11.161 16:26:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:11.161 16:26:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:11.161 16:26:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:11.161 16:26:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:11.161 16:26:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:11.161 16:26:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:11.161 16:26:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:11.161 16:26:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:11.161 16:26:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:11.161 16:26:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:11.161 16:26:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 
00:18:11.161 16:26:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:11.422 16:26:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:11.422 16:26:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:11.422 16:26:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:11.422 16:26:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:11.422 16:26:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:11.422 16:26:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:11.422 16:26:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:18:11.422 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:18:11.422 16:26:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:11.422 16:26:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:11.422 16:26:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:11.422 16:26:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:11.422 16:26:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:11.422 16:26:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:11.422 16:26:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:18:11.422 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:18:11.422 16:26:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:11.422 16:26:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:11.422 16:26:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:11.422 16:26:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:11.422 16:26:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 
00:18:11.422 16:26:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:11.422 16:26:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:11.422 16:26:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:11.422 16:26:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:11.422 16:26:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:11.422 16:26:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:11.422 16:26:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:11.422 16:26:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:11.422 16:26:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:11.422 16:26:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:11.422 16:26:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:18:11.422 Found net devices under 0000:4b:00.0: cvl_0_0 00:18:11.422 16:26:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:11.422 16:26:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:11.422 16:26:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:11.422 16:26:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:11.422 16:26:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:11.422 16:26:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:11.422 16:26:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:11.422 16:26:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:11.422 16:26:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 
'Found net devices under 0000:4b:00.1: cvl_0_1' 00:18:11.422 Found net devices under 0000:4b:00.1: cvl_0_1 00:18:11.422 16:26:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:11.422 16:26:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:11.422 16:26:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # is_hw=yes 00:18:11.422 16:26:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:11.422 16:26:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:11.422 16:26:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:11.422 16:26:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:11.422 16:26:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:11.422 16:26:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:11.422 16:26:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:11.422 16:26:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:11.422 16:26:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:11.422 16:26:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:11.422 16:26:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:11.423 16:26:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:11.423 16:26:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:11.423 16:26:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:11.423 16:26:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:11.423 16:26:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:11.423 16:26:38 nvmf_tcp.nvmf_bdevio -- 
nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:11.423 16:26:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:11.423 16:26:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:11.423 16:26:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:11.423 16:26:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:11.423 16:26:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:11.683 16:26:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:11.683 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:11.684 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.605 ms 00:18:11.684 00:18:11.684 --- 10.0.0.2 ping statistics --- 00:18:11.684 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:11.684 rtt min/avg/max/mdev = 0.605/0.605/0.605/0.000 ms 00:18:11.684 16:26:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:11.684 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:11.684 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.256 ms 00:18:11.684 00:18:11.684 --- 10.0.0.1 ping statistics --- 00:18:11.684 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:11.684 rtt min/avg/max/mdev = 0.256/0.256/0.256/0.000 ms 00:18:11.684 16:26:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:11.684 16:26:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@422 -- # return 0 00:18:11.684 16:26:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:11.684 16:26:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:11.684 16:26:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:11.684 16:26:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:11.684 16:26:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:11.684 16:26:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:11.684 16:26:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:11.684 16:26:38 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:18:11.684 16:26:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:11.684 16:26:38 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@723 -- # xtrace_disable 00:18:11.684 16:26:38 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:18:11.684 16:26:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=3089156 00:18:11.684 16:26:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 3089156 00:18:11.684 16:26:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:18:11.684 16:26:38 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@830 -- # '[' -z 3089156 ']' 00:18:11.684 16:26:38 
nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:11.684 16:26:38 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@835 -- # local max_retries=100 00:18:11.684 16:26:38 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:11.684 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:11.684 16:26:38 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@839 -- # xtrace_disable 00:18:11.684 16:26:38 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:18:11.684 [2024-06-07 16:26:38.394216] Starting SPDK v24.09-pre git sha1 5a57befde / DPDK 24.03.0 initialization... 00:18:11.684 [2024-06-07 16:26:38.394265] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:11.684 EAL: No free 2048 kB hugepages reported on node 1 00:18:11.684 [2024-06-07 16:26:38.477133] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:11.945 [2024-06-07 16:26:38.541523] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:11.945 [2024-06-07 16:26:38.541559] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:11.945 [2024-06-07 16:26:38.541567] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:11.945 [2024-06-07 16:26:38.541573] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:11.945 [2024-06-07 16:26:38.541580] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:11.945 [2024-06-07 16:26:38.541721] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 4 00:18:11.945 [2024-06-07 16:26:38.541856] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 5 00:18:11.945 [2024-06-07 16:26:38.542007] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 3 00:18:11.945 [2024-06-07 16:26:38.542008] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 6 00:18:12.517 16:26:39 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:18:12.517 16:26:39 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@863 -- # return 0 00:18:12.517 16:26:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:12.517 16:26:39 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@729 -- # xtrace_disable 00:18:12.517 16:26:39 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:18:12.517 16:26:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:12.517 16:26:39 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:12.517 16:26:39 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:12.517 16:26:39 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:18:12.517 [2024-06-07 16:26:39.231191] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:12.517 16:26:39 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:12.517 16:26:39 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:12.517 16:26:39 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:12.517 16:26:39 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:18:12.517 Malloc0 00:18:12.517 16:26:39 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:12.517 16:26:39 nvmf_tcp.nvmf_bdevio 
-- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:12.517 16:26:39 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:12.517 16:26:39 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:18:12.517 16:26:39 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:12.517 16:26:39 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:12.517 16:26:39 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:12.517 16:26:39 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:18:12.517 16:26:39 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:12.517 16:26:39 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:12.517 16:26:39 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:12.517 16:26:39 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:18:12.517 [2024-06-07 16:26:39.296836] tcp.c: 982:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:12.517 16:26:39 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:12.517 16:26:39 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:18:12.517 16:26:39 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:18:12.517 16:26:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:18:12.517 16:26:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:18:12.517 16:26:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:18:12.517 16:26:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 
00:18:12.517 { 00:18:12.517 "params": { 00:18:12.517 "name": "Nvme$subsystem", 00:18:12.517 "trtype": "$TEST_TRANSPORT", 00:18:12.517 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:12.517 "adrfam": "ipv4", 00:18:12.517 "trsvcid": "$NVMF_PORT", 00:18:12.517 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:12.517 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:12.517 "hdgst": ${hdgst:-false}, 00:18:12.517 "ddgst": ${ddgst:-false} 00:18:12.517 }, 00:18:12.517 "method": "bdev_nvme_attach_controller" 00:18:12.517 } 00:18:12.518 EOF 00:18:12.518 )") 00:18:12.518 16:26:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:18:12.518 16:26:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 00:18:12.518 16:26:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:18:12.518 16:26:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:18:12.518 "params": { 00:18:12.518 "name": "Nvme1", 00:18:12.518 "trtype": "tcp", 00:18:12.518 "traddr": "10.0.0.2", 00:18:12.518 "adrfam": "ipv4", 00:18:12.518 "trsvcid": "4420", 00:18:12.518 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:12.518 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:12.518 "hdgst": false, 00:18:12.518 "ddgst": false 00:18:12.518 }, 00:18:12.518 "method": "bdev_nvme_attach_controller" 00:18:12.518 }' 00:18:12.518 [2024-06-07 16:26:39.361078] Starting SPDK v24.09-pre git sha1 5a57befde / DPDK 24.03.0 initialization... 
00:18:12.518 [2024-06-07 16:26:39.361160] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3089389 ] 00:18:12.778 EAL: No free 2048 kB hugepages reported on node 1 00:18:12.778 [2024-06-07 16:26:39.429278] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:12.778 [2024-06-07 16:26:39.505730] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:18:12.778 [2024-06-07 16:26:39.505848] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 2 00:18:12.778 [2024-06-07 16:26:39.505851] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:18:13.039 I/O targets: 00:18:13.039 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:18:13.039 00:18:13.039 00:18:13.039 CUnit - A unit testing framework for C - Version 2.1-3 00:18:13.039 http://cunit.sourceforge.net/ 00:18:13.039 00:18:13.039 00:18:13.039 Suite: bdevio tests on: Nvme1n1 00:18:13.039 Test: blockdev write read block ...passed 00:18:13.304 Test: blockdev write zeroes read block ...passed 00:18:13.304 Test: blockdev write zeroes read no split ...passed 00:18:13.304 Test: blockdev write zeroes read split ...passed 00:18:13.304 Test: blockdev write zeroes read split partial ...passed 00:18:13.304 Test: blockdev reset ...[2024-06-07 16:26:40.025814] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:13.304 [2024-06-07 16:26:40.025885] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b5f560 (9): Bad file descriptor 00:18:13.304 [2024-06-07 16:26:40.127597] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:18:13.304 passed 00:18:13.304 Test: blockdev write read 8 blocks ...passed 00:18:13.304 Test: blockdev write read size > 128k ...passed 00:18:13.304 Test: blockdev write read invalid size ...passed 00:18:13.606 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:18:13.606 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:18:13.606 Test: blockdev write read max offset ...passed 00:18:13.606 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:18:13.606 Test: blockdev writev readv 8 blocks ...passed 00:18:13.606 Test: blockdev writev readv 30 x 1block ...passed 00:18:13.606 Test: blockdev writev readv block ...passed 00:18:13.606 Test: blockdev writev readv size > 128k ...passed 00:18:13.606 Test: blockdev writev readv size > 128k in two iovs ...passed 00:18:13.607 Test: blockdev comparev and writev ...[2024-06-07 16:26:40.355062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:13.607 [2024-06-07 16:26:40.355090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:13.607 [2024-06-07 16:26:40.355105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:13.607 [2024-06-07 16:26:40.355110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:13.607 [2024-06-07 16:26:40.355675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:13.607 [2024-06-07 16:26:40.355683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:13.607 [2024-06-07 16:26:40.355693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:13.607 [2024-06-07 16:26:40.355698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:18:13.607 [2024-06-07 16:26:40.356228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:13.607 [2024-06-07 16:26:40.356235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:13.607 [2024-06-07 16:26:40.356244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:13.607 [2024-06-07 16:26:40.356249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:18:13.607 [2024-06-07 16:26:40.356748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:13.607 [2024-06-07 16:26:40.356756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:13.607 [2024-06-07 16:26:40.356765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:13.607 [2024-06-07 16:26:40.356770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:18:13.607 passed 00:18:13.607 Test: blockdev nvme passthru rw ...passed 00:18:13.607 Test: blockdev nvme passthru vendor specific ...[2024-06-07 16:26:40.441444] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:13.607 [2024-06-07 16:26:40.441454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:18:13.607 [2024-06-07 16:26:40.441864] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:13.607 [2024-06-07 16:26:40.441871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:18:13.607 [2024-06-07 16:26:40.442271] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:13.607 [2024-06-07 16:26:40.442278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:18:13.607 [2024-06-07 16:26:40.442678] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:13.607 [2024-06-07 16:26:40.442685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:18:13.607 passed 00:18:13.607 Test: blockdev nvme admin passthru ...passed 00:18:13.867 Test: blockdev copy ...passed 00:18:13.867 00:18:13.867 Run Summary: Type Total Ran Passed Failed Inactive 00:18:13.868 suites 1 1 n/a 0 0 00:18:13.868 tests 23 23 23 0 0 00:18:13.868 asserts 152 152 152 0 n/a 00:18:13.868 00:18:13.868 Elapsed time = 1.385 seconds 00:18:13.868 16:26:40 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:13.868 16:26:40 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:13.868 16:26:40 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:18:13.868 16:26:40 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:13.868 16:26:40 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:18:13.868 16:26:40 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 
00:18:13.868 16:26:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:13.868 16:26:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:18:13.868 16:26:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:13.868 16:26:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:18:13.868 16:26:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:13.868 16:26:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:13.868 rmmod nvme_tcp 00:18:13.868 rmmod nvme_fabrics 00:18:13.868 rmmod nvme_keyring 00:18:13.868 16:26:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:13.868 16:26:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:18:13.868 16:26:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 00:18:13.868 16:26:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 3089156 ']' 00:18:13.868 16:26:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 3089156 00:18:13.868 16:26:40 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@949 -- # '[' -z 3089156 ']' 00:18:13.868 16:26:40 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # kill -0 3089156 00:18:13.868 16:26:40 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # uname 00:18:13.868 16:26:40 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:18:13.868 16:26:40 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3089156 00:18:14.130 16:26:40 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@955 -- # process_name=reactor_3 00:18:14.130 16:26:40 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' reactor_3 = sudo ']' 00:18:14.130 16:26:40 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3089156' 00:18:14.130 killing process with pid 3089156 00:18:14.130 16:26:40 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@968 -- # kill 
3089156 00:18:14.130 16:26:40 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@973 -- # wait 3089156 00:18:14.130 16:26:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:14.130 16:26:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:14.130 16:26:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:14.130 16:26:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:14.130 16:26:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:14.130 16:26:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:14.130 16:26:40 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:14.130 16:26:40 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:16.676 16:26:42 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:16.676 00:18:16.676 real 0m11.771s 00:18:16.676 user 0m14.097s 00:18:16.676 sys 0m5.667s 00:18:16.676 16:26:42 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1125 -- # xtrace_disable 00:18:16.676 16:26:42 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:18:16.676 ************************************ 00:18:16.676 END TEST nvmf_bdevio 00:18:16.676 ************************************ 00:18:16.676 16:26:43 nvmf_tcp -- nvmf/nvmf.sh@57 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:18:16.676 16:26:43 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:18:16.676 16:26:43 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:18:16.676 16:26:43 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:16.676 ************************************ 00:18:16.676 START TEST nvmf_auth_target 00:18:16.676 ************************************ 00:18:16.676 16:26:43 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:18:16.676 * Looking for test storage... 00:18:16.676 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:16.676 16:26:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:16.676 16:26:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:18:16.676 16:26:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:16.676 16:26:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:16.676 16:26:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:16.676 16:26:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:16.676 16:26:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:16.676 16:26:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:16.676 16:26:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:16.676 16:26:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:16.676 16:26:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:16.676 16:26:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:16.676 16:26:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:16.676 16:26:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:16.676 16:26:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:16.676 16:26:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@20 -- # 
NVME_CONNECT='nvme connect' 00:18:16.676 16:26:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:16.676 16:26:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:16.676 16:26:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:16.676 16:26:43 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:16.676 16:26:43 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:16.676 16:26:43 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:16.676 16:26:43 nvmf_tcp.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:16.676 16:26:43 nvmf_tcp.nvmf_auth_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:16.676 16:26:43 nvmf_tcp.nvmf_auth_target -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:16.676 16:26:43 nvmf_tcp.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:18:16.676 16:26:43 nvmf_tcp.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:16.676 16:26:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:18:16.676 16:26:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:16.676 16:26:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:16.676 16:26:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:16.676 16:26:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:16.677 16:26:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:16.677 16:26:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:16.677 16:26:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 
']' 00:18:16.677 16:26:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:16.677 16:26:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:18:16.677 16:26:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:18:16.677 16:26:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:18:16.677 16:26:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:16.677 16:26:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:18:16.677 16:26:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:18:16.677 16:26:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:18:16.677 16:26:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@59 -- # nvmftestinit 00:18:16.677 16:26:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:16.677 16:26:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:16.677 16:26:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:16.677 16:26:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:16.677 16:26:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:16.677 16:26:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:16.677 16:26:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:16.677 16:26:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:16.677 16:26:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:16.677 16:26:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:16.677 
16:26:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@285 -- # xtrace_disable 00:18:16.677 16:26:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.265 16:26:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:23.265 16:26:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # pci_devs=() 00:18:23.265 16:26:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:23.265 16:26:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:23.265 16:26:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:23.265 16:26:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:23.265 16:26:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:23.265 16:26:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # net_devs=() 00:18:23.265 16:26:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:23.265 16:26:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # e810=() 00:18:23.265 16:26:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # local -ga e810 00:18:23.265 16:26:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # x722=() 00:18:23.265 16:26:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # local -ga x722 00:18:23.265 16:26:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # mlx=() 00:18:23.265 16:26:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # local -ga mlx 00:18:23.265 16:26:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:23.265 16:26:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:23.265 16:26:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:23.265 16:26:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:23.266 16:26:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:23.266 16:26:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:23.266 16:26:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:23.266 16:26:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:23.266 16:26:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:23.266 16:26:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:23.266 16:26:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:23.266 16:26:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:23.266 16:26:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:23.266 16:26:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:23.266 16:26:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:23.266 16:26:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:23.266 16:26:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:23.266 16:26:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:23.266 16:26:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:18:23.266 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:18:23.266 16:26:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:23.266 16:26:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:23.266 16:26:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:18:23.266 16:26:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:23.266 16:26:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:23.266 16:26:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:23.266 16:26:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:18:23.266 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:18:23.266 16:26:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:23.266 16:26:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:23.266 16:26:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:23.266 16:26:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:23.266 16:26:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:23.266 16:26:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:23.266 16:26:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:23.266 16:26:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:23.266 16:26:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:23.266 16:26:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:23.266 16:26:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:23.266 16:26:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:23.266 16:26:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:23.266 16:26:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:23.266 16:26:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:23.266 16:26:49 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:18:23.266 Found net devices under 0000:4b:00.0: cvl_0_0 00:18:23.266 16:26:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:23.266 16:26:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:23.266 16:26:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:23.266 16:26:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:23.266 16:26:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:23.266 16:26:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:23.266 16:26:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:23.266 16:26:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:23.266 16:26:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:18:23.266 Found net devices under 0000:4b:00.1: cvl_0_1 00:18:23.266 16:26:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:23.266 16:26:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:23.266 16:26:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # is_hw=yes 00:18:23.266 16:26:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:23.266 16:26:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:23.266 16:26:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:23.266 16:26:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:23.266 16:26:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:23.266 16:26:49 nvmf_tcp.nvmf_auth_target -- 
nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:23.266 16:26:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:23.266 16:26:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:23.266 16:26:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:23.266 16:26:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:23.266 16:26:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:23.266 16:26:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:23.266 16:26:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:23.266 16:26:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:23.266 16:26:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:23.266 16:26:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:23.266 16:26:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:23.266 16:26:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:23.266 16:26:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:23.266 16:26:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:23.266 16:26:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:23.266 16:26:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:23.527 16:26:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:23.527 PING 10.0.0.2 (10.0.0.2) 
56(84) bytes of data. 00:18:23.527 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.542 ms 00:18:23.527 00:18:23.527 --- 10.0.0.2 ping statistics --- 00:18:23.527 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:23.527 rtt min/avg/max/mdev = 0.542/0.542/0.542/0.000 ms 00:18:23.527 16:26:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:23.527 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:23.527 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.565 ms 00:18:23.527 00:18:23.527 --- 10.0.0.1 ping statistics --- 00:18:23.527 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:23.527 rtt min/avg/max/mdev = 0.565/0.565/0.565/0.000 ms 00:18:23.527 16:26:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:23.527 16:26:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@422 -- # return 0 00:18:23.527 16:26:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:23.527 16:26:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:23.527 16:26:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:23.527 16:26:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:23.527 16:26:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:23.527 16:26:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:23.527 16:26:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:23.527 16:26:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@60 -- # nvmfappstart -L nvmf_auth 00:18:23.527 16:26:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:23.527 16:26:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@723 -- # xtrace_disable 00:18:23.527 16:26:50 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:18:23.527 16:26:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=3093840 00:18:23.527 16:26:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 3093840 00:18:23.527 16:26:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:18:23.527 16:26:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@830 -- # '[' -z 3093840 ']' 00:18:23.527 16:26:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:23.527 16:26:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@835 -- # local max_retries=100 00:18:23.527 16:26:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:23.527 16:26:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@839 -- # xtrace_disable 00:18:23.527 16:26:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.469 16:26:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:18:24.469 16:26:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@863 -- # return 0 00:18:24.469 16:26:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:24.469 16:26:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@729 -- # xtrace_disable 00:18:24.470 16:26:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.470 16:26:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:24.470 16:26:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@62 -- # hostpid=3093868 00:18:24.470 16:26:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@64 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 
00:18:24.470 16:26:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:18:24.470 16:26:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key null 48 00:18:24.470 16:26:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # local digest len file key 00:18:24.470 16:26:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:24.470 16:26:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # local -A digests 00:18:24.470 16:26:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=null 00:18:24.470 16:26:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # len=48 00:18:24.470 16:26:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@733 -- # xxd -p -c0 -l 24 /dev/urandom 00:18:24.470 16:26:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@733 -- # key=ae4a0be8b2e1dff88c0736823f6cb26bde0d6b5b36242f0d 00:18:24.470 16:26:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@734 -- # mktemp -t spdk.key-null.XXX 00:18:24.470 16:26:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@734 -- # file=/tmp/spdk.key-null.CPH 00:18:24.470 16:26:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@735 -- # format_dhchap_key ae4a0be8b2e1dff88c0736823f6cb26bde0d6b5b36242f0d 0 00:18:24.470 16:26:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@725 -- # format_key DHHC-1 ae4a0be8b2e1dff88c0736823f6cb26bde0d6b5b36242f0d 0 00:18:24.470 16:26:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@708 -- # local prefix key digest 00:18:24.470 16:26:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@710 -- # prefix=DHHC-1 00:18:24.470 16:26:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@710 -- # key=ae4a0be8b2e1dff88c0736823f6cb26bde0d6b5b36242f0d 00:18:24.470 16:26:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@710 -- # digest=0 00:18:24.470 16:26:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@711 -- # python - 
00:18:24.470 16:26:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@736 -- # chmod 0600 /tmp/spdk.key-null.CPH 00:18:24.470 16:26:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@738 -- # echo /tmp/spdk.key-null.CPH 00:18:24.470 16:26:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # keys[0]=/tmp/spdk.key-null.CPH 00:18:24.470 16:26:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:18:24.470 16:26:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # local digest len file key 00:18:24.470 16:26:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:24.470 16:26:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # local -A digests 00:18:24.470 16:26:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=sha512 00:18:24.470 16:26:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # len=64 00:18:24.470 16:26:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@733 -- # xxd -p -c0 -l 32 /dev/urandom 00:18:24.470 16:26:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@733 -- # key=97a14b5e0f403c52a9e2968d485d37975207127d0c2cc9ef7ff72c0ac0614eed 00:18:24.470 16:26:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@734 -- # mktemp -t spdk.key-sha512.XXX 00:18:24.470 16:26:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@734 -- # file=/tmp/spdk.key-sha512.2Hs 00:18:24.470 16:26:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@735 -- # format_dhchap_key 97a14b5e0f403c52a9e2968d485d37975207127d0c2cc9ef7ff72c0ac0614eed 3 00:18:24.470 16:26:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@725 -- # format_key DHHC-1 97a14b5e0f403c52a9e2968d485d37975207127d0c2cc9ef7ff72c0ac0614eed 3 00:18:24.470 16:26:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@708 -- # local prefix key digest 00:18:24.470 16:26:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@710 -- # prefix=DHHC-1 00:18:24.470 16:26:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@710 -- # 
key=97a14b5e0f403c52a9e2968d485d37975207127d0c2cc9ef7ff72c0ac0614eed 00:18:24.470 16:26:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@710 -- # digest=3 00:18:24.470 16:26:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@711 -- # python - 00:18:24.470 16:26:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@736 -- # chmod 0600 /tmp/spdk.key-sha512.2Hs 00:18:24.470 16:26:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@738 -- # echo /tmp/spdk.key-sha512.2Hs 00:18:24.470 16:26:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # ckeys[0]=/tmp/spdk.key-sha512.2Hs 00:18:24.470 16:26:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha256 32 00:18:24.470 16:26:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # local digest len file key 00:18:24.470 16:26:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:24.470 16:26:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # local -A digests 00:18:24.470 16:26:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=sha256 00:18:24.470 16:26:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # len=32 00:18:24.470 16:26:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@733 -- # xxd -p -c0 -l 16 /dev/urandom 00:18:24.470 16:26:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@733 -- # key=1706eb1dd7f4b9a63d138c71efe1b679 00:18:24.470 16:26:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@734 -- # mktemp -t spdk.key-sha256.XXX 00:18:24.470 16:26:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@734 -- # file=/tmp/spdk.key-sha256.aHV 00:18:24.470 16:26:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@735 -- # format_dhchap_key 1706eb1dd7f4b9a63d138c71efe1b679 1 00:18:24.470 16:26:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@725 -- # format_key DHHC-1 1706eb1dd7f4b9a63d138c71efe1b679 1 00:18:24.470 16:26:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@708 -- # local prefix key digest 00:18:24.470 16:26:51 nvmf_tcp.nvmf_auth_target -- 
nvmf/common.sh@710 -- # prefix=DHHC-1 00:18:24.470 16:26:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@710 -- # key=1706eb1dd7f4b9a63d138c71efe1b679 00:18:24.470 16:26:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@710 -- # digest=1 00:18:24.470 16:26:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@711 -- # python - 00:18:24.470 16:26:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@736 -- # chmod 0600 /tmp/spdk.key-sha256.aHV 00:18:24.470 16:26:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@738 -- # echo /tmp/spdk.key-sha256.aHV 00:18:24.470 16:26:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # keys[1]=/tmp/spdk.key-sha256.aHV 00:18:24.470 16:26:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha384 48 00:18:24.470 16:26:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # local digest len file key 00:18:24.470 16:26:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:24.470 16:26:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # local -A digests 00:18:24.470 16:26:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=sha384 00:18:24.470 16:26:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # len=48 00:18:24.470 16:26:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@733 -- # xxd -p -c0 -l 24 /dev/urandom 00:18:24.470 16:26:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@733 -- # key=c20bb461cd44f163c2f7e262e69831cd2bd678c135cb5285 00:18:24.470 16:26:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@734 -- # mktemp -t spdk.key-sha384.XXX 00:18:24.470 16:26:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@734 -- # file=/tmp/spdk.key-sha384.iHt 00:18:24.470 16:26:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@735 -- # format_dhchap_key c20bb461cd44f163c2f7e262e69831cd2bd678c135cb5285 2 00:18:24.470 16:26:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@725 -- # format_key DHHC-1 c20bb461cd44f163c2f7e262e69831cd2bd678c135cb5285 2 00:18:24.470 16:26:51 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@708 -- # local prefix key digest 00:18:24.470 16:26:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@710 -- # prefix=DHHC-1 00:18:24.470 16:26:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@710 -- # key=c20bb461cd44f163c2f7e262e69831cd2bd678c135cb5285 00:18:24.470 16:26:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@710 -- # digest=2 00:18:24.470 16:26:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@711 -- # python - 00:18:24.470 16:26:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@736 -- # chmod 0600 /tmp/spdk.key-sha384.iHt 00:18:24.470 16:26:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@738 -- # echo /tmp/spdk.key-sha384.iHt 00:18:24.470 16:26:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # ckeys[1]=/tmp/spdk.key-sha384.iHt 00:18:24.470 16:26:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha384 48 00:18:24.470 16:26:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # local digest len file key 00:18:24.470 16:26:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:24.470 16:26:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # local -A digests 00:18:24.470 16:26:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=sha384 00:18:24.470 16:26:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # len=48 00:18:24.470 16:26:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@733 -- # xxd -p -c0 -l 24 /dev/urandom 00:18:24.470 16:26:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@733 -- # key=ae6aeddc08262e5a1f6f0c03389bbe9f4583591a90988413 00:18:24.470 16:26:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@734 -- # mktemp -t spdk.key-sha384.XXX 00:18:24.470 16:26:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@734 -- # file=/tmp/spdk.key-sha384.BQu 00:18:24.470 16:26:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@735 -- # format_dhchap_key ae6aeddc08262e5a1f6f0c03389bbe9f4583591a90988413 2 00:18:24.470 
16:26:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@725 -- # format_key DHHC-1 ae6aeddc08262e5a1f6f0c03389bbe9f4583591a90988413 2 00:18:24.470 16:26:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@708 -- # local prefix key digest 00:18:24.470 16:26:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@710 -- # prefix=DHHC-1 00:18:24.470 16:26:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@710 -- # key=ae6aeddc08262e5a1f6f0c03389bbe9f4583591a90988413 00:18:24.470 16:26:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@710 -- # digest=2 00:18:24.470 16:26:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@711 -- # python - 00:18:24.732 16:26:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@736 -- # chmod 0600 /tmp/spdk.key-sha384.BQu 00:18:24.732 16:26:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@738 -- # echo /tmp/spdk.key-sha384.BQu 00:18:24.732 16:26:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # keys[2]=/tmp/spdk.key-sha384.BQu 00:18:24.732 16:26:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha256 32 00:18:24.732 16:26:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # local digest len file key 00:18:24.732 16:26:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:24.732 16:26:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # local -A digests 00:18:24.732 16:26:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=sha256 00:18:24.732 16:26:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # len=32 00:18:24.732 16:26:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@733 -- # xxd -p -c0 -l 16 /dev/urandom 00:18:24.732 16:26:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@733 -- # key=83a79c3a6efa56420f86d96685101421 00:18:24.732 16:26:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@734 -- # mktemp -t spdk.key-sha256.XXX 00:18:24.732 16:26:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@734 -- # file=/tmp/spdk.key-sha256.B6U 00:18:24.732 16:26:51 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@735 -- # format_dhchap_key 83a79c3a6efa56420f86d96685101421 1 00:18:24.732 16:26:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@725 -- # format_key DHHC-1 83a79c3a6efa56420f86d96685101421 1 00:18:24.732 16:26:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@708 -- # local prefix key digest 00:18:24.732 16:26:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@710 -- # prefix=DHHC-1 00:18:24.732 16:26:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@710 -- # key=83a79c3a6efa56420f86d96685101421 00:18:24.732 16:26:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@710 -- # digest=1 00:18:24.732 16:26:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@711 -- # python - 00:18:24.732 16:26:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@736 -- # chmod 0600 /tmp/spdk.key-sha256.B6U 00:18:24.732 16:26:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@738 -- # echo /tmp/spdk.key-sha256.B6U 00:18:24.732 16:26:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # ckeys[2]=/tmp/spdk.key-sha256.B6U 00:18:24.732 16:26:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # gen_dhchap_key sha512 64 00:18:24.732 16:26:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # local digest len file key 00:18:24.732 16:26:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:24.732 16:26:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # local -A digests 00:18:24.732 16:26:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=sha512 00:18:24.732 16:26:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # len=64 00:18:24.732 16:26:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@733 -- # xxd -p -c0 -l 32 /dev/urandom 00:18:24.732 16:26:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@733 -- # key=c5f40e9b9bef6133679c084fa35ce8963f11d8fe0f9f077df42d5ca18d80e881 00:18:24.732 16:26:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@734 -- # mktemp -t spdk.key-sha512.XXX 00:18:24.732 
16:26:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@734 -- # file=/tmp/spdk.key-sha512.TgY 00:18:24.732 16:26:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@735 -- # format_dhchap_key c5f40e9b9bef6133679c084fa35ce8963f11d8fe0f9f077df42d5ca18d80e881 3 00:18:24.732 16:26:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@725 -- # format_key DHHC-1 c5f40e9b9bef6133679c084fa35ce8963f11d8fe0f9f077df42d5ca18d80e881 3 00:18:24.732 16:26:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@708 -- # local prefix key digest 00:18:24.732 16:26:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@710 -- # prefix=DHHC-1 00:18:24.732 16:26:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@710 -- # key=c5f40e9b9bef6133679c084fa35ce8963f11d8fe0f9f077df42d5ca18d80e881 00:18:24.732 16:26:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@710 -- # digest=3 00:18:24.732 16:26:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@711 -- # python - 00:18:24.732 16:26:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@736 -- # chmod 0600 /tmp/spdk.key-sha512.TgY 00:18:24.732 16:26:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@738 -- # echo /tmp/spdk.key-sha512.TgY 00:18:24.732 16:26:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # keys[3]=/tmp/spdk.key-sha512.TgY 00:18:24.732 16:26:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # ckeys[3]= 00:18:24.732 16:26:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@72 -- # waitforlisten 3093840 00:18:24.732 16:26:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@830 -- # '[' -z 3093840 ']' 00:18:24.732 16:26:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:24.732 16:26:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@835 -- # local max_retries=100 00:18:24.732 16:26:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:18:24.732 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:24.732 16:26:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@839 -- # xtrace_disable 00:18:24.732 16:26:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.993 16:26:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:18:24.993 16:26:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@863 -- # return 0 00:18:24.993 16:26:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@73 -- # waitforlisten 3093868 /var/tmp/host.sock 00:18:24.993 16:26:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@830 -- # '[' -z 3093868 ']' 00:18:24.993 16:26:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/host.sock 00:18:24.993 16:26:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@835 -- # local max_retries=100 00:18:24.993 16:26:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:18:24.993 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
00:18:24.993 16:26:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@839 -- # xtrace_disable 00:18:24.993 16:26:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.993 16:26:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:18:24.993 16:26:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@863 -- # return 0 00:18:24.993 16:26:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 00:18:24.993 16:26:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:24.993 16:26:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.993 16:26:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:24.993 16:26:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:18:24.993 16:26:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.CPH 00:18:24.993 16:26:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:24.993 16:26:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.993 16:26:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:24.993 16:26:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.CPH 00:18:24.993 16:26:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.CPH 00:18:25.254 16:26:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha512.2Hs ]] 00:18:25.254 16:26:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.2Hs 00:18:25.254 16:26:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:25.254 16:26:51 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:25.254 16:26:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:25.254 16:26:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.2Hs 00:18:25.254 16:26:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.2Hs 00:18:25.515 16:26:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:18:25.515 16:26:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.aHV 00:18:25.515 16:26:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:25.515 16:26:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:25.515 16:26:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:25.515 16:26:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.aHV 00:18:25.515 16:26:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.aHV 00:18:25.515 16:26:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha384.iHt ]] 00:18:25.515 16:26:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.iHt 00:18:25.515 16:26:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:25.515 16:26:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:25.515 16:26:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:25.515 16:26:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc 
keyring_file_add_key ckey1 /tmp/spdk.key-sha384.iHt 00:18:25.515 16:26:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.iHt 00:18:25.775 16:26:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:18:25.775 16:26:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.BQu 00:18:25.775 16:26:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:25.775 16:26:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:25.775 16:26:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:25.775 16:26:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.BQu 00:18:25.775 16:26:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.BQu 00:18:26.035 16:26:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha256.B6U ]] 00:18:26.035 16:26:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.B6U 00:18:26.035 16:26:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:26.035 16:26:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.035 16:26:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:26.035 16:26:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.B6U 00:18:26.035 16:26:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 
/tmp/spdk.key-sha256.B6U 00:18:26.035 16:26:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:18:26.035 16:26:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.TgY 00:18:26.035 16:26:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:26.035 16:26:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.035 16:26:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:26.035 16:26:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.TgY 00:18:26.035 16:26:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.TgY 00:18:26.295 16:26:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n '' ]] 00:18:26.295 16:26:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:18:26.295 16:26:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:26.295 16:26:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:26.295 16:26:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:26.295 16:26:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:26.555 16:26:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 0 00:18:26.555 16:26:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:26.555 16:26:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:26.555 16:26:53 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:26.555 16:26:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:26.555 16:26:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:26.555 16:26:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:26.555 16:26:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:26.555 16:26:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.555 16:26:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:26.555 16:26:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:26.555 16:26:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:26.815 00:18:26.815 16:26:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:26.815 16:26:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:26.815 16:26:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:26.815 16:26:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:26.815 
16:26:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:26.815 16:26:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:26.815 16:26:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.815 16:26:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:26.815 16:26:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:26.815 { 00:18:26.815 "cntlid": 1, 00:18:26.815 "qid": 0, 00:18:26.815 "state": "enabled", 00:18:26.815 "listen_address": { 00:18:26.815 "trtype": "TCP", 00:18:26.815 "adrfam": "IPv4", 00:18:26.815 "traddr": "10.0.0.2", 00:18:26.815 "trsvcid": "4420" 00:18:26.815 }, 00:18:26.815 "peer_address": { 00:18:26.815 "trtype": "TCP", 00:18:26.815 "adrfam": "IPv4", 00:18:26.815 "traddr": "10.0.0.1", 00:18:26.815 "trsvcid": "42898" 00:18:26.815 }, 00:18:26.815 "auth": { 00:18:26.815 "state": "completed", 00:18:26.815 "digest": "sha256", 00:18:26.815 "dhgroup": "null" 00:18:26.815 } 00:18:26.815 } 00:18:26.815 ]' 00:18:26.815 16:26:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:27.074 16:26:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:27.074 16:26:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:27.074 16:26:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:27.074 16:26:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:27.074 16:26:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:27.074 16:26:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:27.074 16:26:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:18:27.074 16:26:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:YWU0YTBiZThiMmUxZGZmODhjMDczNjgyM2Y2Y2IyNmJkZTBkNmI1YjM2MjQyZjBkgXnm2Q==: --dhchap-ctrl-secret DHHC-1:03:OTdhMTRiNWUwZjQwM2M1MmE5ZTI5NjhkNDg1ZDM3OTc1MjA3MTI3ZDBjMmNjOWVmN2ZmNzJjMGFjMDYxNGVlZI1Bu8c=: 00:18:28.017 16:26:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:28.017 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:28.017 16:26:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:28.017 16:26:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:28.017 16:26:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.017 16:26:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:28.017 16:26:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:28.017 16:26:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:28.017 16:26:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:28.017 16:26:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 1 00:18:28.017 16:26:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:28.017 16:26:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 
00:18:28.017 16:26:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:28.017 16:26:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:28.017 16:26:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:28.017 16:26:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:28.017 16:26:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:28.017 16:26:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.017 16:26:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:28.017 16:26:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:28.017 16:26:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:28.277 00:18:28.277 16:26:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:28.277 16:26:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:28.277 16:26:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:28.536 16:26:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == 
\n\v\m\e\0 ]] 00:18:28.536 16:26:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:28.536 16:26:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:28.536 16:26:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.536 16:26:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:28.536 16:26:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:28.536 { 00:18:28.536 "cntlid": 3, 00:18:28.536 "qid": 0, 00:18:28.536 "state": "enabled", 00:18:28.536 "listen_address": { 00:18:28.536 "trtype": "TCP", 00:18:28.536 "adrfam": "IPv4", 00:18:28.536 "traddr": "10.0.0.2", 00:18:28.536 "trsvcid": "4420" 00:18:28.536 }, 00:18:28.536 "peer_address": { 00:18:28.536 "trtype": "TCP", 00:18:28.536 "adrfam": "IPv4", 00:18:28.536 "traddr": "10.0.0.1", 00:18:28.536 "trsvcid": "42914" 00:18:28.536 }, 00:18:28.536 "auth": { 00:18:28.536 "state": "completed", 00:18:28.536 "digest": "sha256", 00:18:28.536 "dhgroup": "null" 00:18:28.536 } 00:18:28.536 } 00:18:28.536 ]' 00:18:28.536 16:26:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:28.536 16:26:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:28.536 16:26:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:28.536 16:26:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:28.536 16:26:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:28.536 16:26:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:28.536 16:26:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:28.536 16:26:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:28.794 16:26:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:MTcwNmViMWRkN2Y0YjlhNjNkMTM4YzcxZWZlMWI2Nzn6I0Bv: --dhchap-ctrl-secret DHHC-1:02:YzIwYmI0NjFjZDQ0ZjE2M2MyZjdlMjYyZTY5ODMxY2QyYmQ2NzhjMTM1Y2I1Mjg1Hz6qOA==: 00:18:29.735 16:26:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:29.735 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:29.735 16:26:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:29.735 16:26:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:29.735 16:26:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:29.735 16:26:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:29.735 16:26:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:29.735 16:26:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:29.735 16:26:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:29.735 16:26:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 2 00:18:29.735 16:26:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:29.735 16:26:56 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:29.735 16:26:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:29.735 16:26:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:29.735 16:26:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:29.735 16:26:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:29.735 16:26:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:29.735 16:26:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:29.735 16:26:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:29.735 16:26:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:29.735 16:26:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:29.996 00:18:29.996 16:26:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:29.996 16:26:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:29.996 16:26:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:30.258 16:26:56 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:30.258 16:26:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:30.258 16:26:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:30.258 16:26:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:30.258 16:26:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:30.258 16:26:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:30.258 { 00:18:30.258 "cntlid": 5, 00:18:30.258 "qid": 0, 00:18:30.258 "state": "enabled", 00:18:30.258 "listen_address": { 00:18:30.258 "trtype": "TCP", 00:18:30.258 "adrfam": "IPv4", 00:18:30.258 "traddr": "10.0.0.2", 00:18:30.258 "trsvcid": "4420" 00:18:30.258 }, 00:18:30.258 "peer_address": { 00:18:30.258 "trtype": "TCP", 00:18:30.258 "adrfam": "IPv4", 00:18:30.258 "traddr": "10.0.0.1", 00:18:30.258 "trsvcid": "42950" 00:18:30.258 }, 00:18:30.258 "auth": { 00:18:30.258 "state": "completed", 00:18:30.258 "digest": "sha256", 00:18:30.258 "dhgroup": "null" 00:18:30.258 } 00:18:30.258 } 00:18:30.258 ]' 00:18:30.258 16:26:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:30.258 16:26:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:30.258 16:26:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:30.258 16:26:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:30.258 16:26:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:30.258 16:26:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:30.258 16:26:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:30.258 16:26:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:30.517 16:26:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:YWU2YWVkZGMwODI2MmU1YTFmNmYwYzAzMzg5YmJlOWY0NTgzNTkxYTkwOTg4NDEzo6OqJg==: --dhchap-ctrl-secret DHHC-1:01:ODNhNzljM2E2ZWZhNTY0MjBmODZkOTY2ODUxMDE0MjE3bkvD: 00:18:31.087 16:26:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:31.347 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:31.347 16:26:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:31.347 16:26:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:31.347 16:26:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:31.347 16:26:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:31.347 16:26:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:31.347 16:26:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:31.347 16:26:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:31.347 16:26:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 3 00:18:31.347 16:26:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:31.347 16:26:58 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:31.347 16:26:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:31.347 16:26:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:31.347 16:26:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:31.347 16:26:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:31.347 16:26:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:31.347 16:26:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:31.347 16:26:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:31.347 16:26:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:31.347 16:26:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:31.608 00:18:31.608 16:26:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:31.608 16:26:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:31.608 16:26:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:31.867 16:26:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 
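The trace above repeatedly validates the qpair list returned by `nvmf_subsystem_get_qpairs` with jq (`.[0].auth.digest`, `.[0].auth.dhgroup`, `.[0].auth.state`). As an illustrative aside (not part of the test script), the same checks can be sketched in Python against the JSON shape shown verbatim in this log; the sample below copies one qpair record from the output above:

```python
import json

# Sample qpair list in the shape emitted by nvmf_subsystem_get_qpairs,
# copied from the log output above (addresses/ports are from this run).
qpairs_json = '''
[
  {
    "cntlid": 7,
    "qid": 0,
    "state": "enabled",
    "listen_address": {"trtype": "TCP", "adrfam": "IPv4",
                       "traddr": "10.0.0.2", "trsvcid": "4420"},
    "peer_address": {"trtype": "TCP", "adrfam": "IPv4",
                     "traddr": "10.0.0.1", "trsvcid": "42970"},
    "auth": {"state": "completed", "digest": "sha256", "dhgroup": "null"}
  }
]
'''

qpairs = json.loads(qpairs_json)
auth = qpairs[0]["auth"]

# Equivalent of the jq '.[0].auth.digest' / '.dhgroup' / '.state' checks
# performed by target/auth.sh@46-48 in the trace.
assert auth["digest"] == "sha256"
assert auth["dhgroup"] == "null"
assert auth["state"] == "completed"
print("auth checks passed")
```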
00:18:31.868 16:26:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:31.868 16:26:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:31.868 16:26:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:31.868 16:26:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:31.868 16:26:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:31.868 { 00:18:31.868 "cntlid": 7, 00:18:31.868 "qid": 0, 00:18:31.868 "state": "enabled", 00:18:31.868 "listen_address": { 00:18:31.868 "trtype": "TCP", 00:18:31.868 "adrfam": "IPv4", 00:18:31.868 "traddr": "10.0.0.2", 00:18:31.868 "trsvcid": "4420" 00:18:31.868 }, 00:18:31.868 "peer_address": { 00:18:31.868 "trtype": "TCP", 00:18:31.868 "adrfam": "IPv4", 00:18:31.868 "traddr": "10.0.0.1", 00:18:31.868 "trsvcid": "42970" 00:18:31.868 }, 00:18:31.868 "auth": { 00:18:31.868 "state": "completed", 00:18:31.868 "digest": "sha256", 00:18:31.868 "dhgroup": "null" 00:18:31.868 } 00:18:31.868 } 00:18:31.868 ]' 00:18:31.868 16:26:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:31.868 16:26:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:31.868 16:26:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:31.868 16:26:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:31.868 16:26:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:31.868 16:26:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:31.868 16:26:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:31.868 16:26:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:32.128 16:26:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:YzVmNDBlOWI5YmVmNjEzMzY3OWMwODRmYTM1Y2U4OTYzZjExZDhmZTBmOWYwNzdkZjQyZDVjYTE4ZDgwZTg4MdbDEV4=: 00:18:32.839 16:26:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:32.839 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:32.839 16:26:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:32.839 16:26:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:32.839 16:26:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.100 16:26:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:33.100 16:26:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:33.100 16:26:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:33.100 16:26:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:33.100 16:26:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:33.100 16:26:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 0 00:18:33.101 16:26:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:33.101 16:26:59 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@36 -- # digest=sha256 00:18:33.101 16:26:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:33.101 16:26:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:33.101 16:26:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:33.101 16:26:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:33.101 16:26:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:33.101 16:26:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.101 16:26:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:33.101 16:26:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:33.101 16:26:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:33.101 00:18:33.363 16:26:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:33.363 16:26:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:33.363 16:26:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:33.363 16:27:00 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:33.363 16:27:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:33.363 16:27:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:33.363 16:27:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.363 16:27:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:33.363 16:27:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:33.363 { 00:18:33.363 "cntlid": 9, 00:18:33.363 "qid": 0, 00:18:33.363 "state": "enabled", 00:18:33.363 "listen_address": { 00:18:33.363 "trtype": "TCP", 00:18:33.363 "adrfam": "IPv4", 00:18:33.363 "traddr": "10.0.0.2", 00:18:33.363 "trsvcid": "4420" 00:18:33.363 }, 00:18:33.363 "peer_address": { 00:18:33.363 "trtype": "TCP", 00:18:33.363 "adrfam": "IPv4", 00:18:33.363 "traddr": "10.0.0.1", 00:18:33.363 "trsvcid": "42992" 00:18:33.363 }, 00:18:33.363 "auth": { 00:18:33.363 "state": "completed", 00:18:33.363 "digest": "sha256", 00:18:33.363 "dhgroup": "ffdhe2048" 00:18:33.363 } 00:18:33.363 } 00:18:33.363 ]' 00:18:33.363 16:27:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:33.363 16:27:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:33.363 16:27:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:33.623 16:27:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:33.623 16:27:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:33.623 16:27:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:33.623 16:27:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:33.623 16:27:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:33.623 16:27:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:YWU0YTBiZThiMmUxZGZmODhjMDczNjgyM2Y2Y2IyNmJkZTBkNmI1YjM2MjQyZjBkgXnm2Q==: --dhchap-ctrl-secret DHHC-1:03:OTdhMTRiNWUwZjQwM2M1MmE5ZTI5NjhkNDg1ZDM3OTc1MjA3MTI3ZDBjMmNjOWVmN2ZmNzJjMGFjMDYxNGVlZI1Bu8c=: 00:18:34.565 16:27:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:34.565 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:34.565 16:27:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:34.565 16:27:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:34.565 16:27:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.565 16:27:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:34.565 16:27:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:34.565 16:27:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:34.565 16:27:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:34.565 16:27:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 1 00:18:34.565 16:27:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key 
ckey qpairs 00:18:34.565 16:27:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:34.565 16:27:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:34.565 16:27:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:34.565 16:27:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:34.565 16:27:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:34.565 16:27:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:34.565 16:27:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.565 16:27:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:34.565 16:27:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:34.565 16:27:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:34.826 00:18:34.826 16:27:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:34.826 16:27:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:34.826 16:27:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:18:35.086 16:27:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:35.086 16:27:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:35.086 16:27:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:35.086 16:27:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:35.086 16:27:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:35.086 16:27:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:35.086 { 00:18:35.086 "cntlid": 11, 00:18:35.086 "qid": 0, 00:18:35.086 "state": "enabled", 00:18:35.086 "listen_address": { 00:18:35.086 "trtype": "TCP", 00:18:35.086 "adrfam": "IPv4", 00:18:35.086 "traddr": "10.0.0.2", 00:18:35.086 "trsvcid": "4420" 00:18:35.086 }, 00:18:35.086 "peer_address": { 00:18:35.086 "trtype": "TCP", 00:18:35.086 "adrfam": "IPv4", 00:18:35.086 "traddr": "10.0.0.1", 00:18:35.086 "trsvcid": "43014" 00:18:35.086 }, 00:18:35.086 "auth": { 00:18:35.086 "state": "completed", 00:18:35.086 "digest": "sha256", 00:18:35.086 "dhgroup": "ffdhe2048" 00:18:35.086 } 00:18:35.086 } 00:18:35.086 ]' 00:18:35.086 16:27:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:35.086 16:27:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:35.086 16:27:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:35.086 16:27:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:35.086 16:27:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:35.086 16:27:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:35.345 16:27:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 
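Each cycle in the trace builds the controller-key arguments with `ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})` (target/auth.sh@37), so `--dhchap-ctrlr-key` is passed only when a controller key exists for that key index; this is why the key3 cycle above attaches without a ctrlr key. A minimal standalone bash sketch of that `:+` expansion, with placeholder key names rather than real keys:

```shell
#!/usr/bin/env bash
# Sketch of the ${ckeys[$i]:+...} pattern from the auth.sh trace above:
# the --dhchap-ctrlr-key arguments are emitted only when a controller
# key is set AND non-empty for that index. Values are placeholders.
ckeys=([0]="ck0" [1]="ck1" [3]="")   # index 2 unset, index 3 empty

build_ckey_args() {
    local i=$1
    # :+ substitutes the alternative only for a set, non-empty variable
    ckey=(${ckeys[$i]:+--dhchap-ctrlr-key "ckey$i"})
    echo "${#ckey[@]}"
}

build_ckey_args 1   # prints 2: both argument words are emitted
build_ckey_args 2   # prints 0: unset index -> no arguments
build_ckey_args 3   # prints 0: empty value also suppresses the arguments
```

The same array-assignment trick keeps the later `nvmf_subsystem_add_host` and `bdev_nvme_attach_controller` command lines valid whether or not a ctrlr key applies.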
00:18:35.345 16:27:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:35.346 16:27:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:MTcwNmViMWRkN2Y0YjlhNjNkMTM4YzcxZWZlMWI2Nzn6I0Bv: --dhchap-ctrl-secret DHHC-1:02:YzIwYmI0NjFjZDQ0ZjE2M2MyZjdlMjYyZTY5ODMxY2QyYmQ2NzhjMTM1Y2I1Mjg1Hz6qOA==: 00:18:36.285 16:27:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:36.285 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:36.285 16:27:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:36.285 16:27:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:36.285 16:27:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:36.285 16:27:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:36.285 16:27:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:36.285 16:27:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:36.285 16:27:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:36.285 16:27:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 2 00:18:36.285 16:27:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 
-- # local digest dhgroup key ckey qpairs 00:18:36.285 16:27:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:36.285 16:27:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:36.285 16:27:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:36.285 16:27:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:36.285 16:27:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:36.285 16:27:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:36.285 16:27:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:36.285 16:27:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:36.285 16:27:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:36.285 16:27:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:36.545 00:18:36.545 16:27:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:36.545 16:27:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:36.545 16:27:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:18:36.805 16:27:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:36.805 16:27:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:36.805 16:27:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:36.805 16:27:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:36.805 16:27:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:36.805 16:27:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:36.805 { 00:18:36.805 "cntlid": 13, 00:18:36.805 "qid": 0, 00:18:36.805 "state": "enabled", 00:18:36.805 "listen_address": { 00:18:36.805 "trtype": "TCP", 00:18:36.805 "adrfam": "IPv4", 00:18:36.805 "traddr": "10.0.0.2", 00:18:36.805 "trsvcid": "4420" 00:18:36.805 }, 00:18:36.805 "peer_address": { 00:18:36.805 "trtype": "TCP", 00:18:36.805 "adrfam": "IPv4", 00:18:36.805 "traddr": "10.0.0.1", 00:18:36.805 "trsvcid": "53624" 00:18:36.805 }, 00:18:36.805 "auth": { 00:18:36.805 "state": "completed", 00:18:36.805 "digest": "sha256", 00:18:36.805 "dhgroup": "ffdhe2048" 00:18:36.805 } 00:18:36.805 } 00:18:36.805 ]' 00:18:36.805 16:27:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:36.805 16:27:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:36.805 16:27:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:36.805 16:27:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:36.805 16:27:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:36.805 16:27:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:36.805 16:27:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:18:36.805 16:27:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:37.066 16:27:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:YWU2YWVkZGMwODI2MmU1YTFmNmYwYzAzMzg5YmJlOWY0NTgzNTkxYTkwOTg4NDEzo6OqJg==: --dhchap-ctrl-secret DHHC-1:01:ODNhNzljM2E2ZWZhNTY0MjBmODZkOTY2ODUxMDE0MjE3bkvD: 00:18:37.637 16:27:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:37.637 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:37.637 16:27:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:37.637 16:27:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:37.637 16:27:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:37.637 16:27:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:37.637 16:27:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:37.637 16:27:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:37.637 16:27:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:37.898 16:27:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 3 00:18:37.898 16:27:04 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:37.898 16:27:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:37.898 16:27:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:37.898 16:27:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:37.898 16:27:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:37.899 16:27:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:37.899 16:27:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:37.899 16:27:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:37.899 16:27:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:37.899 16:27:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:37.899 16:27:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:38.160 00:18:38.160 16:27:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:38.160 16:27:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:38.160 16:27:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:18:38.421 16:27:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:38.421 16:27:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:38.421 16:27:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:38.421 16:27:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:38.421 16:27:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:38.421 16:27:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:38.421 { 00:18:38.421 "cntlid": 15, 00:18:38.421 "qid": 0, 00:18:38.421 "state": "enabled", 00:18:38.421 "listen_address": { 00:18:38.421 "trtype": "TCP", 00:18:38.421 "adrfam": "IPv4", 00:18:38.421 "traddr": "10.0.0.2", 00:18:38.421 "trsvcid": "4420" 00:18:38.421 }, 00:18:38.421 "peer_address": { 00:18:38.421 "trtype": "TCP", 00:18:38.421 "adrfam": "IPv4", 00:18:38.421 "traddr": "10.0.0.1", 00:18:38.421 "trsvcid": "53650" 00:18:38.421 }, 00:18:38.421 "auth": { 00:18:38.421 "state": "completed", 00:18:38.421 "digest": "sha256", 00:18:38.421 "dhgroup": "ffdhe2048" 00:18:38.421 } 00:18:38.421 } 00:18:38.421 ]' 00:18:38.421 16:27:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:38.421 16:27:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:38.421 16:27:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:38.421 16:27:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:38.421 16:27:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:38.421 16:27:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:38.421 16:27:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 
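The trace above repeats one round trip per key index: register the host's DH-HMAC-CHAP key on the subsystem (`nvmf_subsystem_add_host`), attach a host-side controller with the same key (`bdev_nvme_attach_controller`), verify the negotiated auth parameters on the resulting qpair, then detach. A minimal standalone sketch of that cycle — with `rpc_cmd`/`hostrpc` stubbed out as `echo`, since the real calls need a live SPDK target and the `/var/tmp/host.sock` socket, and with the controller key always passed, whereas the real script adds `--dhchap-ctrlr-key` only when `ckeys[$keyid]` is defined:

```shell
#!/usr/bin/env bash
# Sketch of the per-key connect_authenticate cycle seen in the trace.
# NOTE: rpc_cmd/hostrpc are echo stubs; the real script sends these to
# scripts/rpc.py (target) and rpc.py -s /var/tmp/host.sock (host).
NQN="nqn.2024-03.io.spdk:cnode0"
HOSTNQN="nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be"

rpc_cmd() { echo "rpc: $*"; }       # stand-in for the target-side RPC
hostrpc() { echo "hostrpc: $*"; }   # stand-in for the host-side RPC

connect_authenticate() {
    local digest=$1 dhgroup=$2 keyid=$3
    # allow this host to authenticate with key$keyid (and controller key)
    rpc_cmd nvmf_subsystem_add_host "$NQN" "$HOSTNQN" \
        --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"
    # attach a host-side controller using the same key pair
    hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.2 -s 4420 -q "$HOSTNQN" -n "$NQN" \
        --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"
    # ...the real script now checks the qpair auth state via
    # nvmf_subsystem_get_qpairs + jq, then tears down:
    hostrpc bdev_nvme_detach_controller nvme0
}

connect_authenticate sha256 ffdhe2048 2
```

This mirrors steps @39, @40 and @49 of `target/auth.sh` as printed in the trace; it is an illustration of the control flow, not a runnable reproduction of the test.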
00:18:38.421 16:27:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:38.683 16:27:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:YzVmNDBlOWI5YmVmNjEzMzY3OWMwODRmYTM1Y2U4OTYzZjExZDhmZTBmOWYwNzdkZjQyZDVjYTE4ZDgwZTg4MdbDEV4=: 00:18:39.255 16:27:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:39.255 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:39.255 16:27:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:39.255 16:27:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:39.255 16:27:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:39.516 16:27:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:39.516 16:27:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:39.516 16:27:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:39.516 16:27:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:39.516 16:27:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:39.516 16:27:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 0 00:18:39.516 16:27:06 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:39.516 16:27:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:39.516 16:27:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:39.516 16:27:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:39.516 16:27:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:39.516 16:27:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:39.516 16:27:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:39.516 16:27:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:39.516 16:27:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:39.516 16:27:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:39.516 16:27:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:39.777 00:18:39.777 16:27:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:39.777 16:27:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:39.777 16:27:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:40.038 16:27:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:40.038 16:27:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:40.038 16:27:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:40.038 16:27:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:40.038 16:27:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:40.038 16:27:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:40.038 { 00:18:40.038 "cntlid": 17, 00:18:40.038 "qid": 0, 00:18:40.038 "state": "enabled", 00:18:40.039 "listen_address": { 00:18:40.039 "trtype": "TCP", 00:18:40.039 "adrfam": "IPv4", 00:18:40.039 "traddr": "10.0.0.2", 00:18:40.039 "trsvcid": "4420" 00:18:40.039 }, 00:18:40.039 "peer_address": { 00:18:40.039 "trtype": "TCP", 00:18:40.039 "adrfam": "IPv4", 00:18:40.039 "traddr": "10.0.0.1", 00:18:40.039 "trsvcid": "53682" 00:18:40.039 }, 00:18:40.039 "auth": { 00:18:40.039 "state": "completed", 00:18:40.039 "digest": "sha256", 00:18:40.039 "dhgroup": "ffdhe3072" 00:18:40.039 } 00:18:40.039 } 00:18:40.039 ]' 00:18:40.039 16:27:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:40.039 16:27:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:40.039 16:27:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:40.039 16:27:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:40.039 16:27:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:40.039 16:27:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:40.039 16:27:06 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:40.039 16:27:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:40.301 16:27:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:YWU0YTBiZThiMmUxZGZmODhjMDczNjgyM2Y2Y2IyNmJkZTBkNmI1YjM2MjQyZjBkgXnm2Q==: --dhchap-ctrl-secret DHHC-1:03:OTdhMTRiNWUwZjQwM2M1MmE5ZTI5NjhkNDg1ZDM3OTc1MjA3MTI3ZDBjMmNjOWVmN2ZmNzJjMGFjMDYxNGVlZI1Bu8c=: 00:18:41.244 16:27:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:41.244 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:41.244 16:27:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:41.244 16:27:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:41.244 16:27:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:41.244 16:27:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:41.244 16:27:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:41.244 16:27:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:41.244 16:27:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:41.244 16:27:07 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 1 00:18:41.244 16:27:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:41.244 16:27:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:41.244 16:27:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:41.244 16:27:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:41.244 16:27:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:41.244 16:27:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:41.244 16:27:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:41.244 16:27:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:41.244 16:27:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:41.244 16:27:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:41.244 16:27:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:41.505 00:18:41.505 16:27:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:41.505 16:27:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # 
jq -r '.[].name' 00:18:41.505 16:27:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:41.767 16:27:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:41.767 16:27:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:41.767 16:27:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:41.767 16:27:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:41.767 16:27:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:41.767 16:27:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:41.767 { 00:18:41.767 "cntlid": 19, 00:18:41.767 "qid": 0, 00:18:41.767 "state": "enabled", 00:18:41.767 "listen_address": { 00:18:41.767 "trtype": "TCP", 00:18:41.767 "adrfam": "IPv4", 00:18:41.767 "traddr": "10.0.0.2", 00:18:41.767 "trsvcid": "4420" 00:18:41.767 }, 00:18:41.767 "peer_address": { 00:18:41.767 "trtype": "TCP", 00:18:41.767 "adrfam": "IPv4", 00:18:41.767 "traddr": "10.0.0.1", 00:18:41.767 "trsvcid": "53714" 00:18:41.767 }, 00:18:41.767 "auth": { 00:18:41.767 "state": "completed", 00:18:41.767 "digest": "sha256", 00:18:41.767 "dhgroup": "ffdhe3072" 00:18:41.767 } 00:18:41.767 } 00:18:41.767 ]' 00:18:41.767 16:27:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:41.767 16:27:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:41.767 16:27:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:41.767 16:27:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:41.767 16:27:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:41.767 16:27:08 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:41.767 16:27:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:41.767 16:27:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:42.028 16:27:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:MTcwNmViMWRkN2Y0YjlhNjNkMTM4YzcxZWZlMWI2Nzn6I0Bv: --dhchap-ctrl-secret DHHC-1:02:YzIwYmI0NjFjZDQ0ZjE2M2MyZjdlMjYyZTY5ODMxY2QyYmQ2NzhjMTM1Y2I1Mjg1Hz6qOA==: 00:18:42.600 16:27:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:42.600 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:42.860 16:27:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:42.860 16:27:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:42.860 16:27:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:42.860 16:27:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:42.860 16:27:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:42.860 16:27:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:42.860 16:27:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 
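The `for dhgroup` / `for keyid` markers at auth.sh@92/@93 show the outer structure: the whole cycle is run for every key index under each DH group, and @94 reconfigures the host with `bdev_nvme_set_options` before each attempt so the negotiation can only succeed with exactly that digest/dhgroup. A standalone sketch of that matrix, assuming the fixed sha256 digest and the two groups exercised so far in this trace (the script's full lists may differ):

```shell
# Sketch of the nested loops at auth.sh@92-96. hostrpc is an echo stub;
# the real call goes through rpc.py -s /var/tmp/host.sock.
hostrpc() { echo "hostrpc: $*"; }

run_matrix() {
    local digest=sha256
    local dhgroups=(ffdhe2048 ffdhe3072)   # assumption: groups seen in this trace
    local keys=(key0 key1 key2 key3)       # assumption: four key slots, as in the log
    local dhgroup keyid
    for dhgroup in "${dhgroups[@]}"; do
        for keyid in "${!keys[@]}"; do
            # pin the host to a single digest/dhgroup before each handshake
            hostrpc bdev_nvme_set_options \
                --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
            echo "connect_authenticate $digest $dhgroup $keyid"
        done
    done
}

run_matrix
```

Two groups times four keys gives the eight handshakes this stretch of the log walks through, one `bdev_nvme_set_options` call before each.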
00:18:42.860 16:27:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 2 00:18:42.860 16:27:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:42.860 16:27:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:42.860 16:27:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:42.860 16:27:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:42.860 16:27:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:42.860 16:27:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:42.860 16:27:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:42.860 16:27:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:42.860 16:27:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:42.860 16:27:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:42.860 16:27:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:43.121 00:18:43.121 16:27:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:43.121 16:27:09 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # jq -r '.[].name' 00:18:43.121 16:27:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:43.381 16:27:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:43.381 16:27:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:43.381 16:27:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:43.381 16:27:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:43.381 16:27:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:43.381 16:27:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:43.381 { 00:18:43.381 "cntlid": 21, 00:18:43.381 "qid": 0, 00:18:43.381 "state": "enabled", 00:18:43.381 "listen_address": { 00:18:43.381 "trtype": "TCP", 00:18:43.381 "adrfam": "IPv4", 00:18:43.381 "traddr": "10.0.0.2", 00:18:43.381 "trsvcid": "4420" 00:18:43.381 }, 00:18:43.381 "peer_address": { 00:18:43.381 "trtype": "TCP", 00:18:43.381 "adrfam": "IPv4", 00:18:43.381 "traddr": "10.0.0.1", 00:18:43.381 "trsvcid": "53738" 00:18:43.381 }, 00:18:43.381 "auth": { 00:18:43.381 "state": "completed", 00:18:43.381 "digest": "sha256", 00:18:43.381 "dhgroup": "ffdhe3072" 00:18:43.381 } 00:18:43.381 } 00:18:43.381 ]' 00:18:43.381 16:27:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:43.381 16:27:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:43.381 16:27:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:43.381 16:27:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:43.381 16:27:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:43.381 16:27:10 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:43.381 16:27:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:43.381 16:27:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:43.642 16:27:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:YWU2YWVkZGMwODI2MmU1YTFmNmYwYzAzMzg5YmJlOWY0NTgzNTkxYTkwOTg4NDEzo6OqJg==: --dhchap-ctrl-secret DHHC-1:01:ODNhNzljM2E2ZWZhNTY0MjBmODZkOTY2ODUxMDE0MjE3bkvD: 00:18:44.584 16:27:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:44.584 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:44.584 16:27:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:44.584 16:27:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:44.584 16:27:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:44.584 16:27:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:44.584 16:27:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:44.584 16:27:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:44.584 16:27:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe3072 00:18:44.584 16:27:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 3 00:18:44.584 16:27:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:44.584 16:27:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:44.584 16:27:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:44.584 16:27:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:44.584 16:27:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:44.584 16:27:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:44.584 16:27:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:44.584 16:27:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:44.584 16:27:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:44.585 16:27:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:44.585 16:27:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:44.845 00:18:44.845 16:27:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:44.845 16:27:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 
00:18:44.845 16:27:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:44.845 16:27:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:44.845 16:27:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:44.845 16:27:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:44.845 16:27:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:45.105 16:27:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:45.105 16:27:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:45.105 { 00:18:45.105 "cntlid": 23, 00:18:45.105 "qid": 0, 00:18:45.105 "state": "enabled", 00:18:45.105 "listen_address": { 00:18:45.105 "trtype": "TCP", 00:18:45.105 "adrfam": "IPv4", 00:18:45.105 "traddr": "10.0.0.2", 00:18:45.105 "trsvcid": "4420" 00:18:45.105 }, 00:18:45.105 "peer_address": { 00:18:45.105 "trtype": "TCP", 00:18:45.105 "adrfam": "IPv4", 00:18:45.105 "traddr": "10.0.0.1", 00:18:45.105 "trsvcid": "53760" 00:18:45.105 }, 00:18:45.105 "auth": { 00:18:45.105 "state": "completed", 00:18:45.105 "digest": "sha256", 00:18:45.105 "dhgroup": "ffdhe3072" 00:18:45.105 } 00:18:45.105 } 00:18:45.105 ]' 00:18:45.105 16:27:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:45.105 16:27:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:45.105 16:27:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:45.105 16:27:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:45.105 16:27:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:45.105 16:27:11 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:45.105 16:27:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:45.105 16:27:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:45.365 16:27:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:YzVmNDBlOWI5YmVmNjEzMzY3OWMwODRmYTM1Y2U4OTYzZjExZDhmZTBmOWYwNzdkZjQyZDVjYTE4ZDgwZTg4MdbDEV4=: 00:18:45.936 16:27:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:45.936 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:45.936 16:27:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:45.936 16:27:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:45.936 16:27:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:45.936 16:27:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:45.936 16:27:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:45.936 16:27:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:45.936 16:27:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:45.936 16:27:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options 
--dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:46.196 16:27:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 0 00:18:46.196 16:27:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:46.196 16:27:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:46.196 16:27:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:46.196 16:27:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:46.196 16:27:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:46.196 16:27:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:46.196 16:27:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:46.196 16:27:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:46.196 16:27:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:46.196 16:27:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:46.196 16:27:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:46.457 00:18:46.457 16:27:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 
00:18:46.457 16:27:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:46.457 16:27:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:46.457 16:27:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:46.457 16:27:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:46.457 16:27:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:46.457 16:27:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:46.457 16:27:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:46.457 16:27:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:46.457 { 00:18:46.457 "cntlid": 25, 00:18:46.457 "qid": 0, 00:18:46.457 "state": "enabled", 00:18:46.457 "listen_address": { 00:18:46.457 "trtype": "TCP", 00:18:46.457 "adrfam": "IPv4", 00:18:46.457 "traddr": "10.0.0.2", 00:18:46.457 "trsvcid": "4420" 00:18:46.457 }, 00:18:46.457 "peer_address": { 00:18:46.457 "trtype": "TCP", 00:18:46.457 "adrfam": "IPv4", 00:18:46.457 "traddr": "10.0.0.1", 00:18:46.457 "trsvcid": "54922" 00:18:46.457 }, 00:18:46.457 "auth": { 00:18:46.457 "state": "completed", 00:18:46.457 "digest": "sha256", 00:18:46.457 "dhgroup": "ffdhe4096" 00:18:46.457 } 00:18:46.457 } 00:18:46.457 ]' 00:18:46.457 16:27:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:46.457 16:27:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:46.717 16:27:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:46.717 16:27:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:46.717 16:27:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # 
jq -r '.[0].auth.state' 00:18:46.717 16:27:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:46.717 16:27:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:46.717 16:27:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:46.717 16:27:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:YWU0YTBiZThiMmUxZGZmODhjMDczNjgyM2Y2Y2IyNmJkZTBkNmI1YjM2MjQyZjBkgXnm2Q==: --dhchap-ctrl-secret DHHC-1:03:OTdhMTRiNWUwZjQwM2M1MmE5ZTI5NjhkNDg1ZDM3OTc1MjA3MTI3ZDBjMmNjOWVmN2ZmNzJjMGFjMDYxNGVlZI1Bu8c=: 00:18:47.697 16:27:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:47.697 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:47.697 16:27:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:47.697 16:27:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:47.697 16:27:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:47.697 16:27:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:47.697 16:27:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:47.697 16:27:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:47.697 16:27:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:47.697 16:27:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 1 00:18:47.697 16:27:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:47.697 16:27:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:47.697 16:27:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:47.697 16:27:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:47.697 16:27:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:47.697 16:27:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:47.697 16:27:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:47.697 16:27:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:47.697 16:27:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:47.697 16:27:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:47.697 16:27:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:47.958 
00:18:47.958 16:27:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:47.958 16:27:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:47.958 16:27:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:48.218 16:27:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:48.218 16:27:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:48.218 16:27:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:48.218 16:27:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:48.218 16:27:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:48.218 16:27:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:48.218 { 00:18:48.218 "cntlid": 27, 00:18:48.218 "qid": 0, 00:18:48.218 "state": "enabled", 00:18:48.218 "listen_address": { 00:18:48.218 "trtype": "TCP", 00:18:48.218 "adrfam": "IPv4", 00:18:48.218 "traddr": "10.0.0.2", 00:18:48.218 "trsvcid": "4420" 00:18:48.218 }, 00:18:48.218 "peer_address": { 00:18:48.218 "trtype": "TCP", 00:18:48.218 "adrfam": "IPv4", 00:18:48.218 "traddr": "10.0.0.1", 00:18:48.218 "trsvcid": "54946" 00:18:48.218 }, 00:18:48.218 "auth": { 00:18:48.218 "state": "completed", 00:18:48.218 "digest": "sha256", 00:18:48.218 "dhgroup": "ffdhe4096" 00:18:48.218 } 00:18:48.218 } 00:18:48.218 ]' 00:18:48.218 16:27:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:48.218 16:27:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:48.218 16:27:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:48.218 16:27:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ 
ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:48.218 16:27:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:48.218 16:27:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:48.218 16:27:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:48.218 16:27:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:48.478 16:27:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:MTcwNmViMWRkN2Y0YjlhNjNkMTM4YzcxZWZlMWI2Nzn6I0Bv: --dhchap-ctrl-secret DHHC-1:02:YzIwYmI0NjFjZDQ0ZjE2M2MyZjdlMjYyZTY5ODMxY2QyYmQ2NzhjMTM1Y2I1Mjg1Hz6qOA==: 00:18:49.420 16:27:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:49.420 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:49.420 16:27:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:49.420 16:27:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:49.420 16:27:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:49.420 16:27:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:49.420 16:27:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:49.420 16:27:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:49.420 16:27:15 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:49.420 16:27:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 2 00:18:49.420 16:27:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:49.420 16:27:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:49.420 16:27:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:49.420 16:27:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:49.420 16:27:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:49.420 16:27:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:49.420 16:27:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:49.420 16:27:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:49.420 16:27:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:49.420 16:27:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:49.420 16:27:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 
--dhchap-ctrlr-key ckey2 00:18:49.681 00:18:49.681 16:27:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:49.681 16:27:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:49.681 16:27:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:49.681 16:27:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:49.681 16:27:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:49.681 16:27:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:49.681 16:27:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:49.941 16:27:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:49.941 16:27:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:49.941 { 00:18:49.941 "cntlid": 29, 00:18:49.941 "qid": 0, 00:18:49.941 "state": "enabled", 00:18:49.941 "listen_address": { 00:18:49.942 "trtype": "TCP", 00:18:49.942 "adrfam": "IPv4", 00:18:49.942 "traddr": "10.0.0.2", 00:18:49.942 "trsvcid": "4420" 00:18:49.942 }, 00:18:49.942 "peer_address": { 00:18:49.942 "trtype": "TCP", 00:18:49.942 "adrfam": "IPv4", 00:18:49.942 "traddr": "10.0.0.1", 00:18:49.942 "trsvcid": "54972" 00:18:49.942 }, 00:18:49.942 "auth": { 00:18:49.942 "state": "completed", 00:18:49.942 "digest": "sha256", 00:18:49.942 "dhgroup": "ffdhe4096" 00:18:49.942 } 00:18:49.942 } 00:18:49.942 ]' 00:18:49.942 16:27:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:49.942 16:27:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:49.942 16:27:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:49.942 16:27:16 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:49.942 16:27:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:49.942 16:27:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:49.942 16:27:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:49.942 16:27:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:50.202 16:27:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:YWU2YWVkZGMwODI2MmU1YTFmNmYwYzAzMzg5YmJlOWY0NTgzNTkxYTkwOTg4NDEzo6OqJg==: --dhchap-ctrl-secret DHHC-1:01:ODNhNzljM2E2ZWZhNTY0MjBmODZkOTY2ODUxMDE0MjE3bkvD: 00:18:50.773 16:27:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:50.773 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:50.773 16:27:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:50.773 16:27:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:50.773 16:27:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:50.773 16:27:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:50.773 16:27:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:50.773 16:27:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 
00:18:50.773 16:27:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:51.033 16:27:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 3 00:18:51.033 16:27:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:51.033 16:27:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:51.033 16:27:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:51.033 16:27:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:51.033 16:27:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:51.033 16:27:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:51.033 16:27:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:51.033 16:27:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:51.033 16:27:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:51.033 16:27:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:51.033 16:27:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:51.294 
00:18:51.294 16:27:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:51.294 16:27:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:51.294 16:27:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:51.294 16:27:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:51.294 16:27:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:51.294 16:27:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:51.294 16:27:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:51.294 16:27:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:51.294 16:27:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:51.294 { 00:18:51.294 "cntlid": 31, 00:18:51.294 "qid": 0, 00:18:51.294 "state": "enabled", 00:18:51.294 "listen_address": { 00:18:51.294 "trtype": "TCP", 00:18:51.294 "adrfam": "IPv4", 00:18:51.294 "traddr": "10.0.0.2", 00:18:51.294 "trsvcid": "4420" 00:18:51.294 }, 00:18:51.294 "peer_address": { 00:18:51.294 "trtype": "TCP", 00:18:51.294 "adrfam": "IPv4", 00:18:51.294 "traddr": "10.0.0.1", 00:18:51.294 "trsvcid": "55006" 00:18:51.294 }, 00:18:51.294 "auth": { 00:18:51.294 "state": "completed", 00:18:51.294 "digest": "sha256", 00:18:51.294 "dhgroup": "ffdhe4096" 00:18:51.294 } 00:18:51.294 } 00:18:51.294 ]' 00:18:51.294 16:27:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:51.555 16:27:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:51.555 16:27:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:51.555 16:27:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ 
ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:51.555 16:27:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:51.555 16:27:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:51.555 16:27:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:51.555 16:27:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:51.555 16:27:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:YzVmNDBlOWI5YmVmNjEzMzY3OWMwODRmYTM1Y2U4OTYzZjExZDhmZTBmOWYwNzdkZjQyZDVjYTE4ZDgwZTg4MdbDEV4=: 00:18:52.496 16:27:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:52.496 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:52.496 16:27:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:52.496 16:27:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:52.496 16:27:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:52.496 16:27:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:52.496 16:27:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:52.496 16:27:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:52.496 16:27:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 
00:18:52.496 16:27:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:52.757 16:27:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 0 00:18:52.757 16:27:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:52.757 16:27:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:52.757 16:27:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:52.757 16:27:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:52.757 16:27:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:52.757 16:27:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:52.757 16:27:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:52.757 16:27:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:52.757 16:27:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:52.757 16:27:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:52.757 16:27:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:53.018 00:18:53.018 16:27:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:53.018 16:27:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:53.018 16:27:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:53.278 16:27:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:53.278 16:27:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:53.278 16:27:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:53.278 16:27:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:53.278 16:27:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:53.278 16:27:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:53.278 { 00:18:53.278 "cntlid": 33, 00:18:53.278 "qid": 0, 00:18:53.278 "state": "enabled", 00:18:53.278 "listen_address": { 00:18:53.278 "trtype": "TCP", 00:18:53.278 "adrfam": "IPv4", 00:18:53.278 "traddr": "10.0.0.2", 00:18:53.278 "trsvcid": "4420" 00:18:53.278 }, 00:18:53.278 "peer_address": { 00:18:53.278 "trtype": "TCP", 00:18:53.278 "adrfam": "IPv4", 00:18:53.278 "traddr": "10.0.0.1", 00:18:53.278 "trsvcid": "55026" 00:18:53.278 }, 00:18:53.278 "auth": { 00:18:53.278 "state": "completed", 00:18:53.278 "digest": "sha256", 00:18:53.278 "dhgroup": "ffdhe6144" 00:18:53.278 } 00:18:53.278 } 00:18:53.278 ]' 00:18:53.278 16:27:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:53.278 16:27:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:53.278 16:27:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r 
'.[0].auth.dhgroup' 00:18:53.278 16:27:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:53.278 16:27:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:53.278 16:27:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:53.278 16:27:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:53.278 16:27:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:53.539 16:27:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:YWU0YTBiZThiMmUxZGZmODhjMDczNjgyM2Y2Y2IyNmJkZTBkNmI1YjM2MjQyZjBkgXnm2Q==: --dhchap-ctrl-secret DHHC-1:03:OTdhMTRiNWUwZjQwM2M1MmE5ZTI5NjhkNDg1ZDM3OTc1MjA3MTI3ZDBjMmNjOWVmN2ZmNzJjMGFjMDYxNGVlZI1Bu8c=: 00:18:54.110 16:27:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:54.110 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:54.110 16:27:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:54.110 16:27:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:54.110 16:27:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:54.110 16:27:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:54.110 16:27:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:54.110 16:27:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- 
# hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:54.110 16:27:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:54.372 16:27:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 1 00:18:54.372 16:27:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:54.372 16:27:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:54.372 16:27:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:54.372 16:27:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:54.372 16:27:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:54.372 16:27:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:54.372 16:27:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:54.372 16:27:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:54.372 16:27:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:54.372 16:27:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:54.372 16:27:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 
-q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:54.632 00:18:54.632 16:27:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:54.632 16:27:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:54.632 16:27:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:54.893 16:27:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:54.893 16:27:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:54.893 16:27:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:54.893 16:27:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:54.893 16:27:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:54.893 16:27:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:54.893 { 00:18:54.893 "cntlid": 35, 00:18:54.893 "qid": 0, 00:18:54.893 "state": "enabled", 00:18:54.893 "listen_address": { 00:18:54.893 "trtype": "TCP", 00:18:54.893 "adrfam": "IPv4", 00:18:54.893 "traddr": "10.0.0.2", 00:18:54.893 "trsvcid": "4420" 00:18:54.893 }, 00:18:54.893 "peer_address": { 00:18:54.893 "trtype": "TCP", 00:18:54.893 "adrfam": "IPv4", 00:18:54.893 "traddr": "10.0.0.1", 00:18:54.893 "trsvcid": "55044" 00:18:54.893 }, 00:18:54.893 "auth": { 00:18:54.893 "state": "completed", 00:18:54.893 "digest": "sha256", 00:18:54.893 "dhgroup": "ffdhe6144" 00:18:54.893 } 00:18:54.893 } 00:18:54.893 ]' 00:18:54.893 16:27:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:54.893 16:27:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:54.893 
16:27:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:54.893 16:27:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:54.893 16:27:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:55.154 16:27:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:55.154 16:27:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:55.154 16:27:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:55.154 16:27:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:MTcwNmViMWRkN2Y0YjlhNjNkMTM4YzcxZWZlMWI2Nzn6I0Bv: --dhchap-ctrl-secret DHHC-1:02:YzIwYmI0NjFjZDQ0ZjE2M2MyZjdlMjYyZTY5ODMxY2QyYmQ2NzhjMTM1Y2I1Mjg1Hz6qOA==: 00:18:56.095 16:27:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:56.095 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:56.095 16:27:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:56.095 16:27:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:56.095 16:27:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:56.095 16:27:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:56.095 16:27:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:56.095 16:27:22 nvmf_tcp.nvmf_auth_target 
-- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:56.095 16:27:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:56.095 16:27:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 2 00:18:56.095 16:27:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:56.095 16:27:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:56.095 16:27:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:56.095 16:27:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:56.095 16:27:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:56.095 16:27:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:56.095 16:27:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:56.095 16:27:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:56.095 16:27:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:56.095 16:27:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:56.095 16:27:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:56.355 00:18:56.614 16:27:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:56.615 16:27:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:56.615 16:27:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:56.615 16:27:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:56.615 16:27:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:56.615 16:27:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:56.615 16:27:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:56.615 16:27:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:56.615 16:27:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:56.615 { 00:18:56.615 "cntlid": 37, 00:18:56.615 "qid": 0, 00:18:56.615 "state": "enabled", 00:18:56.615 "listen_address": { 00:18:56.615 "trtype": "TCP", 00:18:56.615 "adrfam": "IPv4", 00:18:56.615 "traddr": "10.0.0.2", 00:18:56.615 "trsvcid": "4420" 00:18:56.615 }, 00:18:56.615 "peer_address": { 00:18:56.615 "trtype": "TCP", 00:18:56.615 "adrfam": "IPv4", 00:18:56.615 "traddr": "10.0.0.1", 00:18:56.615 "trsvcid": "58956" 00:18:56.615 }, 00:18:56.615 "auth": { 00:18:56.615 "state": "completed", 00:18:56.615 "digest": "sha256", 00:18:56.615 "dhgroup": "ffdhe6144" 00:18:56.615 } 00:18:56.615 } 00:18:56.615 ]' 00:18:56.615 16:27:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:56.615 16:27:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == 
\s\h\a\2\5\6 ]] 00:18:56.615 16:27:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:56.875 16:27:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:56.875 16:27:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:56.875 16:27:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:56.875 16:27:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:56.875 16:27:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:56.875 16:27:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:YWU2YWVkZGMwODI2MmU1YTFmNmYwYzAzMzg5YmJlOWY0NTgzNTkxYTkwOTg4NDEzo6OqJg==: --dhchap-ctrl-secret DHHC-1:01:ODNhNzljM2E2ZWZhNTY0MjBmODZkOTY2ODUxMDE0MjE3bkvD: 00:18:57.817 16:27:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:57.817 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:57.817 16:27:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:57.817 16:27:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:57.817 16:27:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:57.817 16:27:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:57.817 16:27:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:57.817 
16:27:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:57.817 16:27:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:57.817 16:27:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 3 00:18:57.817 16:27:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:57.817 16:27:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:57.817 16:27:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:57.817 16:27:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:57.817 16:27:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:57.817 16:27:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:57.817 16:27:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:57.817 16:27:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:57.817 16:27:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:57.817 16:27:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:57.817 16:27:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:58.387 00:18:58.387 16:27:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:58.387 16:27:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:58.387 16:27:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:58.387 16:27:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:58.387 16:27:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:58.387 16:27:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:58.387 16:27:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:58.387 16:27:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:58.387 16:27:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:58.387 { 00:18:58.387 "cntlid": 39, 00:18:58.387 "qid": 0, 00:18:58.387 "state": "enabled", 00:18:58.387 "listen_address": { 00:18:58.387 "trtype": "TCP", 00:18:58.387 "adrfam": "IPv4", 00:18:58.387 "traddr": "10.0.0.2", 00:18:58.387 "trsvcid": "4420" 00:18:58.387 }, 00:18:58.387 "peer_address": { 00:18:58.387 "trtype": "TCP", 00:18:58.387 "adrfam": "IPv4", 00:18:58.387 "traddr": "10.0.0.1", 00:18:58.387 "trsvcid": "58978" 00:18:58.387 }, 00:18:58.387 "auth": { 00:18:58.387 "state": "completed", 00:18:58.387 "digest": "sha256", 00:18:58.387 "dhgroup": "ffdhe6144" 00:18:58.387 } 00:18:58.387 } 00:18:58.387 ]' 00:18:58.387 16:27:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:58.387 16:27:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:58.387 16:27:25 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:58.647 16:27:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:58.647 16:27:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:58.647 16:27:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:58.647 16:27:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:58.647 16:27:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:58.647 16:27:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:YzVmNDBlOWI5YmVmNjEzMzY3OWMwODRmYTM1Y2U4OTYzZjExZDhmZTBmOWYwNzdkZjQyZDVjYTE4ZDgwZTg4MdbDEV4=: 00:18:59.588 16:27:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:59.588 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:59.588 16:27:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:59.588 16:27:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:59.588 16:27:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:59.588 16:27:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:59.588 16:27:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:59.588 16:27:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:59.588 
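The `--dhchap-secret` strings traded in the `nvme connect` lines above follow the NVMe in-band authentication secret representation, `DHHC-1:<t>:<base64 payload>:`, where the middle field selects the optional HMAC transform (`01` here, SHA-256, a 32-byte secret) and the base64 payload is the secret bytes followed by a 4-byte CRC-32 trailer. A minimal sketch decoding one of this log's secrets (assumptions: POSIX shell with coreutils `base64`; the layout is the standard DH-HMAC-CHAP secret format, not anything SPDK-specific):

```shell
#!/usr/bin/env sh
# Decode one --dhchap-secret taken verbatim from the log above.
# Assumed layout: "DHHC-1:<t>:<base64(secret || CRC32(secret))>:"
secret='DHHC-1:01:MTcwNmViMWRkN2Y0YjlhNjNkMTM4YzcxZWZlMWI2Nzn6I0Bv:'

# The third colon-separated field is the base64 payload
payload=$(printf '%s' "$secret" | cut -d: -f3)

# 48 base64 characters decode to 36 bytes: 32 secret bytes + 4 CRC bytes
nbytes=$(printf '%s' "$payload" | base64 -d | wc -c | tr -d ' ')
echo "decoded payload bytes: $nbytes"
```

The 36-byte payload is what lets both ends sanity-check a pasted secret before attempting authentication: strip the 4-byte trailer, recompute CRC-32 over the rest, and compare.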
16:27:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:59.588 16:27:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:59.588 16:27:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 0 00:18:59.588 16:27:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:59.588 16:27:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:59.588 16:27:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:59.588 16:27:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:59.588 16:27:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:59.588 16:27:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:59.588 16:27:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:59.588 16:27:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:59.588 16:27:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:59.588 16:27:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:59.588 16:27:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:00.160 00:19:00.160 16:27:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:00.160 16:27:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:00.160 16:27:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:00.420 16:27:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:00.420 16:27:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:00.420 16:27:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:00.420 16:27:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:00.420 16:27:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:00.420 16:27:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:00.420 { 00:19:00.420 "cntlid": 41, 00:19:00.420 "qid": 0, 00:19:00.420 "state": "enabled", 00:19:00.420 "listen_address": { 00:19:00.420 "trtype": "TCP", 00:19:00.420 "adrfam": "IPv4", 00:19:00.420 "traddr": "10.0.0.2", 00:19:00.420 "trsvcid": "4420" 00:19:00.420 }, 00:19:00.420 "peer_address": { 00:19:00.420 "trtype": "TCP", 00:19:00.420 "adrfam": "IPv4", 00:19:00.420 "traddr": "10.0.0.1", 00:19:00.420 "trsvcid": "59008" 00:19:00.420 }, 00:19:00.420 "auth": { 00:19:00.420 "state": "completed", 00:19:00.420 "digest": "sha256", 00:19:00.420 "dhgroup": "ffdhe8192" 00:19:00.420 } 00:19:00.420 } 00:19:00.420 ]' 00:19:00.420 16:27:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:00.420 16:27:27 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:00.420 16:27:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:00.420 16:27:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:00.420 16:27:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:00.420 16:27:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:00.420 16:27:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:00.420 16:27:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:00.680 16:27:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:YWU0YTBiZThiMmUxZGZmODhjMDczNjgyM2Y2Y2IyNmJkZTBkNmI1YjM2MjQyZjBkgXnm2Q==: --dhchap-ctrl-secret DHHC-1:03:OTdhMTRiNWUwZjQwM2M1MmE5ZTI5NjhkNDg1ZDM3OTc1MjA3MTI3ZDBjMmNjOWVmN2ZmNzJjMGFjMDYxNGVlZI1Bu8c=: 00:19:01.251 16:27:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:01.251 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:01.251 16:27:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:01.251 16:27:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:01.251 16:27:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:01.251 16:27:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:01.251 16:27:28 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:01.251 16:27:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:01.252 16:27:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:01.534 16:27:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 1 00:19:01.534 16:27:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:01.534 16:27:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:01.534 16:27:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:01.534 16:27:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:01.534 16:27:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:01.534 16:27:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:01.534 16:27:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:01.534 16:27:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:01.534 16:27:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:01.534 16:27:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:01.535 16:27:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:02.106 00:19:02.106 16:27:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:02.106 16:27:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:02.106 16:27:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:02.106 16:27:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:02.106 16:27:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:02.106 16:27:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:02.106 16:27:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:02.106 16:27:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:02.106 16:27:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:02.106 { 00:19:02.106 "cntlid": 43, 00:19:02.106 "qid": 0, 00:19:02.106 "state": "enabled", 00:19:02.106 "listen_address": { 00:19:02.106 "trtype": "TCP", 00:19:02.106 "adrfam": "IPv4", 00:19:02.106 "traddr": "10.0.0.2", 00:19:02.106 "trsvcid": "4420" 00:19:02.106 }, 00:19:02.106 "peer_address": { 00:19:02.106 "trtype": "TCP", 00:19:02.106 "adrfam": "IPv4", 00:19:02.106 "traddr": "10.0.0.1", 00:19:02.106 "trsvcid": "59046" 00:19:02.106 }, 00:19:02.106 "auth": { 00:19:02.106 "state": "completed", 00:19:02.106 "digest": "sha256", 00:19:02.106 "dhgroup": "ffdhe8192" 00:19:02.106 } 00:19:02.106 } 00:19:02.106 ]' 00:19:02.106 16:27:28 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:02.406 16:27:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:02.406 16:27:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:02.406 16:27:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:02.406 16:27:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:02.406 16:27:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:02.406 16:27:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:02.406 16:27:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:02.406 16:27:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:MTcwNmViMWRkN2Y0YjlhNjNkMTM4YzcxZWZlMWI2Nzn6I0Bv: --dhchap-ctrl-secret DHHC-1:02:YzIwYmI0NjFjZDQ0ZjE2M2MyZjdlMjYyZTY5ODMxY2QyYmQ2NzhjMTM1Y2I1Mjg1Hz6qOA==: 00:19:03.348 16:27:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:03.348 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:03.348 16:27:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:03.348 16:27:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:03.348 16:27:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:03.348 16:27:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- 
# [[ 0 == 0 ]] 00:19:03.348 16:27:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:03.348 16:27:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:03.348 16:27:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:03.348 16:27:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 2 00:19:03.348 16:27:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:03.348 16:27:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:03.348 16:27:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:03.348 16:27:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:03.348 16:27:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:03.348 16:27:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:03.348 16:27:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:03.348 16:27:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:03.348 16:27:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:03.348 16:27:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:03.348 16:27:30 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:03.919 00:19:03.919 16:27:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:03.919 16:27:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:03.919 16:27:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:04.180 16:27:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:04.180 16:27:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:04.180 16:27:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:04.180 16:27:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:04.180 16:27:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:04.180 16:27:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:04.180 { 00:19:04.180 "cntlid": 45, 00:19:04.180 "qid": 0, 00:19:04.180 "state": "enabled", 00:19:04.180 "listen_address": { 00:19:04.180 "trtype": "TCP", 00:19:04.180 "adrfam": "IPv4", 00:19:04.180 "traddr": "10.0.0.2", 00:19:04.180 "trsvcid": "4420" 00:19:04.180 }, 00:19:04.180 "peer_address": { 00:19:04.180 "trtype": "TCP", 00:19:04.180 "adrfam": "IPv4", 00:19:04.180 "traddr": "10.0.0.1", 00:19:04.180 "trsvcid": "59076" 00:19:04.180 }, 00:19:04.180 "auth": { 00:19:04.180 "state": "completed", 00:19:04.180 "digest": "sha256", 00:19:04.180 "dhgroup": "ffdhe8192" 00:19:04.180 } 00:19:04.180 } 00:19:04.180 ]' 
00:19:04.180 16:27:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:19:04.180 16:27:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:19:04.180 16:27:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:19:04.180 16:27:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:19:04.180 16:27:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:19:04.180 16:27:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:19:04.180 16:27:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:19:04.180 16:27:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:19:04.441 16:27:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:YWU2YWVkZGMwODI2MmU1YTFmNmYwYzAzMzg5YmJlOWY0NTgzNTkxYTkwOTg4NDEzo6OqJg==: --dhchap-ctrl-secret DHHC-1:01:ODNhNzljM2E2ZWZhNTY0MjBmODZkOTY2ODUxMDE0MjE3bkvD:
00:19:05.013 16:27:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:19:05.013 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:19:05.013 16:27:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:19:05.013 16:27:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable
00:19:05.013 16:27:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:05.275 16:27:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:19:05.275 16:27:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:19:05.275 16:27:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:19:05.275 16:27:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:19:05.275 16:27:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 3
00:19:05.275 16:27:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:19:05.275 16:27:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256
00:19:05.275 16:27:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192
00:19:05.275 16:27:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3
00:19:05.275 16:27:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:19:05.275 16:27:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3
00:19:05.275 16:27:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable
00:19:05.275 16:27:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:05.275 16:27:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:19:05.275 16:27:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:19:05.275 16:27:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:19:05.847
00:19:05.847 16:27:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:19:05.847 16:27:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:19:05.847 16:27:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:19:06.107 16:27:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:06.107 16:27:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:19:06.107 16:27:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable
00:19:06.107 16:27:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:06.107 16:27:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:19:06.107 16:27:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:19:06.107 {
00:19:06.107 "cntlid": 47,
00:19:06.107 "qid": 0,
00:19:06.107 "state": "enabled",
00:19:06.107 "listen_address": {
00:19:06.107 "trtype": "TCP",
00:19:06.107 "adrfam": "IPv4",
00:19:06.107 "traddr": "10.0.0.2",
00:19:06.107 "trsvcid": "4420"
00:19:06.107 },
00:19:06.107 "peer_address": {
00:19:06.107 "trtype": "TCP",
00:19:06.107 "adrfam": "IPv4",
00:19:06.107 "traddr": "10.0.0.1",
00:19:06.107 "trsvcid": "59092"
00:19:06.107 },
00:19:06.107 "auth": {
00:19:06.107 "state": "completed",
00:19:06.107 "digest": "sha256",
00:19:06.107 "dhgroup": "ffdhe8192"
00:19:06.107 }
00:19:06.107 }
00:19:06.107 ]'
00:19:06.107 16:27:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:19:06.107 16:27:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:19:06.108 16:27:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:19:06.108 16:27:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:19:06.108 16:27:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:19:06.108 16:27:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:19:06.108 16:27:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:19:06.108 16:27:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:19:06.368 16:27:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:YzVmNDBlOWI5YmVmNjEzMzY3OWMwODRmYTM1Y2U4OTYzZjExZDhmZTBmOWYwNzdkZjQyZDVjYTE4ZDgwZTg4MdbDEV4=:
00:19:07.309 16:27:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:19:07.310 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:19:07.310 16:27:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:19:07.310 16:27:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable
00:19:07.310 16:27:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:07.310 16:27:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:19:07.310 16:27:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}"
00:19:07.310 16:27:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}"
00:19:07.310 16:27:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:19:07.310 16:27:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null
00:19:07.310 16:27:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null
00:19:07.310 16:27:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 0
00:19:07.310 16:27:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:19:07.310 16:27:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384
00:19:07.310 16:27:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null
00:19:07.310 16:27:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0
00:19:07.310 16:27:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:19:07.310 16:27:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:19:07.310 16:27:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable
00:19:07.310 16:27:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:07.310 16:27:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:19:07.310 16:27:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:19:07.310 16:27:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:19:07.571
00:19:07.571 16:27:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:19:07.571 16:27:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:19:07.571 16:27:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:19:07.571 16:27:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:07.571 16:27:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:19:07.571 16:27:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable
00:19:07.571 16:27:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:07.571 16:27:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:19:07.571 16:27:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:19:07.571 {
00:19:07.571 "cntlid": 49,
00:19:07.571 "qid": 0,
00:19:07.571 "state": "enabled",
00:19:07.571 "listen_address": {
00:19:07.571 "trtype": "TCP",
00:19:07.571 "adrfam": "IPv4",
00:19:07.571 "traddr": "10.0.0.2",
00:19:07.571 "trsvcid": "4420"
00:19:07.571 },
00:19:07.571 "peer_address": {
00:19:07.571 "trtype": "TCP",
00:19:07.571 "adrfam": "IPv4",
00:19:07.571 "traddr": "10.0.0.1",
00:19:07.571 "trsvcid": "46608"
00:19:07.571 },
00:19:07.571 "auth": {
00:19:07.571 "state": "completed",
00:19:07.571 "digest": "sha384",
00:19:07.571 "dhgroup": "null"
00:19:07.571 }
00:19:07.571 }
00:19:07.571 ]'
00:19:07.571 16:27:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:19:07.832 16:27:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:19:07.833 16:27:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:19:07.833 16:27:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]]
00:19:07.833 16:27:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:19:07.833 16:27:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:19:07.833 16:27:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:19:07.833 16:27:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:19:08.095 16:27:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:YWU0YTBiZThiMmUxZGZmODhjMDczNjgyM2Y2Y2IyNmJkZTBkNmI1YjM2MjQyZjBkgXnm2Q==: --dhchap-ctrl-secret DHHC-1:03:OTdhMTRiNWUwZjQwM2M1MmE5ZTI5NjhkNDg1ZDM3OTc1MjA3MTI3ZDBjMmNjOWVmN2ZmNzJjMGFjMDYxNGVlZI1Bu8c=:
00:19:08.667 16:27:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:19:08.667 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:19:08.667 16:27:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:19:08.667 16:27:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable
00:19:08.667 16:27:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:08.667 16:27:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:19:08.667 16:27:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:19:08.667 16:27:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null
00:19:08.667 16:27:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null
00:19:08.927 16:27:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 1
00:19:08.927 16:27:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:19:08.927 16:27:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384
00:19:08.927 16:27:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null
00:19:08.927 16:27:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1
00:19:08.927 16:27:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:19:08.927 16:27:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:19:08.927 16:27:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable
00:19:08.927 16:27:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:08.927 16:27:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:19:08.927 16:27:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:19:08.927 16:27:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:19:09.188
00:19:09.188 16:27:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:19:09.188 16:27:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:19:09.189 16:27:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:19:09.451 16:27:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:09.451 16:27:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:19:09.451 16:27:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable
00:19:09.451 16:27:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:09.451 16:27:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:19:09.451 16:27:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:19:09.451 {
00:19:09.451 "cntlid": 51,
00:19:09.451 "qid": 0,
00:19:09.451 "state": "enabled",
00:19:09.451 "listen_address": {
00:19:09.451 "trtype": "TCP",
00:19:09.451 "adrfam": "IPv4",
00:19:09.451 "traddr": "10.0.0.2",
00:19:09.451 "trsvcid": "4420"
00:19:09.451 },
00:19:09.451 "peer_address": {
00:19:09.451 "trtype": "TCP",
00:19:09.451 "adrfam": "IPv4",
00:19:09.451 "traddr": "10.0.0.1",
00:19:09.451 "trsvcid": "46620"
00:19:09.451 },
00:19:09.451 "auth": {
00:19:09.451 "state": "completed",
00:19:09.451 "digest": "sha384",
00:19:09.451 "dhgroup": "null"
00:19:09.451 }
00:19:09.451 }
00:19:09.451 ]'
00:19:09.451 16:27:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:19:09.451 16:27:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:19:09.451 16:27:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:19:09.451 16:27:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]]
00:19:09.451 16:27:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:19:09.451 16:27:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:19:09.451 16:27:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:19:09.451 16:27:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:19:09.712 16:27:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:MTcwNmViMWRkN2Y0YjlhNjNkMTM4YzcxZWZlMWI2Nzn6I0Bv: --dhchap-ctrl-secret DHHC-1:02:YzIwYmI0NjFjZDQ0ZjE2M2MyZjdlMjYyZTY5ODMxY2QyYmQ2NzhjMTM1Y2I1Mjg1Hz6qOA==:
00:19:10.283 16:27:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:19:10.283 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:19:10.283 16:27:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:19:10.283 16:27:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable
00:19:10.544 16:27:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:10.544 16:27:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:19:10.544 16:27:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:19:10.544 16:27:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null
00:19:10.544 16:27:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null
00:19:10.544 16:27:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 2
00:19:10.544 16:27:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:19:10.544 16:27:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384
00:19:10.544 16:27:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null
00:19:10.544 16:27:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2
00:19:10.544 16:27:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:19:10.544 16:27:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:19:10.544 16:27:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable
00:19:10.544 16:27:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:10.544 16:27:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:19:10.544 16:27:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:19:10.544 16:27:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:19:10.805
00:19:10.805 16:27:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:19:10.805 16:27:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:19:10.805 16:27:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:19:11.065 16:27:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:11.065 16:27:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:19:11.065 16:27:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable
00:19:11.065 16:27:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:11.065 16:27:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:19:11.065 16:27:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:19:11.065 {
00:19:11.065 "cntlid": 53,
00:19:11.065 "qid": 0,
00:19:11.065 "state": "enabled",
00:19:11.065 "listen_address": {
00:19:11.065 "trtype": "TCP",
00:19:11.065 "adrfam": "IPv4",
00:19:11.065 "traddr": "10.0.0.2",
00:19:11.065 "trsvcid": "4420"
00:19:11.065 },
00:19:11.065 "peer_address": {
00:19:11.065 "trtype": "TCP",
00:19:11.065 "adrfam": "IPv4",
00:19:11.065 "traddr": "10.0.0.1",
00:19:11.065 "trsvcid": "46646"
00:19:11.065 },
00:19:11.065 "auth": {
00:19:11.065 "state": "completed",
00:19:11.065 "digest": "sha384",
00:19:11.065 "dhgroup": "null"
00:19:11.065 }
00:19:11.065 }
00:19:11.065 ]'
00:19:11.065 16:27:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:19:11.065 16:27:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:19:11.066 16:27:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:19:11.066 16:27:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]]
00:19:11.066 16:27:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:19:11.066 16:27:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:19:11.066 16:27:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:19:11.066 16:27:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:19:11.326 16:27:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:YWU2YWVkZGMwODI2MmU1YTFmNmYwYzAzMzg5YmJlOWY0NTgzNTkxYTkwOTg4NDEzo6OqJg==: --dhchap-ctrl-secret DHHC-1:01:ODNhNzljM2E2ZWZhNTY0MjBmODZkOTY2ODUxMDE0MjE3bkvD:
00:19:12.268 16:27:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:19:12.268 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:19:12.268 16:27:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:19:12.268 16:27:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable
00:19:12.268 16:27:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:12.268 16:27:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:19:12.268 16:27:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:19:12.268 16:27:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null
00:19:12.268 16:27:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null
00:19:12.268 16:27:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 3
00:19:12.268 16:27:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:19:12.268 16:27:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384
00:19:12.268 16:27:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null
00:19:12.268 16:27:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3
00:19:12.268 16:27:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:19:12.268 16:27:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3
00:19:12.268 16:27:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable
00:19:12.268 16:27:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:12.268 16:27:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:19:12.268 16:27:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:19:12.268 16:27:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:19:12.529
00:19:12.529 16:27:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:19:12.529 16:27:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:19:12.529 16:27:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:19:12.529 16:27:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:12.529 16:27:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:19:12.529 16:27:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable
00:19:12.529 16:27:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:12.529 16:27:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:19:12.529 16:27:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:19:12.529 {
00:19:12.529 "cntlid": 55,
00:19:12.529 "qid": 0,
00:19:12.529 "state": "enabled",
00:19:12.529 "listen_address": {
00:19:12.529 "trtype": "TCP",
00:19:12.529 "adrfam": "IPv4",
00:19:12.529 "traddr": "10.0.0.2",
00:19:12.529 "trsvcid": "4420"
00:19:12.529 },
00:19:12.529 "peer_address": {
00:19:12.529 "trtype": "TCP",
00:19:12.529 "adrfam": "IPv4",
00:19:12.529 "traddr": "10.0.0.1",
00:19:12.529 "trsvcid": "46670"
00:19:12.529 },
00:19:12.529 "auth": {
00:19:12.529 "state": "completed", "digest": "sha384",
00:19:12.529 "dhgroup": "null"
00:19:12.529 }
00:19:12.529 }
00:19:12.529 ]'
00:19:12.529 16:27:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:19:12.789 16:27:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:19:12.789 16:27:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:19:12.789 16:27:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]]
00:19:12.789 16:27:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:19:12.789 16:27:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:19:12.789 16:27:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:19:12.789 16:27:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:19:12.789 16:27:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:YzVmNDBlOWI5YmVmNjEzMzY3OWMwODRmYTM1Y2U4OTYzZjExZDhmZTBmOWYwNzdkZjQyZDVjYTE4ZDgwZTg4MdbDEV4=:
00:19:13.733 16:27:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:19:13.733 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:19:13.733 16:27:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:19:13.733 16:27:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable
00:19:13.733 16:27:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:13.733 16:27:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:19:13.733 16:27:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}"
00:19:13.733 16:27:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:19:13.733 16:27:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:19:13.733 16:27:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:19:13.733 16:27:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 0
00:19:13.733 16:27:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:19:13.733 16:27:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384
00:19:13.733 16:27:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048
00:19:13.733 16:27:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0
00:19:13.733 16:27:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:19:13.733 16:27:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:19:13.733 16:27:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable
00:19:13.733 16:27:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:13.733 16:27:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:19:13.733 16:27:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:19:13.733 16:27:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:19:13.994
00:19:13.994 16:27:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:19:13.994 16:27:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:19:13.994 16:27:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:19:14.255 16:27:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:14.255 16:27:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:19:14.255 16:27:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable
00:19:14.255 16:27:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:14.255 16:27:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:19:14.255 16:27:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:19:14.255 {
00:19:14.255 "cntlid": 57,
00:19:14.255 "qid": 0,
00:19:14.255 "state": "enabled",
00:19:14.255 "listen_address": {
00:19:14.255 "trtype": "TCP",
00:19:14.255 "adrfam": "IPv4",
00:19:14.255 "traddr": "10.0.0.2",
00:19:14.255 "trsvcid": "4420"
00:19:14.255 },
00:19:14.255 "peer_address": {
00:19:14.255 "trtype": "TCP",
00:19:14.255 "adrfam": "IPv4",
00:19:14.255 "traddr": "10.0.0.1",
00:19:14.255 "trsvcid": "46700"
00:19:14.255 },
00:19:14.255 "auth": {
00:19:14.255 "state": "completed",
00:19:14.255 "digest": "sha384",
00:19:14.255 "dhgroup": "ffdhe2048"
00:19:14.255 }
00:19:14.255 }
00:19:14.255 ]'
00:19:14.255 16:27:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:19:14.255 16:27:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:19:14.255 16:27:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:19:14.255 16:27:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:19:14.255 16:27:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:19:14.516 16:27:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:19:14.516 16:27:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:19:14.516 16:27:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:19:14.516 16:27:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:YWU0YTBiZThiMmUxZGZmODhjMDczNjgyM2Y2Y2IyNmJkZTBkNmI1YjM2MjQyZjBkgXnm2Q==: --dhchap-ctrl-secret DHHC-1:03:OTdhMTRiNWUwZjQwM2M1MmE5ZTI5NjhkNDg1ZDM3OTc1MjA3MTI3ZDBjMmNjOWVmN2ZmNzJjMGFjMDYxNGVlZI1Bu8c=:
00:19:15.460 16:27:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:19:15.460 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:19:15.460 16:27:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:19:15.460 16:27:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable
00:19:15.460 16:27:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:15.460 16:27:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:19:15.460 16:27:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:19:15.460 16:27:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:19:15.460 16:27:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:19:15.460 16:27:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 1
00:19:15.460 16:27:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:19:15.460 16:27:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384
00:19:15.460 16:27:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048
00:19:15.460 16:27:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1
00:19:15.460 16:27:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:19:15.460 16:27:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:19:15.460 16:27:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable
00:19:15.460 16:27:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:15.460 16:27:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:19:15.460 16:27:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:19:15.460 16:27:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:19:15.721
00:19:15.721 16:27:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:19:15.721 16:27:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:19:15.721 16:27:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:19:15.982 16:27:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:15.982 16:27:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:19:15.982 16:27:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable
00:19:15.982 16:27:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:15.982 16:27:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:19:15.982 16:27:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:19:15.982 {
00:19:15.982 "cntlid": 59,
00:19:15.982 "qid": 0,
00:19:15.982 "state": "enabled",
00:19:15.982 "listen_address": {
00:19:15.982 "trtype": "TCP",
00:19:15.982 "adrfam": "IPv4",
00:19:15.982 "traddr": "10.0.0.2",
00:19:15.982 "trsvcid": "4420"
00:19:15.982 },
00:19:15.982 "peer_address": {
00:19:15.982 "trtype": "TCP",
00:19:15.982 "adrfam": "IPv4",
00:19:15.982 "traddr":
"10.0.0.1", 00:19:15.982 "trsvcid": "46728" 00:19:15.982 }, 00:19:15.982 "auth": { 00:19:15.982 "state": "completed", 00:19:15.982 "digest": "sha384", 00:19:15.982 "dhgroup": "ffdhe2048" 00:19:15.982 } 00:19:15.982 } 00:19:15.982 ]' 00:19:15.982 16:27:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:15.982 16:27:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:15.982 16:27:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:15.982 16:27:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:15.982 16:27:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:15.982 16:27:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:15.982 16:27:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:15.982 16:27:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:16.242 16:27:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:MTcwNmViMWRkN2Y0YjlhNjNkMTM4YzcxZWZlMWI2Nzn6I0Bv: --dhchap-ctrl-secret DHHC-1:02:YzIwYmI0NjFjZDQ0ZjE2M2MyZjdlMjYyZTY5ODMxY2QyYmQ2NzhjMTM1Y2I1Mjg1Hz6qOA==: 00:19:17.185 16:27:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:17.185 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:17.185 16:27:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:17.185 16:27:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:17.185 16:27:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:17.185 16:27:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:17.185 16:27:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:17.185 16:27:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:17.185 16:27:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:17.185 16:27:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 2 00:19:17.185 16:27:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:17.185 16:27:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:17.185 16:27:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:17.185 16:27:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:17.185 16:27:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:17.185 16:27:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:17.185 16:27:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:17.185 16:27:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:17.185 16:27:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 
]] 00:19:17.185 16:27:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:17.185 16:27:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:17.492 00:19:17.492 16:27:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:17.492 16:27:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:17.492 16:27:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:17.492 16:27:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:17.492 16:27:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:17.492 16:27:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:17.492 16:27:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:17.492 16:27:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:17.492 16:27:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:17.492 { 00:19:17.492 "cntlid": 61, 00:19:17.492 "qid": 0, 00:19:17.492 "state": "enabled", 00:19:17.492 "listen_address": { 00:19:17.492 "trtype": "TCP", 00:19:17.492 "adrfam": "IPv4", 00:19:17.492 "traddr": "10.0.0.2", 00:19:17.492 "trsvcid": "4420" 00:19:17.492 }, 00:19:17.492 "peer_address": { 
00:19:17.492 "trtype": "TCP", 00:19:17.492 "adrfam": "IPv4", 00:19:17.492 "traddr": "10.0.0.1", 00:19:17.492 "trsvcid": "49298" 00:19:17.492 }, 00:19:17.492 "auth": { 00:19:17.492 "state": "completed", 00:19:17.492 "digest": "sha384", 00:19:17.492 "dhgroup": "ffdhe2048" 00:19:17.492 } 00:19:17.492 } 00:19:17.492 ]' 00:19:17.492 16:27:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:17.492 16:27:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:17.492 16:27:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:17.752 16:27:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:17.752 16:27:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:17.752 16:27:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:17.752 16:27:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:17.752 16:27:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:17.752 16:27:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:YWU2YWVkZGMwODI2MmU1YTFmNmYwYzAzMzg5YmJlOWY0NTgzNTkxYTkwOTg4NDEzo6OqJg==: --dhchap-ctrl-secret DHHC-1:01:ODNhNzljM2E2ZWZhNTY0MjBmODZkOTY2ODUxMDE0MjE3bkvD: 00:19:18.696 16:27:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:18.696 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:18.696 16:27:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:18.696 16:27:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:18.696 16:27:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:18.696 16:27:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:18.696 16:27:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:18.696 16:27:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:18.696 16:27:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:18.957 16:27:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 3 00:19:18.957 16:27:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:18.957 16:27:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:18.957 16:27:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:18.957 16:27:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:18.957 16:27:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:18.957 16:27:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:19:18.957 16:27:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:18.957 16:27:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:18.957 16:27:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 
]] 00:19:18.957 16:27:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:18.957 16:27:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:18.957 00:19:19.218 16:27:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:19.218 16:27:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:19.218 16:27:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:19.218 16:27:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:19.218 16:27:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:19.218 16:27:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:19.218 16:27:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:19.218 16:27:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:19.218 16:27:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:19.218 { 00:19:19.218 "cntlid": 63, 00:19:19.218 "qid": 0, 00:19:19.218 "state": "enabled", 00:19:19.218 "listen_address": { 00:19:19.218 "trtype": "TCP", 00:19:19.218 "adrfam": "IPv4", 00:19:19.218 "traddr": "10.0.0.2", 00:19:19.218 "trsvcid": "4420" 00:19:19.218 }, 00:19:19.218 "peer_address": { 00:19:19.218 "trtype": "TCP", 00:19:19.218 "adrfam": 
"IPv4", 00:19:19.218 "traddr": "10.0.0.1", 00:19:19.218 "trsvcid": "49330" 00:19:19.218 }, 00:19:19.218 "auth": { 00:19:19.218 "state": "completed", 00:19:19.218 "digest": "sha384", 00:19:19.218 "dhgroup": "ffdhe2048" 00:19:19.218 } 00:19:19.218 } 00:19:19.218 ]' 00:19:19.218 16:27:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:19.218 16:27:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:19.218 16:27:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:19.480 16:27:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:19.480 16:27:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:19.480 16:27:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:19.480 16:27:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:19.480 16:27:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:19.480 16:27:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:YzVmNDBlOWI5YmVmNjEzMzY3OWMwODRmYTM1Y2U4OTYzZjExZDhmZTBmOWYwNzdkZjQyZDVjYTE4ZDgwZTg4MdbDEV4=: 00:19:20.422 16:27:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:20.422 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:20.422 16:27:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:20.422 16:27:47 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:20.422 16:27:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:20.422 16:27:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:20.422 16:27:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:20.422 16:27:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:20.422 16:27:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:20.422 16:27:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:20.422 16:27:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 0 00:19:20.422 16:27:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:20.422 16:27:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:20.422 16:27:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:20.422 16:27:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:20.422 16:27:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:20.422 16:27:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:20.422 16:27:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:20.422 16:27:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:20.422 16:27:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 
-- # [[ 0 == 0 ]] 00:19:20.422 16:27:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:20.422 16:27:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:20.682 00:19:20.682 16:27:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:20.682 16:27:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:20.682 16:27:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:20.942 16:27:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:20.942 16:27:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:20.942 16:27:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:20.942 16:27:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:20.942 16:27:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:20.942 16:27:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:20.942 { 00:19:20.942 "cntlid": 65, 00:19:20.942 "qid": 0, 00:19:20.942 "state": "enabled", 00:19:20.942 "listen_address": { 00:19:20.942 "trtype": "TCP", 00:19:20.942 "adrfam": "IPv4", 00:19:20.942 "traddr": "10.0.0.2", 00:19:20.942 "trsvcid": "4420" 00:19:20.942 }, 00:19:20.942 
"peer_address": { 00:19:20.942 "trtype": "TCP", 00:19:20.942 "adrfam": "IPv4", 00:19:20.942 "traddr": "10.0.0.1", 00:19:20.942 "trsvcid": "49354" 00:19:20.942 }, 00:19:20.942 "auth": { 00:19:20.942 "state": "completed", 00:19:20.942 "digest": "sha384", 00:19:20.942 "dhgroup": "ffdhe3072" 00:19:20.942 } 00:19:20.942 } 00:19:20.942 ]' 00:19:20.942 16:27:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:20.942 16:27:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:20.942 16:27:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:20.942 16:27:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:20.942 16:27:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:20.942 16:27:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:20.942 16:27:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:20.942 16:27:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:21.203 16:27:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:YWU0YTBiZThiMmUxZGZmODhjMDczNjgyM2Y2Y2IyNmJkZTBkNmI1YjM2MjQyZjBkgXnm2Q==: --dhchap-ctrl-secret DHHC-1:03:OTdhMTRiNWUwZjQwM2M1MmE5ZTI5NjhkNDg1ZDM3OTc1MjA3MTI3ZDBjMmNjOWVmN2ZmNzJjMGFjMDYxNGVlZI1Bu8c=: 00:19:22.146 16:27:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:22.146 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:22.146 16:27:48 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:22.146 16:27:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:22.146 16:27:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:22.146 16:27:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:22.146 16:27:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:22.146 16:27:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:22.146 16:27:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:22.146 16:27:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 1 00:19:22.146 16:27:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:22.146 16:27:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:22.146 16:27:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:22.146 16:27:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:22.146 16:27:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:22.146 16:27:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:22.146 16:27:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:22.146 16:27:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:22.146 
16:27:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:22.146 16:27:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:22.146 16:27:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:22.407 00:19:22.407 16:27:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:22.407 16:27:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:22.407 16:27:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:22.407 16:27:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:22.407 16:27:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:22.407 16:27:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:22.407 16:27:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:22.668 16:27:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:22.668 16:27:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:22.668 { 00:19:22.668 "cntlid": 67, 00:19:22.668 "qid": 0, 00:19:22.668 "state": "enabled", 00:19:22.668 "listen_address": { 00:19:22.668 "trtype": "TCP", 00:19:22.668 "adrfam": "IPv4", 00:19:22.668 "traddr": "10.0.0.2", 
00:19:22.668 "trsvcid": "4420" 00:19:22.668 }, 00:19:22.668 "peer_address": { 00:19:22.668 "trtype": "TCP", 00:19:22.668 "adrfam": "IPv4", 00:19:22.668 "traddr": "10.0.0.1", 00:19:22.668 "trsvcid": "49396" 00:19:22.668 }, 00:19:22.668 "auth": { 00:19:22.668 "state": "completed", 00:19:22.668 "digest": "sha384", 00:19:22.668 "dhgroup": "ffdhe3072" 00:19:22.668 } 00:19:22.668 } 00:19:22.668 ]' 00:19:22.668 16:27:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:22.668 16:27:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:22.668 16:27:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:22.668 16:27:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:22.668 16:27:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:22.668 16:27:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:22.668 16:27:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:22.668 16:27:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:22.929 16:27:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:MTcwNmViMWRkN2Y0YjlhNjNkMTM4YzcxZWZlMWI2Nzn6I0Bv: --dhchap-ctrl-secret DHHC-1:02:YzIwYmI0NjFjZDQ0ZjE2M2MyZjdlMjYyZTY5ODMxY2QyYmQ2NzhjMTM1Y2I1Mjg1Hz6qOA==: 00:19:23.502 16:27:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:23.502 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:23.502 16:27:50 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:23.502 16:27:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:23.502 16:27:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:23.502 16:27:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:23.502 16:27:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:23.502 16:27:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:23.502 16:27:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:23.764 16:27:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 2 00:19:23.764 16:27:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:23.764 16:27:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:23.764 16:27:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:23.764 16:27:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:23.764 16:27:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:23.764 16:27:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:23.764 16:27:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:23.764 16:27:50 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:19:23.764 16:27:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:23.764 16:27:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:23.764 16:27:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:24.025 00:19:24.025 16:27:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:24.025 16:27:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:24.025 16:27:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:24.285 16:27:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:24.285 16:27:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:24.285 16:27:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:24.285 16:27:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:24.286 16:27:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:24.286 16:27:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:24.286 { 00:19:24.286 "cntlid": 69, 00:19:24.286 "qid": 0, 00:19:24.286 "state": "enabled", 00:19:24.286 "listen_address": { 00:19:24.286 "trtype": "TCP", 
00:19:24.286 "adrfam": "IPv4", 00:19:24.286 "traddr": "10.0.0.2", 00:19:24.286 "trsvcid": "4420" 00:19:24.286 }, 00:19:24.286 "peer_address": { 00:19:24.286 "trtype": "TCP", 00:19:24.286 "adrfam": "IPv4", 00:19:24.286 "traddr": "10.0.0.1", 00:19:24.286 "trsvcid": "49426" 00:19:24.286 }, 00:19:24.286 "auth": { 00:19:24.286 "state": "completed", 00:19:24.286 "digest": "sha384", 00:19:24.286 "dhgroup": "ffdhe3072" 00:19:24.286 } 00:19:24.286 } 00:19:24.286 ]' 00:19:24.286 16:27:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:24.286 16:27:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:24.286 16:27:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:24.286 16:27:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:24.286 16:27:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:24.286 16:27:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:24.286 16:27:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:24.286 16:27:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:24.546 16:27:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:YWU2YWVkZGMwODI2MmU1YTFmNmYwYzAzMzg5YmJlOWY0NTgzNTkxYTkwOTg4NDEzo6OqJg==: --dhchap-ctrl-secret DHHC-1:01:ODNhNzljM2E2ZWZhNTY0MjBmODZkOTY2ODUxMDE0MjE3bkvD: 00:19:25.117 16:27:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:25.378 NQN:nqn.2024-03.io.spdk:cnode0 
disconnected 1 controller(s) 00:19:25.378 16:27:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:25.378 16:27:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:25.378 16:27:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:25.378 16:27:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:25.378 16:27:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:25.378 16:27:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:25.378 16:27:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:25.378 16:27:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 3 00:19:25.378 16:27:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:25.378 16:27:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:25.378 16:27:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:25.378 16:27:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:25.378 16:27:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:25.378 16:27:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:19:25.378 16:27:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:25.378 16:27:52 nvmf_tcp.nvmf_auth_target 
-- common/autotest_common.sh@10 -- # set +x 00:19:25.378 16:27:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:25.378 16:27:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:25.378 16:27:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:25.639 00:19:25.639 16:27:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:25.639 16:27:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:25.639 16:27:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:25.900 16:27:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:25.900 16:27:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:25.900 16:27:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:25.900 16:27:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:25.900 16:27:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:25.900 16:27:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:25.900 { 00:19:25.900 "cntlid": 71, 00:19:25.900 "qid": 0, 00:19:25.900 "state": "enabled", 00:19:25.900 "listen_address": { 00:19:25.900 "trtype": "TCP", 00:19:25.900 "adrfam": "IPv4", 00:19:25.900 "traddr": 
"10.0.0.2", 00:19:25.900 "trsvcid": "4420" 00:19:25.900 }, 00:19:25.900 "peer_address": { 00:19:25.900 "trtype": "TCP", 00:19:25.900 "adrfam": "IPv4", 00:19:25.900 "traddr": "10.0.0.1", 00:19:25.900 "trsvcid": "49460" 00:19:25.900 }, 00:19:25.900 "auth": { 00:19:25.900 "state": "completed", 00:19:25.900 "digest": "sha384", 00:19:25.900 "dhgroup": "ffdhe3072" 00:19:25.900 } 00:19:25.900 } 00:19:25.900 ]' 00:19:25.900 16:27:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:25.900 16:27:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:25.900 16:27:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:25.900 16:27:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:25.900 16:27:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:25.900 16:27:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:25.900 16:27:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:25.900 16:27:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:26.160 16:27:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:YzVmNDBlOWI5YmVmNjEzMzY3OWMwODRmYTM1Y2U4OTYzZjExZDhmZTBmOWYwNzdkZjQyZDVjYTE4ZDgwZTg4MdbDEV4=: 00:19:27.099 16:27:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:27.099 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:27.099 16:27:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:27.099 16:27:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:27.099 16:27:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:27.099 16:27:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:27.099 16:27:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:27.099 16:27:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:27.099 16:27:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:27.099 16:27:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:27.099 16:27:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 0 00:19:27.099 16:27:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:27.099 16:27:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:27.099 16:27:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:27.099 16:27:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:27.099 16:27:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:27.099 16:27:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:27.099 16:27:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:27.099 16:27:53 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:27.099 16:27:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:27.099 16:27:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:27.099 16:27:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:27.359 00:19:27.359 16:27:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:27.359 16:27:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:27.359 16:27:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:27.624 16:27:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:27.624 16:27:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:27.624 16:27:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:27.624 16:27:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:27.624 16:27:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:27.624 16:27:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:27.624 { 00:19:27.624 "cntlid": 73, 00:19:27.624 "qid": 0, 00:19:27.624 "state": "enabled", 00:19:27.624 "listen_address": { 00:19:27.624 
"trtype": "TCP", 00:19:27.624 "adrfam": "IPv4", 00:19:27.624 "traddr": "10.0.0.2", 00:19:27.624 "trsvcid": "4420" 00:19:27.625 }, 00:19:27.625 "peer_address": { 00:19:27.625 "trtype": "TCP", 00:19:27.625 "adrfam": "IPv4", 00:19:27.625 "traddr": "10.0.0.1", 00:19:27.625 "trsvcid": "40664" 00:19:27.625 }, 00:19:27.625 "auth": { 00:19:27.625 "state": "completed", 00:19:27.625 "digest": "sha384", 00:19:27.625 "dhgroup": "ffdhe4096" 00:19:27.625 } 00:19:27.625 } 00:19:27.625 ]' 00:19:27.625 16:27:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:27.625 16:27:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:27.625 16:27:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:27.625 16:27:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:27.625 16:27:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:27.625 16:27:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:27.625 16:27:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:27.625 16:27:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:27.883 16:27:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:YWU0YTBiZThiMmUxZGZmODhjMDczNjgyM2Y2Y2IyNmJkZTBkNmI1YjM2MjQyZjBkgXnm2Q==: --dhchap-ctrl-secret DHHC-1:03:OTdhMTRiNWUwZjQwM2M1MmE5ZTI5NjhkNDg1ZDM3OTc1MjA3MTI3ZDBjMmNjOWVmN2ZmNzJjMGFjMDYxNGVlZI1Bu8c=: 00:19:28.452 16:27:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:19:28.452 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:28.452 16:27:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:28.452 16:27:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:28.452 16:27:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:28.452 16:27:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:28.452 16:27:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:28.452 16:27:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:28.452 16:27:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:28.712 16:27:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 1 00:19:28.712 16:27:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:28.712 16:27:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:28.712 16:27:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:28.712 16:27:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:28.712 16:27:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:28.712 16:27:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:28.712 16:27:55 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@560 -- # xtrace_disable 00:19:28.712 16:27:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:28.712 16:27:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:28.712 16:27:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:28.712 16:27:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:28.974 00:19:28.974 16:27:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:28.974 16:27:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:28.974 16:27:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:29.235 16:27:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:29.235 16:27:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:29.235 16:27:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:29.235 16:27:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:29.235 16:27:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:29.235 16:27:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:29.235 { 00:19:29.235 "cntlid": 75, 00:19:29.235 "qid": 0, 
00:19:29.235 "state": "enabled", 00:19:29.235 "listen_address": { 00:19:29.235 "trtype": "TCP", 00:19:29.235 "adrfam": "IPv4", 00:19:29.235 "traddr": "10.0.0.2", 00:19:29.235 "trsvcid": "4420" 00:19:29.235 }, 00:19:29.235 "peer_address": { 00:19:29.235 "trtype": "TCP", 00:19:29.235 "adrfam": "IPv4", 00:19:29.235 "traddr": "10.0.0.1", 00:19:29.235 "trsvcid": "40700" 00:19:29.235 }, 00:19:29.235 "auth": { 00:19:29.235 "state": "completed", 00:19:29.235 "digest": "sha384", 00:19:29.235 "dhgroup": "ffdhe4096" 00:19:29.235 } 00:19:29.235 } 00:19:29.235 ]' 00:19:29.235 16:27:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:29.235 16:27:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:29.235 16:27:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:29.235 16:27:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:29.235 16:27:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:29.235 16:27:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:29.235 16:27:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:29.235 16:27:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:29.496 16:27:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:MTcwNmViMWRkN2Y0YjlhNjNkMTM4YzcxZWZlMWI2Nzn6I0Bv: --dhchap-ctrl-secret DHHC-1:02:YzIwYmI0NjFjZDQ0ZjE2M2MyZjdlMjYyZTY5ODMxY2QyYmQ2NzhjMTM1Y2I1Mjg1Hz6qOA==: 00:19:30.066 16:27:56 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:30.066 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:30.066 16:27:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:30.066 16:27:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:30.066 16:27:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:30.066 16:27:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:30.066 16:27:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:30.066 16:27:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:30.066 16:27:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:30.326 16:27:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 2 00:19:30.326 16:27:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:30.326 16:27:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:30.326 16:27:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:30.326 16:27:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:30.326 16:27:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:30.326 16:27:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:30.326 
16:27:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:30.326 16:27:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:30.326 16:27:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:30.326 16:27:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:30.326 16:27:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:30.587 00:19:30.587 16:27:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:30.587 16:27:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:30.587 16:27:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:30.847 16:27:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:30.847 16:27:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:30.847 16:27:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:30.847 16:27:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:30.847 16:27:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:30.847 16:27:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:30.847 { 00:19:30.847 
"cntlid": 77, 00:19:30.847 "qid": 0, 00:19:30.847 "state": "enabled", 00:19:30.847 "listen_address": { 00:19:30.848 "trtype": "TCP", 00:19:30.848 "adrfam": "IPv4", 00:19:30.848 "traddr": "10.0.0.2", 00:19:30.848 "trsvcid": "4420" 00:19:30.848 }, 00:19:30.848 "peer_address": { 00:19:30.848 "trtype": "TCP", 00:19:30.848 "adrfam": "IPv4", 00:19:30.848 "traddr": "10.0.0.1", 00:19:30.848 "trsvcid": "40724" 00:19:30.848 }, 00:19:30.848 "auth": { 00:19:30.848 "state": "completed", 00:19:30.848 "digest": "sha384", 00:19:30.848 "dhgroup": "ffdhe4096" 00:19:30.848 } 00:19:30.848 } 00:19:30.848 ]' 00:19:30.848 16:27:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:30.848 16:27:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:30.848 16:27:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:30.848 16:27:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:30.848 16:27:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:30.848 16:27:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:30.848 16:27:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:30.848 16:27:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:31.108 16:27:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:YWU2YWVkZGMwODI2MmU1YTFmNmYwYzAzMzg5YmJlOWY0NTgzNTkxYTkwOTg4NDEzo6OqJg==: --dhchap-ctrl-secret DHHC-1:01:ODNhNzljM2E2ZWZhNTY0MjBmODZkOTY2ODUxMDE0MjE3bkvD: 00:19:31.679 16:27:58 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:31.679 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:31.679 16:27:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:31.679 16:27:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:31.679 16:27:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:31.679 16:27:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:31.679 16:27:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:31.679 16:27:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:31.679 16:27:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:31.939 16:27:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 3 00:19:31.939 16:27:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:31.939 16:27:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:31.939 16:27:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:31.939 16:27:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:31.939 16:27:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:31.939 16:27:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 
00:19:31.939 16:27:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:31.939 16:27:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:31.939 16:27:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:31.939 16:27:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:31.940 16:27:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:32.234 00:19:32.234 16:27:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:32.234 16:27:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:32.234 16:27:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:32.234 16:27:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:32.234 16:27:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:32.234 16:27:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:32.234 16:27:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:32.234 16:27:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:32.234 16:27:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:32.234 { 00:19:32.234 "cntlid": 79, 00:19:32.234 "qid": 0, 
00:19:32.234 "state": "enabled", 00:19:32.234 "listen_address": { 00:19:32.234 "trtype": "TCP", 00:19:32.234 "adrfam": "IPv4", 00:19:32.234 "traddr": "10.0.0.2", 00:19:32.234 "trsvcid": "4420" 00:19:32.234 }, 00:19:32.234 "peer_address": { 00:19:32.234 "trtype": "TCP", 00:19:32.234 "adrfam": "IPv4", 00:19:32.234 "traddr": "10.0.0.1", 00:19:32.234 "trsvcid": "40742" 00:19:32.234 }, 00:19:32.234 "auth": { 00:19:32.234 "state": "completed", 00:19:32.234 "digest": "sha384", 00:19:32.234 "dhgroup": "ffdhe4096" 00:19:32.234 } 00:19:32.234 } 00:19:32.234 ]' 00:19:32.234 16:27:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:32.494 16:27:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:32.494 16:27:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:32.494 16:27:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:32.494 16:27:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:32.494 16:27:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:32.494 16:27:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:32.494 16:27:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:32.754 16:27:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:YzVmNDBlOWI5YmVmNjEzMzY3OWMwODRmYTM1Y2U4OTYzZjExZDhmZTBmOWYwNzdkZjQyZDVjYTE4ZDgwZTg4MdbDEV4=: 00:19:33.325 16:28:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 
00:19:33.325 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:33.325 16:28:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:33.325 16:28:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:33.325 16:28:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:33.325 16:28:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:33.325 16:28:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:33.325 16:28:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:33.325 16:28:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:33.325 16:28:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:33.585 16:28:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 0 00:19:33.585 16:28:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:33.585 16:28:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:33.585 16:28:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:33.585 16:28:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:33.585 16:28:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:33.585 16:28:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:19:33.585 16:28:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:33.585 16:28:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:33.585 16:28:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:33.585 16:28:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:33.585 16:28:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:33.846 00:19:33.846 16:28:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:33.846 16:28:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:33.846 16:28:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:34.106 16:28:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:34.106 16:28:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:34.106 16:28:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:34.106 16:28:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:34.106 16:28:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:34.106 16:28:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # 
qpairs='[ 00:19:34.106 { 00:19:34.106 "cntlid": 81, 00:19:34.106 "qid": 0, 00:19:34.106 "state": "enabled", 00:19:34.106 "listen_address": { 00:19:34.106 "trtype": "TCP", 00:19:34.106 "adrfam": "IPv4", 00:19:34.106 "traddr": "10.0.0.2", 00:19:34.106 "trsvcid": "4420" 00:19:34.106 }, 00:19:34.106 "peer_address": { 00:19:34.106 "trtype": "TCP", 00:19:34.106 "adrfam": "IPv4", 00:19:34.106 "traddr": "10.0.0.1", 00:19:34.106 "trsvcid": "40766" 00:19:34.106 }, 00:19:34.106 "auth": { 00:19:34.106 "state": "completed", 00:19:34.106 "digest": "sha384", 00:19:34.106 "dhgroup": "ffdhe6144" 00:19:34.106 } 00:19:34.106 } 00:19:34.106 ]' 00:19:34.106 16:28:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:34.106 16:28:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:34.106 16:28:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:34.106 16:28:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:34.106 16:28:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:34.106 16:28:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:34.106 16:28:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:34.106 16:28:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:34.366 16:28:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:YWU0YTBiZThiMmUxZGZmODhjMDczNjgyM2Y2Y2IyNmJkZTBkNmI1YjM2MjQyZjBkgXnm2Q==: --dhchap-ctrl-secret 
DHHC-1:03:OTdhMTRiNWUwZjQwM2M1MmE5ZTI5NjhkNDg1ZDM3OTc1MjA3MTI3ZDBjMmNjOWVmN2ZmNzJjMGFjMDYxNGVlZI1Bu8c=: 00:19:34.936 16:28:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:35.197 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:35.197 16:28:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:35.197 16:28:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:35.197 16:28:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.197 16:28:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:35.197 16:28:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:35.197 16:28:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:35.197 16:28:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:35.197 16:28:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 1 00:19:35.197 16:28:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:35.197 16:28:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:35.197 16:28:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:35.197 16:28:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:35.197 16:28:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:35.197 16:28:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:35.197 16:28:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:35.197 16:28:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.197 16:28:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:35.197 16:28:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:35.197 16:28:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:35.458 00:19:35.718 16:28:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:35.718 16:28:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:35.718 16:28:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:35.718 16:28:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:35.718 16:28:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:35.718 16:28:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:35.718 16:28:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.718 16:28:02 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:35.718 16:28:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:35.718 { 00:19:35.718 "cntlid": 83, 00:19:35.718 "qid": 0, 00:19:35.718 "state": "enabled", 00:19:35.718 "listen_address": { 00:19:35.718 "trtype": "TCP", 00:19:35.718 "adrfam": "IPv4", 00:19:35.718 "traddr": "10.0.0.2", 00:19:35.718 "trsvcid": "4420" 00:19:35.718 }, 00:19:35.718 "peer_address": { 00:19:35.718 "trtype": "TCP", 00:19:35.718 "adrfam": "IPv4", 00:19:35.719 "traddr": "10.0.0.1", 00:19:35.719 "trsvcid": "40800" 00:19:35.719 }, 00:19:35.719 "auth": { 00:19:35.719 "state": "completed", 00:19:35.719 "digest": "sha384", 00:19:35.719 "dhgroup": "ffdhe6144" 00:19:35.719 } 00:19:35.719 } 00:19:35.719 ]' 00:19:35.719 16:28:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:35.719 16:28:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:35.719 16:28:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:35.980 16:28:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:35.980 16:28:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:35.980 16:28:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:35.980 16:28:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:35.980 16:28:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:35.980 16:28:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret 
DHHC-1:01:MTcwNmViMWRkN2Y0YjlhNjNkMTM4YzcxZWZlMWI2Nzn6I0Bv: --dhchap-ctrl-secret DHHC-1:02:YzIwYmI0NjFjZDQ0ZjE2M2MyZjdlMjYyZTY5ODMxY2QyYmQ2NzhjMTM1Y2I1Mjg1Hz6qOA==: 00:19:36.927 16:28:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:36.927 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:36.927 16:28:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:36.927 16:28:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:36.927 16:28:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:36.927 16:28:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:36.927 16:28:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:36.927 16:28:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:36.927 16:28:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:36.927 16:28:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 2 00:19:36.927 16:28:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:36.927 16:28:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:36.927 16:28:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:36.927 16:28:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:36.927 16:28:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:36.927 16:28:03 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:36.927 16:28:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:36.927 16:28:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:36.927 16:28:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:36.927 16:28:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:36.927 16:28:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:37.187 00:19:37.447 16:28:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:37.447 16:28:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:37.447 16:28:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:37.447 16:28:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:37.447 16:28:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:37.447 16:28:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:37.447 16:28:04 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:19:37.447 16:28:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:37.447 16:28:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:37.447 { 00:19:37.447 "cntlid": 85, 00:19:37.447 "qid": 0, 00:19:37.447 "state": "enabled", 00:19:37.447 "listen_address": { 00:19:37.447 "trtype": "TCP", 00:19:37.447 "adrfam": "IPv4", 00:19:37.447 "traddr": "10.0.0.2", 00:19:37.447 "trsvcid": "4420" 00:19:37.447 }, 00:19:37.447 "peer_address": { 00:19:37.447 "trtype": "TCP", 00:19:37.447 "adrfam": "IPv4", 00:19:37.447 "traddr": "10.0.0.1", 00:19:37.447 "trsvcid": "37118" 00:19:37.447 }, 00:19:37.447 "auth": { 00:19:37.447 "state": "completed", 00:19:37.447 "digest": "sha384", 00:19:37.447 "dhgroup": "ffdhe6144" 00:19:37.447 } 00:19:37.447 } 00:19:37.447 ]' 00:19:37.447 16:28:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:37.447 16:28:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:37.447 16:28:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:37.717 16:28:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:37.717 16:28:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:37.717 16:28:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:37.717 16:28:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:37.717 16:28:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:37.717 16:28:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 
--hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:YWU2YWVkZGMwODI2MmU1YTFmNmYwYzAzMzg5YmJlOWY0NTgzNTkxYTkwOTg4NDEzo6OqJg==: --dhchap-ctrl-secret DHHC-1:01:ODNhNzljM2E2ZWZhNTY0MjBmODZkOTY2ODUxMDE0MjE3bkvD: 00:19:38.659 16:28:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:38.659 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:38.659 16:28:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:38.659 16:28:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:38.659 16:28:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:38.659 16:28:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:38.659 16:28:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:38.659 16:28:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:38.659 16:28:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:38.659 16:28:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 3 00:19:38.659 16:28:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:38.659 16:28:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:38.659 16:28:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:38.659 16:28:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:38.659 16:28:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:38.659 16:28:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:19:38.659 16:28:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:38.659 16:28:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:38.659 16:28:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:38.659 16:28:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:38.659 16:28:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:39.230 00:19:39.230 16:28:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:39.230 16:28:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:39.230 16:28:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:39.230 16:28:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:39.230 16:28:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:39.230 16:28:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:39.230 16:28:05 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:19:39.230 16:28:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:39.230 16:28:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:39.230 { 00:19:39.230 "cntlid": 87, 00:19:39.230 "qid": 0, 00:19:39.230 "state": "enabled", 00:19:39.230 "listen_address": { 00:19:39.230 "trtype": "TCP", 00:19:39.230 "adrfam": "IPv4", 00:19:39.230 "traddr": "10.0.0.2", 00:19:39.230 "trsvcid": "4420" 00:19:39.230 }, 00:19:39.230 "peer_address": { 00:19:39.230 "trtype": "TCP", 00:19:39.230 "adrfam": "IPv4", 00:19:39.230 "traddr": "10.0.0.1", 00:19:39.230 "trsvcid": "37160" 00:19:39.230 }, 00:19:39.230 "auth": { 00:19:39.230 "state": "completed", 00:19:39.230 "digest": "sha384", 00:19:39.230 "dhgroup": "ffdhe6144" 00:19:39.230 } 00:19:39.230 } 00:19:39.230 ]' 00:19:39.230 16:28:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:39.230 16:28:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:39.230 16:28:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:39.491 16:28:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:39.491 16:28:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:39.491 16:28:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:39.491 16:28:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:39.491 16:28:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:39.491 16:28:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 
--hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:YzVmNDBlOWI5YmVmNjEzMzY3OWMwODRmYTM1Y2U4OTYzZjExZDhmZTBmOWYwNzdkZjQyZDVjYTE4ZDgwZTg4MdbDEV4=: 00:19:40.430 16:28:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:40.430 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:40.430 16:28:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:40.430 16:28:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:40.430 16:28:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:40.430 16:28:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:40.430 16:28:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:40.430 16:28:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:40.430 16:28:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:40.430 16:28:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:40.430 16:28:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 0 00:19:40.430 16:28:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:40.430 16:28:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:40.430 16:28:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:40.430 16:28:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:40.430 16:28:07 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:40.430 16:28:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:40.430 16:28:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:40.430 16:28:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:40.430 16:28:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:40.430 16:28:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:40.430 16:28:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:41.002 00:19:41.002 16:28:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:41.002 16:28:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:41.002 16:28:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:41.263 16:28:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:41.263 16:28:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:41.263 16:28:07 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@560 -- # xtrace_disable 00:19:41.263 16:28:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:41.263 16:28:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:41.263 16:28:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:41.263 { 00:19:41.263 "cntlid": 89, 00:19:41.263 "qid": 0, 00:19:41.263 "state": "enabled", 00:19:41.263 "listen_address": { 00:19:41.263 "trtype": "TCP", 00:19:41.263 "adrfam": "IPv4", 00:19:41.263 "traddr": "10.0.0.2", 00:19:41.263 "trsvcid": "4420" 00:19:41.263 }, 00:19:41.263 "peer_address": { 00:19:41.263 "trtype": "TCP", 00:19:41.263 "adrfam": "IPv4", 00:19:41.263 "traddr": "10.0.0.1", 00:19:41.263 "trsvcid": "37186" 00:19:41.263 }, 00:19:41.263 "auth": { 00:19:41.263 "state": "completed", 00:19:41.263 "digest": "sha384", 00:19:41.263 "dhgroup": "ffdhe8192" 00:19:41.263 } 00:19:41.263 } 00:19:41.263 ]' 00:19:41.263 16:28:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:41.263 16:28:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:41.263 16:28:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:41.263 16:28:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:41.263 16:28:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:41.263 16:28:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:41.263 16:28:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:41.263 16:28:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:41.523 16:28:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:YWU0YTBiZThiMmUxZGZmODhjMDczNjgyM2Y2Y2IyNmJkZTBkNmI1YjM2MjQyZjBkgXnm2Q==: --dhchap-ctrl-secret DHHC-1:03:OTdhMTRiNWUwZjQwM2M1MmE5ZTI5NjhkNDg1ZDM3OTc1MjA3MTI3ZDBjMmNjOWVmN2ZmNzJjMGFjMDYxNGVlZI1Bu8c=: 00:19:42.467 16:28:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:42.467 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:42.467 16:28:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:42.467 16:28:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:42.467 16:28:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:42.467 16:28:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:42.467 16:28:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:42.467 16:28:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:42.467 16:28:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:42.467 16:28:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 1 00:19:42.467 16:28:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:42.467 16:28:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:42.467 16:28:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:42.467 16:28:09 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:42.467 16:28:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:42.467 16:28:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:42.467 16:28:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:42.467 16:28:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:42.467 16:28:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:42.467 16:28:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:42.467 16:28:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:43.039 00:19:43.039 16:28:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:43.039 16:28:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:43.039 16:28:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:43.039 16:28:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:43.039 16:28:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:43.039 16:28:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:43.039 16:28:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:43.039 16:28:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:43.039 16:28:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:43.039 { 00:19:43.039 "cntlid": 91, 00:19:43.039 "qid": 0, 00:19:43.039 "state": "enabled", 00:19:43.039 "listen_address": { 00:19:43.039 "trtype": "TCP", 00:19:43.039 "adrfam": "IPv4", 00:19:43.039 "traddr": "10.0.0.2", 00:19:43.039 "trsvcid": "4420" 00:19:43.039 }, 00:19:43.039 "peer_address": { 00:19:43.039 "trtype": "TCP", 00:19:43.039 "adrfam": "IPv4", 00:19:43.039 "traddr": "10.0.0.1", 00:19:43.039 "trsvcid": "37224" 00:19:43.039 }, 00:19:43.039 "auth": { 00:19:43.039 "state": "completed", 00:19:43.039 "digest": "sha384", 00:19:43.039 "dhgroup": "ffdhe8192" 00:19:43.039 } 00:19:43.039 } 00:19:43.039 ]' 00:19:43.039 16:28:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:43.299 16:28:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:43.299 16:28:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:43.299 16:28:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:43.299 16:28:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:43.299 16:28:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:43.299 16:28:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:43.299 16:28:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:43.560 
16:28:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:MTcwNmViMWRkN2Y0YjlhNjNkMTM4YzcxZWZlMWI2Nzn6I0Bv: --dhchap-ctrl-secret DHHC-1:02:YzIwYmI0NjFjZDQ0ZjE2M2MyZjdlMjYyZTY5ODMxY2QyYmQ2NzhjMTM1Y2I1Mjg1Hz6qOA==: 00:19:44.132 16:28:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:44.132 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:44.132 16:28:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:44.132 16:28:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:44.132 16:28:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:44.132 16:28:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:44.132 16:28:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:44.133 16:28:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:44.133 16:28:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:44.393 16:28:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 2 00:19:44.393 16:28:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:44.393 16:28:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:44.393 16:28:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # 
dhgroup=ffdhe8192 00:19:44.393 16:28:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:44.393 16:28:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:44.393 16:28:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:44.393 16:28:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:44.393 16:28:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:44.393 16:28:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:44.393 16:28:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:44.393 16:28:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:44.966 00:19:44.966 16:28:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:44.966 16:28:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:44.966 16:28:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:44.966 16:28:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:44.966 16:28:11 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:44.966 16:28:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:44.966 16:28:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:44.966 16:28:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:44.966 16:28:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:44.966 { 00:19:44.966 "cntlid": 93, 00:19:44.966 "qid": 0, 00:19:44.966 "state": "enabled", 00:19:44.966 "listen_address": { 00:19:44.966 "trtype": "TCP", 00:19:44.966 "adrfam": "IPv4", 00:19:44.966 "traddr": "10.0.0.2", 00:19:44.966 "trsvcid": "4420" 00:19:44.966 }, 00:19:44.966 "peer_address": { 00:19:44.966 "trtype": "TCP", 00:19:44.966 "adrfam": "IPv4", 00:19:44.966 "traddr": "10.0.0.1", 00:19:44.966 "trsvcid": "37240" 00:19:44.966 }, 00:19:44.966 "auth": { 00:19:44.966 "state": "completed", 00:19:44.966 "digest": "sha384", 00:19:44.966 "dhgroup": "ffdhe8192" 00:19:44.966 } 00:19:44.966 } 00:19:44.966 ]' 00:19:44.966 16:28:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:45.227 16:28:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:45.227 16:28:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:45.227 16:28:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:45.227 16:28:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:45.227 16:28:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:45.227 16:28:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:45.227 16:28:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:19:45.227 16:28:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:YWU2YWVkZGMwODI2MmU1YTFmNmYwYzAzMzg5YmJlOWY0NTgzNTkxYTkwOTg4NDEzo6OqJg==: --dhchap-ctrl-secret DHHC-1:01:ODNhNzljM2E2ZWZhNTY0MjBmODZkOTY2ODUxMDE0MjE3bkvD: 00:19:46.178 16:28:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:46.178 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:46.178 16:28:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:46.178 16:28:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:46.178 16:28:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:46.178 16:28:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:46.178 16:28:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:46.178 16:28:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:46.178 16:28:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:46.178 16:28:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 3 00:19:46.178 16:28:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:46.178 16:28:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:46.178 16:28:12 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:46.178 16:28:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:46.178 16:28:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:46.178 16:28:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:19:46.178 16:28:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:46.178 16:28:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:46.178 16:28:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:46.178 16:28:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:46.178 16:28:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:46.747 00:19:46.747 16:28:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:46.747 16:28:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:46.747 16:28:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:47.007 16:28:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:47.007 16:28:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:47.007 16:28:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:47.007 16:28:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:47.008 16:28:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:47.008 16:28:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:47.008 { 00:19:47.008 "cntlid": 95, 00:19:47.008 "qid": 0, 00:19:47.008 "state": "enabled", 00:19:47.008 "listen_address": { 00:19:47.008 "trtype": "TCP", 00:19:47.008 "adrfam": "IPv4", 00:19:47.008 "traddr": "10.0.0.2", 00:19:47.008 "trsvcid": "4420" 00:19:47.008 }, 00:19:47.008 "peer_address": { 00:19:47.008 "trtype": "TCP", 00:19:47.008 "adrfam": "IPv4", 00:19:47.008 "traddr": "10.0.0.1", 00:19:47.008 "trsvcid": "59894" 00:19:47.008 }, 00:19:47.008 "auth": { 00:19:47.008 "state": "completed", 00:19:47.008 "digest": "sha384", 00:19:47.008 "dhgroup": "ffdhe8192" 00:19:47.008 } 00:19:47.008 } 00:19:47.008 ]' 00:19:47.008 16:28:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:47.008 16:28:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:47.008 16:28:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:47.008 16:28:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:47.008 16:28:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:47.008 16:28:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:47.008 16:28:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:47.008 16:28:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:47.311 
16:28:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:YzVmNDBlOWI5YmVmNjEzMzY3OWMwODRmYTM1Y2U4OTYzZjExZDhmZTBmOWYwNzdkZjQyZDVjYTE4ZDgwZTg4MdbDEV4=: 00:19:47.895 16:28:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:47.895 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:47.895 16:28:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:47.895 16:28:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:47.895 16:28:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:48.156 16:28:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:48.156 16:28:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:19:48.156 16:28:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:48.156 16:28:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:48.156 16:28:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:48.156 16:28:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:48.156 16:28:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 0 00:19:48.156 16:28:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:48.156 16:28:14 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:48.156 16:28:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:48.156 16:28:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:48.156 16:28:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:48.156 16:28:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:48.156 16:28:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:48.156 16:28:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:48.156 16:28:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:48.156 16:28:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:48.156 16:28:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:48.416 00:19:48.416 16:28:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:48.416 16:28:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:48.416 16:28:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:48.676 16:28:15 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:48.677 16:28:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:48.677 16:28:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:48.677 16:28:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:48.677 16:28:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:48.677 16:28:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:48.677 { 00:19:48.677 "cntlid": 97, 00:19:48.677 "qid": 0, 00:19:48.677 "state": "enabled", 00:19:48.677 "listen_address": { 00:19:48.677 "trtype": "TCP", 00:19:48.677 "adrfam": "IPv4", 00:19:48.677 "traddr": "10.0.0.2", 00:19:48.677 "trsvcid": "4420" 00:19:48.677 }, 00:19:48.677 "peer_address": { 00:19:48.677 "trtype": "TCP", 00:19:48.677 "adrfam": "IPv4", 00:19:48.677 "traddr": "10.0.0.1", 00:19:48.677 "trsvcid": "59924" 00:19:48.677 }, 00:19:48.677 "auth": { 00:19:48.677 "state": "completed", 00:19:48.677 "digest": "sha512", 00:19:48.677 "dhgroup": "null" 00:19:48.677 } 00:19:48.677 } 00:19:48.677 ]' 00:19:48.677 16:28:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:48.677 16:28:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:48.677 16:28:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:48.677 16:28:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:48.677 16:28:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:48.677 16:28:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:48.677 16:28:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:48.677 16:28:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:48.937 16:28:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:YWU0YTBiZThiMmUxZGZmODhjMDczNjgyM2Y2Y2IyNmJkZTBkNmI1YjM2MjQyZjBkgXnm2Q==: --dhchap-ctrl-secret DHHC-1:03:OTdhMTRiNWUwZjQwM2M1MmE5ZTI5NjhkNDg1ZDM3OTc1MjA3MTI3ZDBjMmNjOWVmN2ZmNzJjMGFjMDYxNGVlZI1Bu8c=: 00:19:49.879 16:28:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:49.879 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:49.879 16:28:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:49.879 16:28:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:49.879 16:28:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:49.879 16:28:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:49.879 16:28:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:49.879 16:28:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:49.879 16:28:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:49.879 16:28:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 1 00:19:49.879 16:28:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey 
qpairs 00:19:49.879 16:28:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:49.879 16:28:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:49.879 16:28:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:49.879 16:28:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:49.879 16:28:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:49.879 16:28:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:49.879 16:28:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:49.879 16:28:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:49.879 16:28:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:49.879 16:28:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:50.140 00:19:50.140 16:28:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:50.140 16:28:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:50.140 16:28:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:19:50.140 16:28:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:50.140 16:28:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:50.140 16:28:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:50.140 16:28:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:50.140 16:28:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:50.140 16:28:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:50.140 { 00:19:50.140 "cntlid": 99, 00:19:50.140 "qid": 0, 00:19:50.140 "state": "enabled", 00:19:50.140 "listen_address": { 00:19:50.140 "trtype": "TCP", 00:19:50.140 "adrfam": "IPv4", 00:19:50.140 "traddr": "10.0.0.2", 00:19:50.140 "trsvcid": "4420" 00:19:50.140 }, 00:19:50.140 "peer_address": { 00:19:50.140 "trtype": "TCP", 00:19:50.140 "adrfam": "IPv4", 00:19:50.140 "traddr": "10.0.0.1", 00:19:50.140 "trsvcid": "59954" 00:19:50.140 }, 00:19:50.140 "auth": { 00:19:50.140 "state": "completed", 00:19:50.140 "digest": "sha512", 00:19:50.140 "dhgroup": "null" 00:19:50.140 } 00:19:50.140 } 00:19:50.140 ]' 00:19:50.140 16:28:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:50.400 16:28:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:50.400 16:28:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:50.400 16:28:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:50.400 16:28:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:50.400 16:28:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:50.400 16:28:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:50.400 
16:28:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:50.400 16:28:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:MTcwNmViMWRkN2Y0YjlhNjNkMTM4YzcxZWZlMWI2Nzn6I0Bv: --dhchap-ctrl-secret DHHC-1:02:YzIwYmI0NjFjZDQ0ZjE2M2MyZjdlMjYyZTY5ODMxY2QyYmQ2NzhjMTM1Y2I1Mjg1Hz6qOA==: 00:19:51.341 16:28:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:51.341 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:51.341 16:28:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:51.341 16:28:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:51.341 16:28:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:51.341 16:28:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:51.341 16:28:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:51.341 16:28:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:51.341 16:28:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:51.341 16:28:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 2 00:19:51.341 16:28:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup 
key ckey qpairs 00:19:51.341 16:28:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:51.341 16:28:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:51.341 16:28:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:51.341 16:28:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:51.341 16:28:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:51.341 16:28:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:51.341 16:28:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:51.601 16:28:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:51.601 16:28:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:51.601 16:28:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:51.601 00:19:51.601 16:28:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:51.601 16:28:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:51.601 16:28:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:19:51.862 16:28:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:51.862 16:28:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:51.862 16:28:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:51.862 16:28:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:51.862 16:28:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:51.862 16:28:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:51.862 { 00:19:51.862 "cntlid": 101, 00:19:51.862 "qid": 0, 00:19:51.862 "state": "enabled", 00:19:51.862 "listen_address": { 00:19:51.862 "trtype": "TCP", 00:19:51.862 "adrfam": "IPv4", 00:19:51.862 "traddr": "10.0.0.2", 00:19:51.862 "trsvcid": "4420" 00:19:51.862 }, 00:19:51.862 "peer_address": { 00:19:51.862 "trtype": "TCP", 00:19:51.862 "adrfam": "IPv4", 00:19:51.863 "traddr": "10.0.0.1", 00:19:51.863 "trsvcid": "59972" 00:19:51.863 }, 00:19:51.863 "auth": { 00:19:51.863 "state": "completed", 00:19:51.863 "digest": "sha512", 00:19:51.863 "dhgroup": "null" 00:19:51.863 } 00:19:51.863 } 00:19:51.863 ]' 00:19:51.863 16:28:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:51.863 16:28:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:51.863 16:28:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:51.863 16:28:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:51.863 16:28:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:52.123 16:28:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:52.123 16:28:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:52.123 
16:28:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:52.123 16:28:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:YWU2YWVkZGMwODI2MmU1YTFmNmYwYzAzMzg5YmJlOWY0NTgzNTkxYTkwOTg4NDEzo6OqJg==: --dhchap-ctrl-secret DHHC-1:01:ODNhNzljM2E2ZWZhNTY0MjBmODZkOTY2ODUxMDE0MjE3bkvD: 00:19:53.065 16:28:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:53.065 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:53.065 16:28:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:53.065 16:28:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:53.065 16:28:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:53.065 16:28:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:53.065 16:28:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:53.065 16:28:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:53.065 16:28:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:53.065 16:28:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 3 00:19:53.065 16:28:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup 
key ckey qpairs 00:19:53.065 16:28:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:53.065 16:28:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:53.065 16:28:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:53.065 16:28:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:53.065 16:28:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:19:53.065 16:28:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:53.065 16:28:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:53.065 16:28:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:53.065 16:28:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:53.065 16:28:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:53.325 00:19:53.325 16:28:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:53.325 16:28:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:53.326 16:28:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:53.586 16:28:20 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:53.586 16:28:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:53.586 16:28:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:53.586 16:28:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:53.586 16:28:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:53.586 16:28:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:53.586 { 00:19:53.586 "cntlid": 103, 00:19:53.586 "qid": 0, 00:19:53.586 "state": "enabled", 00:19:53.586 "listen_address": { 00:19:53.586 "trtype": "TCP", 00:19:53.586 "adrfam": "IPv4", 00:19:53.586 "traddr": "10.0.0.2", 00:19:53.586 "trsvcid": "4420" 00:19:53.586 }, 00:19:53.586 "peer_address": { 00:19:53.586 "trtype": "TCP", 00:19:53.586 "adrfam": "IPv4", 00:19:53.586 "traddr": "10.0.0.1", 00:19:53.586 "trsvcid": "60010" 00:19:53.586 }, 00:19:53.586 "auth": { 00:19:53.586 "state": "completed", 00:19:53.586 "digest": "sha512", 00:19:53.586 "dhgroup": "null" 00:19:53.586 } 00:19:53.586 } 00:19:53.586 ]' 00:19:53.586 16:28:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:53.586 16:28:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:53.586 16:28:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:53.586 16:28:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:53.586 16:28:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:53.586 16:28:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:53.586 16:28:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:53.586 16:28:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:53.847 16:28:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:YzVmNDBlOWI5YmVmNjEzMzY3OWMwODRmYTM1Y2U4OTYzZjExZDhmZTBmOWYwNzdkZjQyZDVjYTE4ZDgwZTg4MdbDEV4=: 00:19:54.417 16:28:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:54.678 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:54.678 16:28:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:54.678 16:28:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:54.678 16:28:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:54.678 16:28:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:54.678 16:28:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:54.678 16:28:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:54.678 16:28:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:54.678 16:28:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:54.678 16:28:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 0 00:19:54.678 16:28:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup 
key ckey qpairs 00:19:54.678 16:28:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:54.678 16:28:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:54.678 16:28:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:54.678 16:28:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:54.678 16:28:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:54.678 16:28:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:54.678 16:28:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:54.678 16:28:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:54.678 16:28:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:54.678 16:28:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:54.939 00:19:54.939 16:28:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:54.939 16:28:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:54.939 16:28:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # 
jq -r '.[].name' 00:19:55.200 16:28:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:55.200 16:28:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:55.200 16:28:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:55.200 16:28:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:55.200 16:28:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:55.200 16:28:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:55.200 { 00:19:55.200 "cntlid": 105, 00:19:55.200 "qid": 0, 00:19:55.200 "state": "enabled", 00:19:55.200 "listen_address": { 00:19:55.200 "trtype": "TCP", 00:19:55.200 "adrfam": "IPv4", 00:19:55.200 "traddr": "10.0.0.2", 00:19:55.200 "trsvcid": "4420" 00:19:55.200 }, 00:19:55.200 "peer_address": { 00:19:55.200 "trtype": "TCP", 00:19:55.200 "adrfam": "IPv4", 00:19:55.200 "traddr": "10.0.0.1", 00:19:55.200 "trsvcid": "60024" 00:19:55.200 }, 00:19:55.200 "auth": { 00:19:55.200 "state": "completed", 00:19:55.200 "digest": "sha512", 00:19:55.200 "dhgroup": "ffdhe2048" 00:19:55.200 } 00:19:55.200 } 00:19:55.200 ]' 00:19:55.200 16:28:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:55.200 16:28:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:55.200 16:28:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:55.200 16:28:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:55.200 16:28:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:55.200 16:28:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:55.200 16:28:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 
00:19:55.200 16:28:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:55.461 16:28:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:YWU0YTBiZThiMmUxZGZmODhjMDczNjgyM2Y2Y2IyNmJkZTBkNmI1YjM2MjQyZjBkgXnm2Q==: --dhchap-ctrl-secret DHHC-1:03:OTdhMTRiNWUwZjQwM2M1MmE5ZTI5NjhkNDg1ZDM3OTc1MjA3MTI3ZDBjMmNjOWVmN2ZmNzJjMGFjMDYxNGVlZI1Bu8c=: 00:19:56.031 16:28:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:56.291 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:56.291 16:28:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:56.291 16:28:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:56.291 16:28:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:56.291 16:28:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:56.291 16:28:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:56.291 16:28:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:56.291 16:28:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:56.291 16:28:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 1 00:19:56.291 16:28:23 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:56.291 16:28:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:56.291 16:28:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:56.291 16:28:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:56.291 16:28:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:56.291 16:28:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:56.291 16:28:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:56.291 16:28:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:56.291 16:28:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:56.291 16:28:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:56.291 16:28:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:56.552 00:19:56.552 16:28:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:56.552 16:28:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 
00:19:56.552 16:28:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:56.813 16:28:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:56.813 16:28:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:56.813 16:28:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:56.813 16:28:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:56.813 16:28:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:56.813 16:28:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:56.813 { 00:19:56.813 "cntlid": 107, 00:19:56.813 "qid": 0, 00:19:56.813 "state": "enabled", 00:19:56.813 "listen_address": { 00:19:56.813 "trtype": "TCP", 00:19:56.813 "adrfam": "IPv4", 00:19:56.813 "traddr": "10.0.0.2", 00:19:56.813 "trsvcid": "4420" 00:19:56.813 }, 00:19:56.813 "peer_address": { 00:19:56.813 "trtype": "TCP", 00:19:56.813 "adrfam": "IPv4", 00:19:56.813 "traddr": "10.0.0.1", 00:19:56.813 "trsvcid": "40400" 00:19:56.813 }, 00:19:56.813 "auth": { 00:19:56.813 "state": "completed", 00:19:56.813 "digest": "sha512", 00:19:56.813 "dhgroup": "ffdhe2048" 00:19:56.813 } 00:19:56.813 } 00:19:56.813 ]' 00:19:56.813 16:28:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:56.813 16:28:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:56.813 16:28:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:56.813 16:28:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:56.813 16:28:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:56.813 16:28:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:56.813 16:28:23 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:56.813 16:28:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:57.078 16:28:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:MTcwNmViMWRkN2Y0YjlhNjNkMTM4YzcxZWZlMWI2Nzn6I0Bv: --dhchap-ctrl-secret DHHC-1:02:YzIwYmI0NjFjZDQ0ZjE2M2MyZjdlMjYyZTY5ODMxY2QyYmQ2NzhjMTM1Y2I1Mjg1Hz6qOA==: 00:19:58.017 16:28:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:58.017 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:58.017 16:28:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:58.017 16:28:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:58.017 16:28:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:58.017 16:28:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:58.017 16:28:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:58.017 16:28:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:58.017 16:28:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:58.017 16:28:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 2 
00:19:58.017 16:28:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:58.018 16:28:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:58.018 16:28:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:58.018 16:28:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:58.018 16:28:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:58.018 16:28:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:58.018 16:28:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:58.018 16:28:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:58.018 16:28:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:58.018 16:28:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:58.018 16:28:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:58.278 00:19:58.278 16:28:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:58.278 16:28:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:19:58.278 16:28:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:58.278 16:28:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:58.278 16:28:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:58.278 16:28:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:58.278 16:28:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:58.538 16:28:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:58.538 16:28:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:58.538 { 00:19:58.538 "cntlid": 109, 00:19:58.538 "qid": 0, 00:19:58.538 "state": "enabled", 00:19:58.538 "listen_address": { 00:19:58.538 "trtype": "TCP", 00:19:58.538 "adrfam": "IPv4", 00:19:58.538 "traddr": "10.0.0.2", 00:19:58.538 "trsvcid": "4420" 00:19:58.538 }, 00:19:58.538 "peer_address": { 00:19:58.538 "trtype": "TCP", 00:19:58.538 "adrfam": "IPv4", 00:19:58.538 "traddr": "10.0.0.1", 00:19:58.538 "trsvcid": "40428" 00:19:58.538 }, 00:19:58.538 "auth": { 00:19:58.538 "state": "completed", 00:19:58.538 "digest": "sha512", 00:19:58.538 "dhgroup": "ffdhe2048" 00:19:58.538 } 00:19:58.538 } 00:19:58.538 ]' 00:19:58.538 16:28:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:58.538 16:28:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:58.538 16:28:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:58.538 16:28:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:58.538 16:28:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:58.538 16:28:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:58.539 16:28:25 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:58.539 16:28:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:58.799 16:28:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:YWU2YWVkZGMwODI2MmU1YTFmNmYwYzAzMzg5YmJlOWY0NTgzNTkxYTkwOTg4NDEzo6OqJg==: --dhchap-ctrl-secret DHHC-1:01:ODNhNzljM2E2ZWZhNTY0MjBmODZkOTY2ODUxMDE0MjE3bkvD: 00:19:59.370 16:28:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:59.370 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:59.370 16:28:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:59.370 16:28:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:59.370 16:28:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:59.370 16:28:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:59.370 16:28:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:59.370 16:28:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:59.370 16:28:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:59.631 16:28:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # 
connect_authenticate sha512 ffdhe2048 3 00:19:59.631 16:28:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:59.631 16:28:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:59.631 16:28:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:59.631 16:28:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:59.631 16:28:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:59.631 16:28:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:19:59.631 16:28:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:59.631 16:28:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:59.631 16:28:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:59.631 16:28:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:59.631 16:28:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:59.892 00:19:59.892 16:28:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:59.892 16:28:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:59.892 16:28:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:00.153 16:28:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:00.153 16:28:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:00.153 16:28:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:00.153 16:28:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:00.153 16:28:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:00.153 16:28:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:00.153 { 00:20:00.153 "cntlid": 111, 00:20:00.153 "qid": 0, 00:20:00.153 "state": "enabled", 00:20:00.153 "listen_address": { 00:20:00.153 "trtype": "TCP", 00:20:00.153 "adrfam": "IPv4", 00:20:00.153 "traddr": "10.0.0.2", 00:20:00.153 "trsvcid": "4420" 00:20:00.153 }, 00:20:00.153 "peer_address": { 00:20:00.153 "trtype": "TCP", 00:20:00.153 "adrfam": "IPv4", 00:20:00.153 "traddr": "10.0.0.1", 00:20:00.153 "trsvcid": "40460" 00:20:00.153 }, 00:20:00.153 "auth": { 00:20:00.153 "state": "completed", 00:20:00.153 "digest": "sha512", 00:20:00.153 "dhgroup": "ffdhe2048" 00:20:00.153 } 00:20:00.153 } 00:20:00.153 ]' 00:20:00.153 16:28:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:00.153 16:28:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:00.153 16:28:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:00.153 16:28:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:00.153 16:28:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:00.153 16:28:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:00.153 16:28:26 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:00.153 16:28:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:00.414 16:28:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:YzVmNDBlOWI5YmVmNjEzMzY3OWMwODRmYTM1Y2U4OTYzZjExZDhmZTBmOWYwNzdkZjQyZDVjYTE4ZDgwZTg4MdbDEV4=: 00:20:00.986 16:28:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:00.986 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:00.986 16:28:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:00.986 16:28:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:00.986 16:28:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:01.247 16:28:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:01.247 16:28:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:01.247 16:28:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:01.247 16:28:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:01.247 16:28:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:01.247 16:28:27 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 0 00:20:01.247 16:28:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:01.247 16:28:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:01.247 16:28:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:01.247 16:28:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:01.247 16:28:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:01.247 16:28:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:01.247 16:28:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:01.247 16:28:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:01.247 16:28:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:01.247 16:28:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:01.247 16:28:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:01.507 00:20:01.507 16:28:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:01.507 16:28:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:01.507 16:28:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:01.769 16:28:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:01.769 16:28:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:01.769 16:28:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:01.769 16:28:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:01.769 16:28:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:01.769 16:28:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:01.769 { 00:20:01.769 "cntlid": 113, 00:20:01.769 "qid": 0, 00:20:01.769 "state": "enabled", 00:20:01.769 "listen_address": { 00:20:01.769 "trtype": "TCP", 00:20:01.769 "adrfam": "IPv4", 00:20:01.769 "traddr": "10.0.0.2", 00:20:01.769 "trsvcid": "4420" 00:20:01.769 }, 00:20:01.769 "peer_address": { 00:20:01.769 "trtype": "TCP", 00:20:01.769 "adrfam": "IPv4", 00:20:01.769 "traddr": "10.0.0.1", 00:20:01.769 "trsvcid": "40472" 00:20:01.769 }, 00:20:01.769 "auth": { 00:20:01.769 "state": "completed", 00:20:01.769 "digest": "sha512", 00:20:01.769 "dhgroup": "ffdhe3072" 00:20:01.769 } 00:20:01.769 } 00:20:01.769 ]' 00:20:01.769 16:28:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:01.769 16:28:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:01.769 16:28:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:01.769 16:28:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:01.769 16:28:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:01.769 16:28:28 nvmf_tcp.nvmf_auth_target 
-- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:01.769 16:28:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:01.769 16:28:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:02.032 16:28:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:YWU0YTBiZThiMmUxZGZmODhjMDczNjgyM2Y2Y2IyNmJkZTBkNmI1YjM2MjQyZjBkgXnm2Q==: --dhchap-ctrl-secret DHHC-1:03:OTdhMTRiNWUwZjQwM2M1MmE5ZTI5NjhkNDg1ZDM3OTc1MjA3MTI3ZDBjMmNjOWVmN2ZmNzJjMGFjMDYxNGVlZI1Bu8c=: 00:20:02.603 16:28:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:02.864 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:02.864 16:28:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:02.864 16:28:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:02.864 16:28:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:02.864 16:28:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:02.864 16:28:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:02.864 16:28:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:02.864 16:28:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:02.864 16:28:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 1 00:20:02.864 16:28:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:02.864 16:28:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:02.864 16:28:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:02.864 16:28:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:02.864 16:28:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:02.864 16:28:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:02.864 16:28:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:02.864 16:28:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:03.125 16:28:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:03.125 16:28:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:03.125 16:28:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:03.125 00:20:03.125 16:28:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 
00:20:03.125 16:28:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:03.125 16:28:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:03.386 16:28:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:03.386 16:28:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:03.386 16:28:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:03.386 16:28:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:03.386 16:28:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:03.386 16:28:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:03.386 { 00:20:03.386 "cntlid": 115, 00:20:03.386 "qid": 0, 00:20:03.386 "state": "enabled", 00:20:03.386 "listen_address": { 00:20:03.386 "trtype": "TCP", 00:20:03.386 "adrfam": "IPv4", 00:20:03.386 "traddr": "10.0.0.2", 00:20:03.386 "trsvcid": "4420" 00:20:03.386 }, 00:20:03.386 "peer_address": { 00:20:03.386 "trtype": "TCP", 00:20:03.386 "adrfam": "IPv4", 00:20:03.386 "traddr": "10.0.0.1", 00:20:03.386 "trsvcid": "40508" 00:20:03.386 }, 00:20:03.386 "auth": { 00:20:03.386 "state": "completed", 00:20:03.386 "digest": "sha512", 00:20:03.386 "dhgroup": "ffdhe3072" 00:20:03.386 } 00:20:03.386 } 00:20:03.386 ]' 00:20:03.386 16:28:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:03.386 16:28:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:03.386 16:28:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:03.386 16:28:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:03.386 16:28:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # 
jq -r '.[0].auth.state' 00:20:03.646 16:28:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:03.646 16:28:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:03.646 16:28:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:03.646 16:28:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:MTcwNmViMWRkN2Y0YjlhNjNkMTM4YzcxZWZlMWI2Nzn6I0Bv: --dhchap-ctrl-secret DHHC-1:02:YzIwYmI0NjFjZDQ0ZjE2M2MyZjdlMjYyZTY5ODMxY2QyYmQ2NzhjMTM1Y2I1Mjg1Hz6qOA==: 00:20:04.587 16:28:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:04.587 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:04.587 16:28:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:04.587 16:28:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:04.587 16:28:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:04.587 16:28:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:04.587 16:28:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:04.587 16:28:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:04.587 16:28:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:04.587 16:28:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 2 00:20:04.587 16:28:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:04.587 16:28:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:04.587 16:28:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:04.587 16:28:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:04.587 16:28:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:04.588 16:28:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:04.588 16:28:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:04.588 16:28:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:04.588 16:28:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:04.588 16:28:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:04.588 16:28:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:04.848 00:20:04.848 16:28:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc 
bdev_nvme_get_controllers 00:20:04.848 16:28:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:04.848 16:28:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:05.109 16:28:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:05.109 16:28:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:05.109 16:28:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:05.109 16:28:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:05.109 16:28:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:05.109 16:28:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:05.109 { 00:20:05.109 "cntlid": 117, 00:20:05.109 "qid": 0, 00:20:05.109 "state": "enabled", 00:20:05.109 "listen_address": { 00:20:05.109 "trtype": "TCP", 00:20:05.109 "adrfam": "IPv4", 00:20:05.109 "traddr": "10.0.0.2", 00:20:05.109 "trsvcid": "4420" 00:20:05.109 }, 00:20:05.109 "peer_address": { 00:20:05.109 "trtype": "TCP", 00:20:05.109 "adrfam": "IPv4", 00:20:05.109 "traddr": "10.0.0.1", 00:20:05.109 "trsvcid": "40526" 00:20:05.109 }, 00:20:05.109 "auth": { 00:20:05.109 "state": "completed", 00:20:05.109 "digest": "sha512", 00:20:05.109 "dhgroup": "ffdhe3072" 00:20:05.109 } 00:20:05.109 } 00:20:05.109 ]' 00:20:05.109 16:28:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:05.109 16:28:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:05.109 16:28:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:05.109 16:28:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:05.109 16:28:31 nvmf_tcp.nvmf_auth_target 
-- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:05.109 16:28:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:05.109 16:28:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:05.109 16:28:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:05.369 16:28:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:YWU2YWVkZGMwODI2MmU1YTFmNmYwYzAzMzg5YmJlOWY0NTgzNTkxYTkwOTg4NDEzo6OqJg==: --dhchap-ctrl-secret DHHC-1:01:ODNhNzljM2E2ZWZhNTY0MjBmODZkOTY2ODUxMDE0MjE3bkvD: 00:20:05.939 16:28:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:06.199 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:06.199 16:28:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:06.199 16:28:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:06.199 16:28:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:06.199 16:28:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:06.199 16:28:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:06.199 16:28:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:06.200 16:28:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:06.200 16:28:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 3 00:20:06.200 16:28:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:06.200 16:28:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:06.200 16:28:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:06.200 16:28:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:06.200 16:28:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:06.200 16:28:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:20:06.200 16:28:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:06.200 16:28:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:06.200 16:28:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:06.200 16:28:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:06.200 16:28:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:06.459 00:20:06.459 16:28:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:06.459 16:28:33 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:06.459 16:28:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:06.718 16:28:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:06.718 16:28:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:06.718 16:28:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:06.718 16:28:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:06.718 16:28:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:06.718 16:28:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:06.718 { 00:20:06.718 "cntlid": 119, 00:20:06.718 "qid": 0, 00:20:06.718 "state": "enabled", 00:20:06.718 "listen_address": { 00:20:06.718 "trtype": "TCP", 00:20:06.718 "adrfam": "IPv4", 00:20:06.718 "traddr": "10.0.0.2", 00:20:06.718 "trsvcid": "4420" 00:20:06.718 }, 00:20:06.718 "peer_address": { 00:20:06.718 "trtype": "TCP", 00:20:06.718 "adrfam": "IPv4", 00:20:06.718 "traddr": "10.0.0.1", 00:20:06.718 "trsvcid": "50292" 00:20:06.718 }, 00:20:06.718 "auth": { 00:20:06.718 "state": "completed", 00:20:06.718 "digest": "sha512", 00:20:06.718 "dhgroup": "ffdhe3072" 00:20:06.718 } 00:20:06.718 } 00:20:06.718 ]' 00:20:06.718 16:28:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:06.718 16:28:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:06.718 16:28:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:06.718 16:28:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:06.718 16:28:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r 
'.[0].auth.state' 00:20:06.718 16:28:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:06.718 16:28:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:06.718 16:28:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:06.977 16:28:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:YzVmNDBlOWI5YmVmNjEzMzY3OWMwODRmYTM1Y2U4OTYzZjExZDhmZTBmOWYwNzdkZjQyZDVjYTE4ZDgwZTg4MdbDEV4=: 00:20:07.917 16:28:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:07.917 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:07.917 16:28:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:07.917 16:28:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:07.917 16:28:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:07.917 16:28:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:07.917 16:28:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:07.917 16:28:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:07.917 16:28:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:07.917 16:28:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:07.917 16:28:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 0 00:20:07.917 16:28:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:07.917 16:28:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:07.917 16:28:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:07.917 16:28:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:07.917 16:28:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:07.917 16:28:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:07.917 16:28:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:07.917 16:28:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:07.917 16:28:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:07.917 16:28:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:07.917 16:28:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:08.178 
00:20:08.178 16:28:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:08.178 16:28:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:08.178 16:28:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:08.439 16:28:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:08.439 16:28:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:08.439 16:28:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:08.439 16:28:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:08.439 16:28:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:08.439 16:28:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:08.439 { 00:20:08.439 "cntlid": 121, 00:20:08.439 "qid": 0, 00:20:08.439 "state": "enabled", 00:20:08.439 "listen_address": { 00:20:08.439 "trtype": "TCP", 00:20:08.439 "adrfam": "IPv4", 00:20:08.439 "traddr": "10.0.0.2", 00:20:08.439 "trsvcid": "4420" 00:20:08.439 }, 00:20:08.439 "peer_address": { 00:20:08.439 "trtype": "TCP", 00:20:08.439 "adrfam": "IPv4", 00:20:08.439 "traddr": "10.0.0.1", 00:20:08.439 "trsvcid": "50318" 00:20:08.439 }, 00:20:08.439 "auth": { 00:20:08.439 "state": "completed", 00:20:08.439 "digest": "sha512", 00:20:08.439 "dhgroup": "ffdhe4096" 00:20:08.439 } 00:20:08.439 } 00:20:08.439 ]' 00:20:08.439 16:28:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:08.439 16:28:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:08.439 16:28:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:08.439 16:28:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ 
ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:08.439 16:28:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:08.439 16:28:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:08.439 16:28:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:08.439 16:28:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:08.699 16:28:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:YWU0YTBiZThiMmUxZGZmODhjMDczNjgyM2Y2Y2IyNmJkZTBkNmI1YjM2MjQyZjBkgXnm2Q==: --dhchap-ctrl-secret DHHC-1:03:OTdhMTRiNWUwZjQwM2M1MmE5ZTI5NjhkNDg1ZDM3OTc1MjA3MTI3ZDBjMmNjOWVmN2ZmNzJjMGFjMDYxNGVlZI1Bu8c=: 00:20:09.639 16:28:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:09.639 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:09.639 16:28:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:09.639 16:28:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:09.639 16:28:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:09.639 16:28:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:09.639 16:28:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:09.639 16:28:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:09.639 
16:28:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:09.639 16:28:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 1 00:20:09.639 16:28:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:09.639 16:28:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:09.639 16:28:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:09.639 16:28:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:09.639 16:28:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:09.639 16:28:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:09.639 16:28:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:09.639 16:28:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:09.639 16:28:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:09.639 16:28:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:09.640 16:28:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:09.899 00:20:09.899 16:28:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:09.899 16:28:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:09.899 16:28:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:09.899 16:28:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:09.899 16:28:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:09.899 16:28:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:09.899 16:28:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:10.160 16:28:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:10.160 16:28:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:10.160 { 00:20:10.160 "cntlid": 123, 00:20:10.160 "qid": 0, 00:20:10.160 "state": "enabled", 00:20:10.160 "listen_address": { 00:20:10.160 "trtype": "TCP", 00:20:10.160 "adrfam": "IPv4", 00:20:10.160 "traddr": "10.0.0.2", 00:20:10.160 "trsvcid": "4420" 00:20:10.160 }, 00:20:10.160 "peer_address": { 00:20:10.160 "trtype": "TCP", 00:20:10.160 "adrfam": "IPv4", 00:20:10.160 "traddr": "10.0.0.1", 00:20:10.160 "trsvcid": "50350" 00:20:10.160 }, 00:20:10.160 "auth": { 00:20:10.160 "state": "completed", 00:20:10.160 "digest": "sha512", 00:20:10.160 "dhgroup": "ffdhe4096" 00:20:10.160 } 00:20:10.160 } 00:20:10.160 ]' 00:20:10.160 16:28:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:10.160 16:28:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:10.160 16:28:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r 
'.[0].auth.dhgroup' 00:20:10.160 16:28:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:10.160 16:28:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:10.160 16:28:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:10.160 16:28:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:10.160 16:28:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:10.421 16:28:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:MTcwNmViMWRkN2Y0YjlhNjNkMTM4YzcxZWZlMWI2Nzn6I0Bv: --dhchap-ctrl-secret DHHC-1:02:YzIwYmI0NjFjZDQ0ZjE2M2MyZjdlMjYyZTY5ODMxY2QyYmQ2NzhjMTM1Y2I1Mjg1Hz6qOA==: 00:20:10.993 16:28:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:10.993 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:10.993 16:28:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:10.993 16:28:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:10.993 16:28:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:10.993 16:28:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:10.993 16:28:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:10.993 16:28:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:10.993 16:28:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:11.253 16:28:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 2 00:20:11.253 16:28:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:11.253 16:28:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:11.253 16:28:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:11.253 16:28:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:11.253 16:28:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:11.253 16:28:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:11.253 16:28:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:11.253 16:28:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:11.253 16:28:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:11.253 16:28:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:11.253 16:28:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:11.514 00:20:11.514 16:28:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:11.514 16:28:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:11.514 16:28:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:11.774 16:28:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:11.774 16:28:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:11.774 16:28:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:11.774 16:28:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:11.774 16:28:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:11.774 16:28:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:11.774 { 00:20:11.774 "cntlid": 125, 00:20:11.774 "qid": 0, 00:20:11.774 "state": "enabled", 00:20:11.775 "listen_address": { 00:20:11.775 "trtype": "TCP", 00:20:11.775 "adrfam": "IPv4", 00:20:11.775 "traddr": "10.0.0.2", 00:20:11.775 "trsvcid": "4420" 00:20:11.775 }, 00:20:11.775 "peer_address": { 00:20:11.775 "trtype": "TCP", 00:20:11.775 "adrfam": "IPv4", 00:20:11.775 "traddr": "10.0.0.1", 00:20:11.775 "trsvcid": "50380" 00:20:11.775 }, 00:20:11.775 "auth": { 00:20:11.775 "state": "completed", 00:20:11.775 "digest": "sha512", 00:20:11.775 "dhgroup": "ffdhe4096" 00:20:11.775 } 00:20:11.775 } 00:20:11.775 ]' 00:20:11.775 16:28:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:11.775 16:28:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:11.775 16:28:38 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:11.775 16:28:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:11.775 16:28:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:11.775 16:28:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:11.775 16:28:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:11.775 16:28:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:12.035 16:28:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:YWU2YWVkZGMwODI2MmU1YTFmNmYwYzAzMzg5YmJlOWY0NTgzNTkxYTkwOTg4NDEzo6OqJg==: --dhchap-ctrl-secret DHHC-1:01:ODNhNzljM2E2ZWZhNTY0MjBmODZkOTY2ODUxMDE0MjE3bkvD: 00:20:12.977 16:28:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:12.977 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:12.977 16:28:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:12.977 16:28:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:12.977 16:28:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:12.977 16:28:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:12.977 16:28:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:12.977 16:28:39 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:12.977 16:28:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:12.977 16:28:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 3 00:20:12.977 16:28:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:12.977 16:28:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:12.977 16:28:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:12.977 16:28:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:12.977 16:28:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:12.977 16:28:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:20:12.977 16:28:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:12.977 16:28:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:12.977 16:28:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:12.977 16:28:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:12.977 16:28:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:13.239 00:20:13.239 16:28:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:13.239 16:28:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:13.239 16:28:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:13.500 16:28:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:13.500 16:28:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:13.500 16:28:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:13.500 16:28:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:13.500 16:28:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:13.500 16:28:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:13.500 { 00:20:13.500 "cntlid": 127, 00:20:13.500 "qid": 0, 00:20:13.500 "state": "enabled", 00:20:13.500 "listen_address": { 00:20:13.500 "trtype": "TCP", 00:20:13.500 "adrfam": "IPv4", 00:20:13.500 "traddr": "10.0.0.2", 00:20:13.500 "trsvcid": "4420" 00:20:13.500 }, 00:20:13.500 "peer_address": { 00:20:13.500 "trtype": "TCP", 00:20:13.500 "adrfam": "IPv4", 00:20:13.500 "traddr": "10.0.0.1", 00:20:13.500 "trsvcid": "50404" 00:20:13.500 }, 00:20:13.500 "auth": { 00:20:13.500 "state": "completed", 00:20:13.500 "digest": "sha512", 00:20:13.500 "dhgroup": "ffdhe4096" 00:20:13.500 } 00:20:13.500 } 00:20:13.500 ]' 00:20:13.500 16:28:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:13.500 16:28:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:13.500 16:28:40 nvmf_tcp.nvmf_auth_target 
-- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:13.500 16:28:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:13.500 16:28:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:13.500 16:28:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:13.500 16:28:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:13.500 16:28:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:13.761 16:28:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:YzVmNDBlOWI5YmVmNjEzMzY3OWMwODRmYTM1Y2U4OTYzZjExZDhmZTBmOWYwNzdkZjQyZDVjYTE4ZDgwZTg4MdbDEV4=: 00:20:14.350 16:28:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:14.350 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:14.350 16:28:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:14.350 16:28:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:14.350 16:28:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:14.350 16:28:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:14.350 16:28:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:14.350 16:28:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:14.350 16:28:41 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:14.350 16:28:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:14.611 16:28:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 0 00:20:14.611 16:28:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:14.611 16:28:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:14.611 16:28:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:14.611 16:28:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:14.611 16:28:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:14.611 16:28:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:14.611 16:28:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:14.611 16:28:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:14.611 16:28:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:14.611 16:28:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:14.611 16:28:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:14.872 00:20:14.872 16:28:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:14.872 16:28:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:14.872 16:28:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:15.133 16:28:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:15.133 16:28:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:15.133 16:28:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:15.133 16:28:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:15.133 16:28:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:15.133 16:28:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:15.133 { 00:20:15.133 "cntlid": 129, 00:20:15.133 "qid": 0, 00:20:15.133 "state": "enabled", 00:20:15.133 "listen_address": { 00:20:15.133 "trtype": "TCP", 00:20:15.133 "adrfam": "IPv4", 00:20:15.133 "traddr": "10.0.0.2", 00:20:15.133 "trsvcid": "4420" 00:20:15.133 }, 00:20:15.133 "peer_address": { 00:20:15.133 "trtype": "TCP", 00:20:15.133 "adrfam": "IPv4", 00:20:15.133 "traddr": "10.0.0.1", 00:20:15.133 "trsvcid": "50432" 00:20:15.133 }, 00:20:15.133 "auth": { 00:20:15.133 "state": "completed", 00:20:15.133 "digest": "sha512", 00:20:15.133 "dhgroup": "ffdhe6144" 00:20:15.133 } 00:20:15.133 } 00:20:15.133 ]' 00:20:15.133 16:28:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:15.133 16:28:41 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:15.133 16:28:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:15.133 16:28:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:15.133 16:28:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:15.394 16:28:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:15.394 16:28:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:15.394 16:28:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:15.394 16:28:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:YWU0YTBiZThiMmUxZGZmODhjMDczNjgyM2Y2Y2IyNmJkZTBkNmI1YjM2MjQyZjBkgXnm2Q==: --dhchap-ctrl-secret DHHC-1:03:OTdhMTRiNWUwZjQwM2M1MmE5ZTI5NjhkNDg1ZDM3OTc1MjA3MTI3ZDBjMmNjOWVmN2ZmNzJjMGFjMDYxNGVlZI1Bu8c=: 00:20:16.337 16:28:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:16.337 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:16.337 16:28:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:16.337 16:28:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:16.337 16:28:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:16.337 16:28:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:16.337 16:28:42 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:16.337 16:28:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:16.337 16:28:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:16.337 16:28:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 1 00:20:16.337 16:28:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:16.337 16:28:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:16.337 16:28:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:16.337 16:28:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:16.337 16:28:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:16.337 16:28:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:16.337 16:28:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:16.337 16:28:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:16.337 16:28:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:16.337 16:28:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:16.337 16:28:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:16.611 00:20:16.611 16:28:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:16.611 16:28:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:16.611 16:28:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:16.909 16:28:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:16.909 16:28:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:16.909 16:28:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:16.909 16:28:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:16.909 16:28:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:16.909 16:28:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:16.909 { 00:20:16.909 "cntlid": 131, 00:20:16.909 "qid": 0, 00:20:16.909 "state": "enabled", 00:20:16.909 "listen_address": { 00:20:16.909 "trtype": "TCP", 00:20:16.909 "adrfam": "IPv4", 00:20:16.909 "traddr": "10.0.0.2", 00:20:16.909 "trsvcid": "4420" 00:20:16.909 }, 00:20:16.909 "peer_address": { 00:20:16.909 "trtype": "TCP", 00:20:16.909 "adrfam": "IPv4", 00:20:16.909 "traddr": "10.0.0.1", 00:20:16.909 "trsvcid": "52856" 00:20:16.909 }, 00:20:16.909 "auth": { 00:20:16.909 "state": "completed", 00:20:16.909 "digest": "sha512", 00:20:16.909 "dhgroup": "ffdhe6144" 00:20:16.909 } 00:20:16.909 } 00:20:16.909 ]' 00:20:16.909 16:28:43 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:16.909 16:28:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:16.909 16:28:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:16.909 16:28:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:16.909 16:28:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:17.174 16:28:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:17.174 16:28:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:17.174 16:28:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:17.174 16:28:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:MTcwNmViMWRkN2Y0YjlhNjNkMTM4YzcxZWZlMWI2Nzn6I0Bv: --dhchap-ctrl-secret DHHC-1:02:YzIwYmI0NjFjZDQ0ZjE2M2MyZjdlMjYyZTY5ODMxY2QyYmQ2NzhjMTM1Y2I1Mjg1Hz6qOA==: 00:20:18.116 16:28:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:18.116 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:18.116 16:28:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:18.116 16:28:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:18.116 16:28:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:18.116 16:28:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- 
# [[ 0 == 0 ]] 00:20:18.116 16:28:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:18.116 16:28:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:18.116 16:28:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:18.116 16:28:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 2 00:20:18.117 16:28:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:18.117 16:28:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:18.117 16:28:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:18.117 16:28:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:18.117 16:28:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:18.117 16:28:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:18.117 16:28:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:18.117 16:28:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:18.117 16:28:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:18.117 16:28:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:18.117 16:28:44 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:18.377 00:20:18.377 16:28:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:18.377 16:28:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:18.377 16:28:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:18.638 16:28:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:18.638 16:28:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:18.638 16:28:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:18.638 16:28:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:18.638 16:28:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:18.638 16:28:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:18.638 { 00:20:18.638 "cntlid": 133, 00:20:18.638 "qid": 0, 00:20:18.638 "state": "enabled", 00:20:18.638 "listen_address": { 00:20:18.638 "trtype": "TCP", 00:20:18.638 "adrfam": "IPv4", 00:20:18.638 "traddr": "10.0.0.2", 00:20:18.638 "trsvcid": "4420" 00:20:18.638 }, 00:20:18.638 "peer_address": { 00:20:18.638 "trtype": "TCP", 00:20:18.638 "adrfam": "IPv4", 00:20:18.638 "traddr": "10.0.0.1", 00:20:18.638 "trsvcid": "52884" 00:20:18.638 }, 00:20:18.638 "auth": { 00:20:18.638 "state": "completed", 00:20:18.638 "digest": "sha512", 00:20:18.638 "dhgroup": "ffdhe6144" 00:20:18.638 } 00:20:18.638 } 00:20:18.638 ]' 
00:20:18.638 16:28:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:18.638 16:28:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:18.638 16:28:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:18.638 16:28:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:18.638 16:28:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:18.899 16:28:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:18.899 16:28:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:18.899 16:28:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:18.899 16:28:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:YWU2YWVkZGMwODI2MmU1YTFmNmYwYzAzMzg5YmJlOWY0NTgzNTkxYTkwOTg4NDEzo6OqJg==: --dhchap-ctrl-secret DHHC-1:01:ODNhNzljM2E2ZWZhNTY0MjBmODZkOTY2ODUxMDE0MjE3bkvD: 00:20:19.841 16:28:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:19.841 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:19.841 16:28:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:19.841 16:28:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:19.841 16:28:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:19.841 16:28:46 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:19.841 16:28:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:19.841 16:28:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:19.841 16:28:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:19.841 16:28:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 3 00:20:19.841 16:28:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:19.841 16:28:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:19.841 16:28:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:19.841 16:28:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:19.841 16:28:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:19.841 16:28:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:20:19.841 16:28:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:19.841 16:28:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:19.841 16:28:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:19.841 16:28:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:19.841 16:28:46 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:20.441 00:20:20.441 16:28:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:20.441 16:28:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:20.441 16:28:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:20.441 16:28:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:20.441 16:28:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:20.441 16:28:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:20.441 16:28:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:20.441 16:28:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:20.441 16:28:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:20.441 { 00:20:20.441 "cntlid": 135, 00:20:20.441 "qid": 0, 00:20:20.441 "state": "enabled", 00:20:20.441 "listen_address": { 00:20:20.441 "trtype": "TCP", 00:20:20.441 "adrfam": "IPv4", 00:20:20.441 "traddr": "10.0.0.2", 00:20:20.441 "trsvcid": "4420" 00:20:20.441 }, 00:20:20.441 "peer_address": { 00:20:20.441 "trtype": "TCP", 00:20:20.441 "adrfam": "IPv4", 00:20:20.441 "traddr": "10.0.0.1", 00:20:20.441 "trsvcid": "52908" 00:20:20.441 }, 00:20:20.442 "auth": { 00:20:20.442 "state": "completed", 00:20:20.442 "digest": "sha512", 00:20:20.442 "dhgroup": "ffdhe6144" 00:20:20.442 } 00:20:20.442 } 00:20:20.442 ]' 00:20:20.442 16:28:47 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:20.442 16:28:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:20.442 16:28:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:20.442 16:28:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:20.442 16:28:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:20.442 16:28:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:20.442 16:28:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:20.442 16:28:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:20.702 16:28:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:YzVmNDBlOWI5YmVmNjEzMzY3OWMwODRmYTM1Y2U4OTYzZjExZDhmZTBmOWYwNzdkZjQyZDVjYTE4ZDgwZTg4MdbDEV4=: 00:20:21.274 16:28:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:21.535 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:21.535 16:28:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:21.535 16:28:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:21.535 16:28:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:21.535 16:28:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:21.535 
16:28:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:21.535 16:28:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:21.535 16:28:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:21.535 16:28:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:21.535 16:28:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 0 00:20:21.535 16:28:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:21.535 16:28:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:21.535 16:28:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:21.535 16:28:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:21.535 16:28:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:21.535 16:28:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:21.535 16:28:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:21.535 16:28:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:21.535 16:28:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:21.535 16:28:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:21.535 16:28:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:22.106 00:20:22.106 16:28:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:22.106 16:28:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:22.106 16:28:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:22.366 16:28:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:22.366 16:28:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:22.366 16:28:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:22.366 16:28:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:22.366 16:28:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:22.366 16:28:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:22.366 { 00:20:22.366 "cntlid": 137, 00:20:22.366 "qid": 0, 00:20:22.366 "state": "enabled", 00:20:22.366 "listen_address": { 00:20:22.366 "trtype": "TCP", 00:20:22.366 "adrfam": "IPv4", 00:20:22.366 "traddr": "10.0.0.2", 00:20:22.366 "trsvcid": "4420" 00:20:22.366 }, 00:20:22.366 "peer_address": { 00:20:22.366 "trtype": "TCP", 00:20:22.366 "adrfam": "IPv4", 00:20:22.366 "traddr": "10.0.0.1", 00:20:22.366 "trsvcid": "52928" 00:20:22.366 }, 00:20:22.366 "auth": { 00:20:22.366 "state": "completed", 00:20:22.366 "digest": "sha512", 00:20:22.366 "dhgroup": 
"ffdhe8192" 00:20:22.366 } 00:20:22.366 } 00:20:22.366 ]' 00:20:22.366 16:28:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:22.366 16:28:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:22.366 16:28:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:22.366 16:28:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:22.366 16:28:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:22.366 16:28:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:22.366 16:28:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:22.366 16:28:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:22.628 16:28:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:YWU0YTBiZThiMmUxZGZmODhjMDczNjgyM2Y2Y2IyNmJkZTBkNmI1YjM2MjQyZjBkgXnm2Q==: --dhchap-ctrl-secret DHHC-1:03:OTdhMTRiNWUwZjQwM2M1MmE5ZTI5NjhkNDg1ZDM3OTc1MjA3MTI3ZDBjMmNjOWVmN2ZmNzJjMGFjMDYxNGVlZI1Bu8c=: 00:20:23.200 16:28:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:23.461 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:23.461 16:28:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:23.461 16:28:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:23.461 16:28:50 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:23.461 16:28:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:23.461 16:28:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:23.461 16:28:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:23.461 16:28:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:23.461 16:28:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 1 00:20:23.461 16:28:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:23.461 16:28:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:23.461 16:28:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:23.461 16:28:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:23.461 16:28:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:23.461 16:28:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:23.461 16:28:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:23.461 16:28:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:23.461 16:28:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:23.461 16:28:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:23.461 16:28:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:24.033 00:20:24.033 16:28:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:24.033 16:28:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:24.033 16:28:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:24.294 16:28:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:24.294 16:28:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:24.294 16:28:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:24.294 16:28:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:24.294 16:28:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:24.294 16:28:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:24.294 { 00:20:24.294 "cntlid": 139, 00:20:24.294 "qid": 0, 00:20:24.294 "state": "enabled", 00:20:24.294 "listen_address": { 00:20:24.294 "trtype": "TCP", 00:20:24.294 "adrfam": "IPv4", 00:20:24.294 "traddr": "10.0.0.2", 00:20:24.294 "trsvcid": "4420" 00:20:24.294 }, 00:20:24.294 "peer_address": { 00:20:24.294 "trtype": "TCP", 00:20:24.294 "adrfam": "IPv4", 00:20:24.294 "traddr": "10.0.0.1", 00:20:24.294 "trsvcid": "52946" 00:20:24.294 }, 00:20:24.294 
"auth": { 00:20:24.294 "state": "completed", 00:20:24.294 "digest": "sha512", 00:20:24.294 "dhgroup": "ffdhe8192" 00:20:24.294 } 00:20:24.294 } 00:20:24.294 ]' 00:20:24.294 16:28:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:24.294 16:28:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:24.294 16:28:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:24.294 16:28:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:24.294 16:28:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:24.294 16:28:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:24.294 16:28:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:24.294 16:28:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:24.555 16:28:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:MTcwNmViMWRkN2Y0YjlhNjNkMTM4YzcxZWZlMWI2Nzn6I0Bv: --dhchap-ctrl-secret DHHC-1:02:YzIwYmI0NjFjZDQ0ZjE2M2MyZjdlMjYyZTY5ODMxY2QyYmQ2NzhjMTM1Y2I1Mjg1Hz6qOA==: 00:20:25.126 16:28:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:25.387 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:25.387 16:28:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:25.387 16:28:51 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@560 -- # xtrace_disable 00:20:25.387 16:28:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:25.387 16:28:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:25.387 16:28:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:25.387 16:28:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:25.387 16:28:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:25.387 16:28:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 2 00:20:25.387 16:28:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:25.387 16:28:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:25.387 16:28:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:25.387 16:28:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:25.387 16:28:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:25.387 16:28:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:25.387 16:28:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:25.387 16:28:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:25.387 16:28:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:25.387 16:28:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 
-t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:25.387 16:28:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:25.959 00:20:25.959 16:28:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:25.959 16:28:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:25.959 16:28:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:26.220 16:28:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:26.220 16:28:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:26.220 16:28:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:26.220 16:28:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:26.220 16:28:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:26.220 16:28:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:26.220 { 00:20:26.220 "cntlid": 141, 00:20:26.220 "qid": 0, 00:20:26.220 "state": "enabled", 00:20:26.220 "listen_address": { 00:20:26.220 "trtype": "TCP", 00:20:26.220 "adrfam": "IPv4", 00:20:26.220 "traddr": "10.0.0.2", 00:20:26.220 "trsvcid": "4420" 00:20:26.220 }, 00:20:26.220 "peer_address": { 00:20:26.220 "trtype": "TCP", 00:20:26.220 "adrfam": "IPv4", 00:20:26.220 "traddr": "10.0.0.1", 00:20:26.220 "trsvcid": 
"52972" 00:20:26.220 }, 00:20:26.220 "auth": { 00:20:26.220 "state": "completed", 00:20:26.220 "digest": "sha512", 00:20:26.220 "dhgroup": "ffdhe8192" 00:20:26.220 } 00:20:26.220 } 00:20:26.220 ]' 00:20:26.220 16:28:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:26.220 16:28:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:26.220 16:28:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:26.220 16:28:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:26.220 16:28:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:26.220 16:28:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:26.220 16:28:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:26.220 16:28:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:26.481 16:28:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:YWU2YWVkZGMwODI2MmU1YTFmNmYwYzAzMzg5YmJlOWY0NTgzNTkxYTkwOTg4NDEzo6OqJg==: --dhchap-ctrl-secret DHHC-1:01:ODNhNzljM2E2ZWZhNTY0MjBmODZkOTY2ODUxMDE0MjE3bkvD: 00:20:27.052 16:28:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:27.052 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:27.052 16:28:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:27.052 16:28:53 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:27.052 16:28:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:27.052 16:28:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:27.052 16:28:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:27.052 16:28:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:27.052 16:28:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:27.313 16:28:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 3 00:20:27.313 16:28:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:27.313 16:28:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:27.313 16:28:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:27.313 16:28:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:27.313 16:28:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:27.313 16:28:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:20:27.313 16:28:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:27.313 16:28:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:27.313 16:28:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:27.313 16:28:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b 
nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:27.313 16:28:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:27.884 00:20:27.884 16:28:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:27.884 16:28:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:27.884 16:28:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:27.884 16:28:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:27.884 16:28:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:27.884 16:28:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:27.884 16:28:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:27.884 16:28:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:27.884 16:28:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:27.884 { 00:20:27.884 "cntlid": 143, 00:20:27.884 "qid": 0, 00:20:27.884 "state": "enabled", 00:20:27.884 "listen_address": { 00:20:27.884 "trtype": "TCP", 00:20:27.885 "adrfam": "IPv4", 00:20:27.885 "traddr": "10.0.0.2", 00:20:27.885 "trsvcid": "4420" 00:20:27.885 }, 00:20:27.885 "peer_address": { 00:20:27.885 "trtype": "TCP", 00:20:27.885 "adrfam": "IPv4", 00:20:27.885 "traddr": "10.0.0.1", 00:20:27.885 "trsvcid": "33096" 00:20:27.885 }, 00:20:27.885 "auth": { 
00:20:27.885 "state": "completed", 00:20:27.885 "digest": "sha512", 00:20:27.885 "dhgroup": "ffdhe8192" 00:20:27.885 } 00:20:27.885 } 00:20:27.885 ]' 00:20:27.885 16:28:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:28.145 16:28:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:28.145 16:28:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:28.145 16:28:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:28.145 16:28:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:28.145 16:28:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:28.145 16:28:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:28.145 16:28:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:28.406 16:28:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:YzVmNDBlOWI5YmVmNjEzMzY3OWMwODRmYTM1Y2U4OTYzZjExZDhmZTBmOWYwNzdkZjQyZDVjYTE4ZDgwZTg4MdbDEV4=: 00:20:28.978 16:28:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:28.978 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:28.978 16:28:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:28.978 16:28:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:28.978 16:28:55 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:28.978 16:28:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:28.978 16:28:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:20:28.978 16:28:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s sha256,sha384,sha512 00:20:28.978 16:28:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:20:28.978 16:28:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:28.978 16:28:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:28.978 16:28:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:29.239 16:28:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@114 -- # connect_authenticate sha512 ffdhe8192 0 00:20:29.239 16:28:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:29.239 16:28:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:29.239 16:28:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:29.239 16:28:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:29.239 16:28:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:29.239 16:28:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:29.239 16:28:55 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@560 -- # xtrace_disable 00:20:29.239 16:28:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:29.239 16:28:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:29.239 16:28:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:29.239 16:28:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:29.810 00:20:29.810 16:28:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:29.810 16:28:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:29.810 16:28:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:29.811 16:28:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:29.811 16:28:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:29.811 16:28:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:29.811 16:28:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:29.811 16:28:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:29.811 16:28:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:29.811 { 00:20:29.811 "cntlid": 145, 00:20:29.811 "qid": 0, 
00:20:29.811 "state": "enabled", 00:20:29.811 "listen_address": { 00:20:29.811 "trtype": "TCP", 00:20:29.811 "adrfam": "IPv4", 00:20:29.811 "traddr": "10.0.0.2", 00:20:29.811 "trsvcid": "4420" 00:20:29.811 }, 00:20:29.811 "peer_address": { 00:20:29.811 "trtype": "TCP", 00:20:29.811 "adrfam": "IPv4", 00:20:29.811 "traddr": "10.0.0.1", 00:20:29.811 "trsvcid": "33128" 00:20:29.811 }, 00:20:29.811 "auth": { 00:20:29.811 "state": "completed", 00:20:29.811 "digest": "sha512", 00:20:29.811 "dhgroup": "ffdhe8192" 00:20:29.811 } 00:20:29.811 } 00:20:29.811 ]' 00:20:29.811 16:28:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:30.071 16:28:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:30.071 16:28:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:30.071 16:28:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:30.071 16:28:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:30.071 16:28:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:30.071 16:28:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:30.071 16:28:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:30.332 16:28:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:YWU0YTBiZThiMmUxZGZmODhjMDczNjgyM2Y2Y2IyNmJkZTBkNmI1YjM2MjQyZjBkgXnm2Q==: --dhchap-ctrl-secret DHHC-1:03:OTdhMTRiNWUwZjQwM2M1MmE5ZTI5NjhkNDg1ZDM3OTc1MjA3MTI3ZDBjMmNjOWVmN2ZmNzJjMGFjMDYxNGVlZI1Bu8c=: 00:20:30.903 16:28:57 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:30.903 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:30.903 16:28:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:30.903 16:28:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:30.903 16:28:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:30.903 16:28:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:30.903 16:28:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 00:20:30.903 16:28:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:30.903 16:28:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:30.903 16:28:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:30.903 16:28:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@118 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:20:30.903 16:28:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@649 -- # local es=0 00:20:30.903 16:28:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:20:30.903 16:28:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@637 -- # local arg=hostrpc 00:20:30.903 16:28:57 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:20:30.903 16:28:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # type -t hostrpc 00:20:30.903 16:28:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:20:30.903 16:28:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@652 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:20:30.903 16:28:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:20:31.475 request: 00:20:31.475 { 00:20:31.475 "name": "nvme0", 00:20:31.475 "trtype": "tcp", 00:20:31.475 "traddr": "10.0.0.2", 00:20:31.475 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:31.475 "adrfam": "ipv4", 00:20:31.475 "trsvcid": "4420", 00:20:31.475 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:20:31.475 "dhchap_key": "key2", 00:20:31.475 "method": "bdev_nvme_attach_controller", 00:20:31.475 "req_id": 1 00:20:31.475 } 00:20:31.475 Got JSON-RPC error response 00:20:31.475 response: 00:20:31.475 { 00:20:31.475 "code": -5, 00:20:31.475 "message": "Input/output error" 00:20:31.475 } 00:20:31.475 16:28:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@652 -- # es=1 00:20:31.475 16:28:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:20:31.475 16:28:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:20:31.475 16:28:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:20:31.475 16:28:58 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@121 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:31.475 16:28:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:31.475 16:28:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:31.475 16:28:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:31.475 16:28:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@124 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:31.475 16:28:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:31.475 16:28:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:31.475 16:28:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:31.475 16:28:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@125 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:31.475 16:28:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@649 -- # local es=0 00:20:31.475 16:28:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:31.475 16:28:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@637 -- # local arg=hostrpc 00:20:31.475 16:28:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:20:31.475 16:28:58 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # type -t hostrpc 00:20:31.475 16:28:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:20:31.475 16:28:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@652 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:31.475 16:28:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:32.087 request: 00:20:32.087 { 00:20:32.087 "name": "nvme0", 00:20:32.087 "trtype": "tcp", 00:20:32.087 "traddr": "10.0.0.2", 00:20:32.087 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:32.087 "adrfam": "ipv4", 00:20:32.087 "trsvcid": "4420", 00:20:32.087 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:20:32.087 "dhchap_key": "key1", 00:20:32.087 "dhchap_ctrlr_key": "ckey2", 00:20:32.087 "method": "bdev_nvme_attach_controller", 00:20:32.087 "req_id": 1 00:20:32.087 } 00:20:32.087 Got JSON-RPC error response 00:20:32.087 response: 00:20:32.087 { 00:20:32.087 "code": -5, 00:20:32.087 "message": "Input/output error" 00:20:32.087 } 00:20:32.087 16:28:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@652 -- # es=1 00:20:32.087 16:28:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:20:32.087 16:28:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:20:32.087 16:28:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:20:32.087 16:28:58 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@128 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:32.087 16:28:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:32.087 16:28:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:32.087 16:28:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:32.087 16:28:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@131 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 00:20:32.087 16:28:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:32.087 16:28:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:32.087 16:28:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:32.087 16:28:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@132 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:32.087 16:28:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@649 -- # local es=0 00:20:32.087 16:28:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:32.087 16:28:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@637 -- # local arg=hostrpc 00:20:32.087 16:28:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:20:32.087 16:28:58 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@641 -- # type -t hostrpc 00:20:32.087 16:28:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:20:32.087 16:28:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@652 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:32.087 16:28:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:32.348 request: 00:20:32.348 { 00:20:32.348 "name": "nvme0", 00:20:32.348 "trtype": "tcp", 00:20:32.348 "traddr": "10.0.0.2", 00:20:32.348 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:32.348 "adrfam": "ipv4", 00:20:32.348 "trsvcid": "4420", 00:20:32.348 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:20:32.348 "dhchap_key": "key1", 00:20:32.348 "dhchap_ctrlr_key": "ckey1", 00:20:32.348 "method": "bdev_nvme_attach_controller", 00:20:32.348 "req_id": 1 00:20:32.348 } 00:20:32.348 Got JSON-RPC error response 00:20:32.348 response: 00:20:32.348 { 00:20:32.348 "code": -5, 00:20:32.348 "message": "Input/output error" 00:20:32.348 } 00:20:32.348 16:28:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@652 -- # es=1 00:20:32.348 16:28:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:20:32.348 16:28:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:20:32.348 16:28:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:20:32.348 16:28:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@135 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:32.348 16:28:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:32.348 16:28:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:32.348 16:28:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:32.348 16:28:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@138 -- # killprocess 3093840 00:20:32.348 16:28:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@949 -- # '[' -z 3093840 ']' 00:20:32.348 16:28:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # kill -0 3093840 00:20:32.348 16:28:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # uname 00:20:32.609 16:28:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:20:32.609 16:28:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3093840 00:20:32.609 16:28:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:20:32.609 16:28:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:20:32.609 16:28:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3093840' 00:20:32.609 killing process with pid 3093840 00:20:32.609 16:28:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@968 -- # kill 3093840 00:20:32.609 16:28:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@973 -- # wait 3093840 00:20:32.609 16:28:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@139 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:20:32.609 16:28:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:32.609 16:28:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@723 -- # xtrace_disable 00:20:32.609 16:28:59 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:32.609 16:28:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:20:32.609 16:28:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=3120473 00:20:32.609 16:28:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 3120473 00:20:32.609 16:28:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@830 -- # '[' -z 3120473 ']' 00:20:32.609 16:28:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:32.609 16:28:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@835 -- # local max_retries=100 00:20:32.609 16:28:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:32.609 16:28:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@839 -- # xtrace_disable 00:20:32.609 16:28:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:33.552 16:29:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:20:33.552 16:29:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@863 -- # return 0 00:20:33.552 16:29:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:33.552 16:29:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@729 -- # xtrace_disable 00:20:33.552 16:29:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:33.552 16:29:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:33.552 16:29:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@140 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:20:33.552 16:29:00 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@142 -- # waitforlisten 3120473 00:20:33.552 16:29:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@830 -- # '[' -z 3120473 ']' 00:20:33.552 16:29:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:33.552 16:29:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@835 -- # local max_retries=100 00:20:33.552 16:29:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:33.552 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:33.552 16:29:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@839 -- # xtrace_disable 00:20:33.552 16:29:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:33.552 16:29:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:20:33.552 16:29:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@863 -- # return 0 00:20:33.552 16:29:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@143 -- # rpc_cmd 00:20:33.552 16:29:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:33.552 16:29:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:33.813 16:29:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:33.813 16:29:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@153 -- # connect_authenticate sha512 ffdhe8192 3 00:20:33.813 16:29:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:33.813 16:29:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:33.813 16:29:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:33.813 16:29:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:33.813 16:29:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:33.813 16:29:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:20:33.813 16:29:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:33.813 16:29:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:33.813 16:29:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:33.813 16:29:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:33.813 16:29:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:34.384 00:20:34.384 16:29:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:34.385 16:29:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:34.385 16:29:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:34.385 16:29:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:34.385 16:29:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:34.385 16:29:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:34.385 16:29:01 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:20:34.385 16:29:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:34.385 16:29:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:34.385 { 00:20:34.385 "cntlid": 1, 00:20:34.385 "qid": 0, 00:20:34.385 "state": "enabled", 00:20:34.385 "listen_address": { 00:20:34.385 "trtype": "TCP", 00:20:34.385 "adrfam": "IPv4", 00:20:34.385 "traddr": "10.0.0.2", 00:20:34.385 "trsvcid": "4420" 00:20:34.385 }, 00:20:34.385 "peer_address": { 00:20:34.385 "trtype": "TCP", 00:20:34.385 "adrfam": "IPv4", 00:20:34.385 "traddr": "10.0.0.1", 00:20:34.385 "trsvcid": "33192" 00:20:34.385 }, 00:20:34.385 "auth": { 00:20:34.385 "state": "completed", 00:20:34.385 "digest": "sha512", 00:20:34.385 "dhgroup": "ffdhe8192" 00:20:34.385 } 00:20:34.385 } 00:20:34.385 ]' 00:20:34.385 16:29:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:34.645 16:29:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:34.645 16:29:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:34.645 16:29:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:34.645 16:29:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:34.645 16:29:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:34.645 16:29:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:34.645 16:29:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:34.905 16:29:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 
--hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:YzVmNDBlOWI5YmVmNjEzMzY3OWMwODRmYTM1Y2U4OTYzZjExZDhmZTBmOWYwNzdkZjQyZDVjYTE4ZDgwZTg4MdbDEV4=: 00:20:35.474 16:29:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:35.474 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:35.474 16:29:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:35.474 16:29:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:35.474 16:29:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:35.474 16:29:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:35.474 16:29:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:20:35.474 16:29:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:35.474 16:29:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:35.474 16:29:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:35.474 16:29:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@157 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:20:35.475 16:29:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:20:35.735 16:29:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@158 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:35.735 
16:29:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@649 -- # local es=0 00:20:35.735 16:29:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:35.735 16:29:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@637 -- # local arg=hostrpc 00:20:35.735 16:29:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:20:35.735 16:29:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # type -t hostrpc 00:20:35.735 16:29:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:20:35.736 16:29:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@652 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:35.736 16:29:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:35.996 request: 00:20:35.996 { 00:20:35.996 "name": "nvme0", 00:20:35.996 "trtype": "tcp", 00:20:35.996 "traddr": "10.0.0.2", 00:20:35.996 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:35.996 "adrfam": "ipv4", 00:20:35.996 "trsvcid": "4420", 00:20:35.996 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:20:35.996 "dhchap_key": "key3", 00:20:35.996 "method": "bdev_nvme_attach_controller", 00:20:35.996 "req_id": 1 00:20:35.996 } 00:20:35.996 Got JSON-RPC error response 00:20:35.996 response: 
00:20:35.996 { 00:20:35.996 "code": -5, 00:20:35.996 "message": "Input/output error" 00:20:35.996 } 00:20:35.996 16:29:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@652 -- # es=1 00:20:35.996 16:29:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:20:35.996 16:29:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:20:35.996 16:29:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:20:35.996 16:29:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # IFS=, 00:20:35.996 16:29:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@164 -- # printf %s sha256,sha384,sha512 00:20:35.996 16:29:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:20:35.996 16:29:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:20:35.996 16:29:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@169 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:35.996 16:29:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@649 -- # local es=0 00:20:35.996 16:29:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:35.996 16:29:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@637 -- # local arg=hostrpc 00:20:35.996 16:29:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 
00:20:35.996 16:29:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # type -t hostrpc 00:20:35.996 16:29:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:20:35.996 16:29:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@652 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:35.997 16:29:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:36.257 request: 00:20:36.258 { 00:20:36.258 "name": "nvme0", 00:20:36.258 "trtype": "tcp", 00:20:36.258 "traddr": "10.0.0.2", 00:20:36.258 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:36.258 "adrfam": "ipv4", 00:20:36.258 "trsvcid": "4420", 00:20:36.258 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:20:36.258 "dhchap_key": "key3", 00:20:36.258 "method": "bdev_nvme_attach_controller", 00:20:36.258 "req_id": 1 00:20:36.258 } 00:20:36.258 Got JSON-RPC error response 00:20:36.258 response: 00:20:36.258 { 00:20:36.258 "code": -5, 00:20:36.258 "message": "Input/output error" 00:20:36.258 } 00:20:36.258 16:29:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@652 -- # es=1 00:20:36.258 16:29:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:20:36.258 16:29:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:20:36.258 16:29:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:20:36.258 16:29:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:20:36.258 16:29:02 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s sha256,sha384,sha512 00:20:36.258 16:29:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:20:36.258 16:29:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:36.258 16:29:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:36.258 16:29:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:36.519 16:29:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@186 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:36.519 16:29:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:36.519 16:29:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:36.519 16:29:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:36.519 16:29:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@187 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:36.519 16:29:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:36.519 16:29:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:36.519 16:29:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:36.519 16:29:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@188 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:20:36.519 16:29:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@649 -- # local es=0 00:20:36.519 16:29:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:20:36.519 16:29:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@637 -- # local arg=hostrpc 00:20:36.519 16:29:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:20:36.519 16:29:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # type -t hostrpc 00:20:36.519 16:29:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:20:36.519 16:29:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@652 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:20:36.519 16:29:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:20:36.519 request: 00:20:36.519 { 00:20:36.519 "name": "nvme0", 00:20:36.519 "trtype": "tcp", 00:20:36.519 "traddr": "10.0.0.2", 00:20:36.519 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:36.519 "adrfam": "ipv4", 00:20:36.519 "trsvcid": "4420", 00:20:36.519 "subnqn": 
"nqn.2024-03.io.spdk:cnode0", 00:20:36.519 "dhchap_key": "key0", 00:20:36.519 "dhchap_ctrlr_key": "key1", 00:20:36.519 "method": "bdev_nvme_attach_controller", 00:20:36.519 "req_id": 1 00:20:36.519 } 00:20:36.519 Got JSON-RPC error response 00:20:36.519 response: 00:20:36.519 { 00:20:36.519 "code": -5, 00:20:36.519 "message": "Input/output error" 00:20:36.519 } 00:20:36.519 16:29:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@652 -- # es=1 00:20:36.519 16:29:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:20:36.519 16:29:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:20:36.519 16:29:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:20:36.519 16:29:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@192 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:20:36.519 16:29:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:20:36.781 00:20:36.781 16:29:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # hostrpc bdev_nvme_get_controllers 00:20:36.781 16:29:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # jq -r '.[].name' 00:20:36.781 16:29:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:37.041 16:29:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:37.041 16:29:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@196 -- # hostrpc bdev_nvme_detach_controller 
nvme0 00:20:37.041 16:29:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:37.041 16:29:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@198 -- # trap - SIGINT SIGTERM EXIT 00:20:37.041 16:29:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@199 -- # cleanup 00:20:37.041 16:29:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 3093868 00:20:37.041 16:29:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@949 -- # '[' -z 3093868 ']' 00:20:37.041 16:29:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # kill -0 3093868 00:20:37.041 16:29:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # uname 00:20:37.041 16:29:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:20:37.041 16:29:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3093868 00:20:37.303 16:29:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:20:37.303 16:29:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:20:37.303 16:29:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3093868' 00:20:37.303 killing process with pid 3093868 00:20:37.303 16:29:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@968 -- # kill 3093868 00:20:37.303 16:29:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@973 -- # wait 3093868 00:20:37.303 16:29:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:20:37.303 16:29:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:37.303 16:29:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@117 -- # sync 00:20:37.303 16:29:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:37.303 16:29:04 nvmf_tcp.nvmf_auth_target -- 
nvmf/common.sh@120 -- # set +e 00:20:37.303 16:29:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:37.303 16:29:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:37.303 rmmod nvme_tcp 00:20:37.303 rmmod nvme_fabrics 00:20:37.564 rmmod nvme_keyring 00:20:37.564 16:29:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:37.564 16:29:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:20:37.564 16:29:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:20:37.564 16:29:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@489 -- # '[' -n 3120473 ']' 00:20:37.565 16:29:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 3120473 00:20:37.565 16:29:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@949 -- # '[' -z 3120473 ']' 00:20:37.565 16:29:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # kill -0 3120473 00:20:37.565 16:29:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # uname 00:20:37.565 16:29:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:20:37.565 16:29:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3120473 00:20:37.565 16:29:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:20:37.565 16:29:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:20:37.565 16:29:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3120473' 00:20:37.565 killing process with pid 3120473 00:20:37.565 16:29:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@968 -- # kill 3120473 00:20:37.565 16:29:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@973 -- # wait 3120473 00:20:37.565 16:29:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:37.565 
16:29:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:20:37.565 16:29:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:20:37.565 16:29:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:20:37.565 16:29:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@278 -- # remove_spdk_ns
00:20:37.565 16:29:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:20:37.565 16:29:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:20:37.565 16:29:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:20:40.113 16:29:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:20:40.113 16:29:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.CPH /tmp/spdk.key-sha256.aHV /tmp/spdk.key-sha384.BQu /tmp/spdk.key-sha512.TgY /tmp/spdk.key-sha512.2Hs /tmp/spdk.key-sha384.iHt /tmp/spdk.key-sha256.B6U '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log
00:20:40.113
00:20:40.113 real 2m23.386s
00:20:40.113 user 5m19.343s
00:20:40.113 sys 0m20.968s
00:20:40.113 16:29:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1125 -- # xtrace_disable
00:20:40.113 16:29:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:40.113 ************************************
00:20:40.113 END TEST nvmf_auth_target
00:20:40.113 ************************************
00:20:40.113 16:29:06 nvmf_tcp -- nvmf/nvmf.sh@59 -- # '[' tcp = tcp ']'
00:20:40.113 16:29:06 nvmf_tcp -- nvmf/nvmf.sh@60 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages
00:20:40.113 16:29:06 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 4 -le 1 ']'
00:20:40.113 16:29:06 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable
00:20:40.113 16:29:06 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:20:40.113 ************************************
00:20:40.113 START TEST nvmf_bdevio_no_huge
00:20:40.113 ************************************
00:20:40.113 16:29:06 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages
00:20:40.113 * Looking for test storage...
00:20:40.113 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:20:40.113 16:29:06 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:20:40.113 16:29:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s
00:20:40.113 16:29:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:20:40.113 16:29:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:20:40.113 16:29:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:20:40.113 16:29:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:20:40.113 16:29:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:20:40.113 16:29:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:20:40.113 16:29:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:20:40.113 16:29:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:20:40.113 16:29:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:20:40.113 16:29:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:20:40.113 16:29:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- #
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:40.113 16:29:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:40.113 16:29:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:40.113 16:29:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:40.114 16:29:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:40.114 16:29:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:40.114 16:29:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:40.114 16:29:06 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:40.114 16:29:06 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:40.114 16:29:06 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:40.114 16:29:06 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:40.114 16:29:06 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:40.114 16:29:06 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:40.114 16:29:06 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:20:40.114 16:29:06 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:40.114 16:29:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # : 0 00:20:40.114 16:29:06 nvmf_tcp.nvmf_bdevio_no_huge -- 
nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:40.114 16:29:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:40.114 16:29:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:40.114 16:29:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:40.114 16:29:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:40.114 16:29:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:40.114 16:29:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:40.114 16:29:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:40.114 16:29:06 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:40.114 16:29:06 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:40.114 16:29:06 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:20:40.114 16:29:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:40.114 16:29:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:40.114 16:29:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:40.114 16:29:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:40.114 16:29:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:40.114 16:29:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:40.114 16:29:06 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:40.114 16:29:06 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:40.114 16:29:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:40.114 16:29:06 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:40.114 16:29:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@285 -- # xtrace_disable 00:20:40.114 16:29:06 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:46.702 16:29:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:46.702 16:29:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # pci_devs=() 00:20:46.702 16:29:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:46.702 16:29:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:46.702 16:29:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:46.702 16:29:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:46.702 16:29:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:46.702 16:29:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # net_devs=() 00:20:46.702 16:29:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:46.702 16:29:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # e810=() 00:20:46.702 16:29:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # local -ga e810 00:20:46.702 16:29:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # x722=() 00:20:46.702 16:29:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # local -ga x722 00:20:46.702 16:29:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # mlx=() 00:20:46.702 16:29:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # local -ga mlx 00:20:46.702 16:29:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:46.702 16:29:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:46.702 16:29:13 nvmf_tcp.nvmf_bdevio_no_huge -- 
nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:46.702 16:29:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:46.702 16:29:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:46.702 16:29:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:46.702 16:29:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:46.702 16:29:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:46.702 16:29:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:46.702 16:29:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:46.702 16:29:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:46.702 16:29:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:46.702 16:29:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:46.702 16:29:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:46.702 16:29:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:46.702 16:29:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:46.702 16:29:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:46.702 16:29:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:46.702 16:29:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:20:46.702 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:20:46.702 16:29:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == 
unknown ]] 00:20:46.702 16:29:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:46.702 16:29:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:46.702 16:29:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:46.702 16:29:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:46.702 16:29:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:46.702 16:29:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:20:46.702 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:20:46.702 16:29:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:46.702 16:29:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:46.702 16:29:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:46.702 16:29:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:46.702 16:29:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:46.702 16:29:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:46.702 16:29:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:46.702 16:29:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:46.702 16:29:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:46.702 16:29:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:46.702 16:29:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:46.702 16:29:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:46.702 16:29:13 nvmf_tcp.nvmf_bdevio_no_huge -- 
nvmf/common.sh@390 -- # [[ up == up ]] 00:20:46.702 16:29:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:46.702 16:29:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:46.702 16:29:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:20:46.702 Found net devices under 0000:4b:00.0: cvl_0_0 00:20:46.702 16:29:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:46.702 16:29:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:46.702 16:29:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:46.702 16:29:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:46.702 16:29:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:46.702 16:29:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:46.702 16:29:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:46.702 16:29:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:46.702 16:29:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:20:46.702 Found net devices under 0000:4b:00.1: cvl_0_1 00:20:46.702 16:29:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:46.702 16:29:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:46.702 16:29:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # is_hw=yes 00:20:46.702 16:29:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:46.702 16:29:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:46.702 16:29:13 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:46.702 16:29:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:46.702 16:29:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:46.702 16:29:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:46.702 16:29:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:46.702 16:29:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:46.702 16:29:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:46.702 16:29:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:46.702 16:29:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:46.702 16:29:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:46.702 16:29:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:46.702 16:29:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:46.702 16:29:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:46.702 16:29:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:46.702 16:29:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:46.702 16:29:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:46.702 16:29:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:46.702 16:29:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:46.702 
16:29:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:20:46.702 16:29:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:20:46.702 16:29:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:20:46.702 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:20:46.702 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.615 ms
00:20:46.702
00:20:46.702 --- 10.0.0.2 ping statistics ---
00:20:46.702 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:20:46.702 rtt min/avg/max/mdev = 0.615/0.615/0.615/0.000 ms
00:20:46.702 16:29:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:20:46.702 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:20:46.703 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.319 ms
00:20:46.703
00:20:46.703 --- 10.0.0.1 ping statistics ---
00:20:46.703 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:20:46.703 rtt min/avg/max/mdev = 0.319/0.319/0.319/0.000 ms
00:20:46.703 16:29:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:20:46.703 16:29:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # return 0
00:20:46.703 16:29:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:20:46.703 16:29:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:20:46.703 16:29:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:20:46.703 16:29:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:20:46.703 16:29:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:20:46.703 16:29:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:20:46.703 16:29:13
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:46.703 16:29:13 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:20:46.703 16:29:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:46.703 16:29:13 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@723 -- # xtrace_disable 00:20:46.703 16:29:13 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:46.703 16:29:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # nvmfpid=3125522 00:20:46.703 16:29:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # waitforlisten 3125522 00:20:46.703 16:29:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:20:46.703 16:29:13 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@830 -- # '[' -z 3125522 ']' 00:20:46.703 16:29:13 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:46.703 16:29:13 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # local max_retries=100 00:20:46.703 16:29:13 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:46.703 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:46.703 16:29:13 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@839 -- # xtrace_disable 00:20:46.703 16:29:13 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:46.703 [2024-06-07 16:29:13.512286] Starting SPDK v24.09-pre git sha1 5a57befde / DPDK 24.03.0 initialization... 
00:20:46.703 [2024-06-07 16:29:13.512368] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:20:46.964 [2024-06-07 16:29:13.598599] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:46.964 [2024-06-07 16:29:13.692205] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:46.964 [2024-06-07 16:29:13.692243] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:46.964 [2024-06-07 16:29:13.692251] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:46.964 [2024-06-07 16:29:13.692258] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:46.964 [2024-06-07 16:29:13.692264] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:46.964 [2024-06-07 16:29:13.692420] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 4 00:20:46.964 [2024-06-07 16:29:13.692528] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 5 00:20:46.964 [2024-06-07 16:29:13.692762] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 6 00:20:46.964 [2024-06-07 16:29:13.692763] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 3 00:20:47.537 16:29:14 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:20:47.537 16:29:14 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@863 -- # return 0 00:20:47.537 16:29:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:47.537 16:29:14 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@729 -- # xtrace_disable 00:20:47.537 16:29:14 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:47.537 16:29:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:47.537 16:29:14 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:47.537 16:29:14 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:47.537 16:29:14 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:47.798 [2024-06-07 16:29:14.390864] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:47.798 16:29:14 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:47.798 16:29:14 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:20:47.798 16:29:14 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:47.798 16:29:14 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:47.798 Malloc0 00:20:47.798 16:29:14 
nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:47.798 16:29:14 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:47.798 16:29:14 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:47.798 16:29:14 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:47.798 16:29:14 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:47.798 16:29:14 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:47.798 16:29:14 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:47.798 16:29:14 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:47.798 16:29:14 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:47.798 16:29:14 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:47.798 16:29:14 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:47.798 16:29:14 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:47.798 [2024-06-07 16:29:14.444633] tcp.c: 982:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:47.798 16:29:14 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:47.798 16:29:14 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:20:47.798 16:29:14 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:20:47.798 16:29:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # config=() 00:20:47.798 16:29:14 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # local subsystem config 00:20:47.798 16:29:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:47.798 16:29:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:47.798 { 00:20:47.798 "params": { 00:20:47.798 "name": "Nvme$subsystem", 00:20:47.798 "trtype": "$TEST_TRANSPORT", 00:20:47.798 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:47.798 "adrfam": "ipv4", 00:20:47.799 "trsvcid": "$NVMF_PORT", 00:20:47.799 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:47.799 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:47.799 "hdgst": ${hdgst:-false}, 00:20:47.799 "ddgst": ${ddgst:-false} 00:20:47.799 }, 00:20:47.799 "method": "bdev_nvme_attach_controller" 00:20:47.799 } 00:20:47.799 EOF 00:20:47.799 )") 00:20:47.799 16:29:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # cat 00:20:47.799 16:29:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # jq . 00:20:47.799 16:29:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@557 -- # IFS=, 00:20:47.799 16:29:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:20:47.799 "params": { 00:20:47.799 "name": "Nvme1", 00:20:47.799 "trtype": "tcp", 00:20:47.799 "traddr": "10.0.0.2", 00:20:47.799 "adrfam": "ipv4", 00:20:47.799 "trsvcid": "4420", 00:20:47.799 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:47.799 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:47.799 "hdgst": false, 00:20:47.799 "ddgst": false 00:20:47.799 }, 00:20:47.799 "method": "bdev_nvme_attach_controller" 00:20:47.799 }' 00:20:47.799 [2024-06-07 16:29:14.496003] Starting SPDK v24.09-pre git sha1 5a57befde / DPDK 24.03.0 initialization... 
00:20:47.799 [2024-06-07 16:29:14.496072] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid3125870 ]
00:20:47.799 [2024-06-07 16:29:14.572354] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3
00:20:48.059 [2024-06-07 16:29:14.668707] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1
00:20:48.059 [2024-06-07 16:29:14.668822] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 2
00:20:48.059 [2024-06-07 16:29:14.668825] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0
00:20:48.059 I/O targets:
00:20:48.059 Nvme1n1: 131072 blocks of 512 bytes (64 MiB)
00:20:48.059
00:20:48.059
00:20:48.059 CUnit - A unit testing framework for C - Version 2.1-3
00:20:48.059 http://cunit.sourceforge.net/
00:20:48.059
00:20:48.059
00:20:48.059 Suite: bdevio tests on: Nvme1n1
00:20:48.059 Test: blockdev write read block ...passed
00:20:48.320 Test: blockdev write zeroes read block ...passed
00:20:48.320 Test: blockdev write zeroes read no split ...passed
00:20:48.320 Test: blockdev write zeroes read split ...passed
00:20:48.320 Test: blockdev write zeroes read split partial ...passed
00:20:48.320 Test: blockdev reset ...[2024-06-07 16:29:15.073577] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:48.320 [2024-06-07 16:29:15.073636] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x66baf0 (9): Bad file descriptor
00:20:48.320 [2024-06-07 16:29:15.094039] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:20:48.320 passed 00:20:48.320 Test: blockdev write read 8 blocks ...passed 00:20:48.320 Test: blockdev write read size > 128k ...passed 00:20:48.320 Test: blockdev write read invalid size ...passed 00:20:48.320 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:20:48.320 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:20:48.320 Test: blockdev write read max offset ...passed 00:20:48.581 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:20:48.581 Test: blockdev writev readv 8 blocks ...passed 00:20:48.581 Test: blockdev writev readv 30 x 1block ...passed 00:20:48.581 Test: blockdev writev readv block ...passed 00:20:48.581 Test: blockdev writev readv size > 128k ...passed 00:20:48.581 Test: blockdev writev readv size > 128k in two iovs ...passed 00:20:48.581 Test: blockdev comparev and writev ...[2024-06-07 16:29:15.321790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:48.581 [2024-06-07 16:29:15.321819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:48.581 [2024-06-07 16:29:15.321830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:48.581 [2024-06-07 16:29:15.321836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:48.581 [2024-06-07 16:29:15.322351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:48.581 [2024-06-07 16:29:15.322361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:20:48.581 [2024-06-07 16:29:15.322370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:48.581 [2024-06-07 16:29:15.322375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:20:48.581 [2024-06-07 16:29:15.322899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:48.581 [2024-06-07 16:29:15.322908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:20:48.581 [2024-06-07 16:29:15.322917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:48.581 [2024-06-07 16:29:15.322923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:20:48.581 [2024-06-07 16:29:15.323452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:48.581 [2024-06-07 16:29:15.323461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:20:48.581 [2024-06-07 16:29:15.323470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:48.581 [2024-06-07 16:29:15.323475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:20:48.581 passed 00:20:48.581 Test: blockdev nvme passthru rw ...passed 00:20:48.581 Test: blockdev nvme passthru vendor specific ...[2024-06-07 16:29:15.408386] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:48.581 [2024-06-07 16:29:15.408397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:20:48.581 [2024-06-07 16:29:15.408824] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:48.581 [2024-06-07 16:29:15.408832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:20:48.581 [2024-06-07 16:29:15.409223] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:48.581 [2024-06-07 16:29:15.409231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:20:48.581 [2024-06-07 16:29:15.409664] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:48.581 [2024-06-07 16:29:15.409672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:20:48.581 passed 00:20:48.581 Test: blockdev nvme admin passthru ...passed 00:20:48.843 Test: blockdev copy ...passed 00:20:48.843 00:20:48.843 Run Summary: Type Total Ran Passed Failed Inactive 00:20:48.843 suites 1 1 n/a 0 0 00:20:48.843 tests 23 23 23 0 0 00:20:48.843 asserts 152 152 152 0 n/a 00:20:48.843 00:20:48.843 Elapsed time = 1.224 seconds 00:20:49.105 16:29:15 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:49.105 16:29:15 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:49.105 16:29:15 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:49.105 16:29:15 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:49.105 16:29:15 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:20:49.105 16:29:15 nvmf_tcp.nvmf_bdevio_no_huge -- 
target/bdevio.sh@30 -- # nvmftestfini 00:20:49.105 16:29:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:49.105 16:29:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@117 -- # sync 00:20:49.105 16:29:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:49.105 16:29:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@120 -- # set +e 00:20:49.105 16:29:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:49.105 16:29:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:49.105 rmmod nvme_tcp 00:20:49.105 rmmod nvme_fabrics 00:20:49.105 rmmod nvme_keyring 00:20:49.105 16:29:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:49.105 16:29:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set -e 00:20:49.105 16:29:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # return 0 00:20:49.105 16:29:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # '[' -n 3125522 ']' 00:20:49.105 16:29:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@490 -- # killprocess 3125522 00:20:49.105 16:29:15 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@949 -- # '[' -z 3125522 ']' 00:20:49.105 16:29:15 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # kill -0 3125522 00:20:49.105 16:29:15 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # uname 00:20:49.105 16:29:15 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:20:49.105 16:29:15 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3125522 00:20:49.105 16:29:15 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@955 -- # process_name=reactor_3 00:20:49.105 16:29:15 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # '[' reactor_3 = sudo ']' 00:20:49.105 16:29:15 nvmf_tcp.nvmf_bdevio_no_huge -- 
common/autotest_common.sh@967 -- # echo 'killing process with pid 3125522' 00:20:49.105 killing process with pid 3125522 00:20:49.105 16:29:15 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@968 -- # kill 3125522 00:20:49.105 16:29:15 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@973 -- # wait 3125522 00:20:49.678 16:29:16 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:49.678 16:29:16 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:49.678 16:29:16 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:49.678 16:29:16 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:49.678 16:29:16 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:49.678 16:29:16 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:49.678 16:29:16 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:49.678 16:29:16 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:51.594 16:29:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:51.594 00:20:51.594 real 0m11.794s 00:20:51.594 user 0m13.875s 00:20:51.594 sys 0m6.016s 00:20:51.594 16:29:18 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1125 -- # xtrace_disable 00:20:51.594 16:29:18 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:51.594 ************************************ 00:20:51.594 END TEST nvmf_bdevio_no_huge 00:20:51.594 ************************************ 00:20:51.594 16:29:18 nvmf_tcp -- nvmf/nvmf.sh@61 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:20:51.594 16:29:18 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:20:51.594 16:29:18 nvmf_tcp -- 
common/autotest_common.sh@1106 -- # xtrace_disable 00:20:51.594 16:29:18 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:51.594 ************************************ 00:20:51.594 START TEST nvmf_tls 00:20:51.594 ************************************ 00:20:51.594 16:29:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:20:51.594 * Looking for test storage... 00:20:51.891 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:51.891 16:29:18 nvmf_tcp.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:51.891 16:29:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:20:51.891 16:29:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:51.891 16:29:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:51.891 16:29:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:51.891 16:29:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:51.891 16:29:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:51.891 16:29:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:51.891 16:29:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:51.891 16:29:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:51.891 16:29:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:51.891 16:29:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:51.891 16:29:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:51.891 16:29:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:51.891 16:29:18 nvmf_tcp.nvmf_tls -- 
nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:51.891 16:29:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:51.891 16:29:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:51.891 16:29:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:51.891 16:29:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:51.891 16:29:18 nvmf_tcp.nvmf_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:51.891 16:29:18 nvmf_tcp.nvmf_tls -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:51.891 16:29:18 nvmf_tcp.nvmf_tls -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:51.891 16:29:18 nvmf_tcp.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:51.891 16:29:18 nvmf_tcp.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:20:51.891 16:29:18 nvmf_tcp.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:51.891 16:29:18 nvmf_tcp.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:20:51.892 16:29:18 nvmf_tcp.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:51.892 16:29:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@47 -- # : 0 00:20:51.892 16:29:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:51.892 16:29:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:51.892 16:29:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:51.892 16:29:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:51.892 16:29:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:51.892 16:29:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:51.892 16:29:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:51.892 
16:29:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:51.892 16:29:18 nvmf_tcp.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:51.892 16:29:18 nvmf_tcp.nvmf_tls -- target/tls.sh@62 -- # nvmftestinit 00:20:51.892 16:29:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:51.892 16:29:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:51.892 16:29:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:51.892 16:29:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:51.892 16:29:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:51.892 16:29:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:51.892 16:29:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:51.892 16:29:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:51.892 16:29:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:51.892 16:29:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:51.892 16:29:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@285 -- # xtrace_disable 00:20:51.892 16:29:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:58.524 16:29:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:58.524 16:29:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # pci_devs=() 00:20:58.524 16:29:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:58.524 16:29:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:58.524 16:29:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:58.524 16:29:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:58.524 16:29:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # local -A 
pci_drivers 00:20:58.524 16:29:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # net_devs=() 00:20:58.524 16:29:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:58.524 16:29:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # e810=() 00:20:58.524 16:29:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # local -ga e810 00:20:58.524 16:29:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # x722=() 00:20:58.524 16:29:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # local -ga x722 00:20:58.524 16:29:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # mlx=() 00:20:58.524 16:29:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # local -ga mlx 00:20:58.524 16:29:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:58.525 16:29:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:58.525 16:29:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:58.525 16:29:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:58.525 16:29:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:58.525 16:29:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:58.525 16:29:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:58.525 16:29:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:58.525 16:29:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:58.525 16:29:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:58.525 16:29:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:58.525 16:29:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:58.525 16:29:25 nvmf_tcp.nvmf_tls -- 
nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:58.525 16:29:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:58.525 16:29:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:58.525 16:29:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:58.525 16:29:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:58.525 16:29:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:58.525 16:29:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:20:58.525 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:20:58.525 16:29:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:58.525 16:29:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:58.525 16:29:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:58.525 16:29:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:58.525 16:29:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:58.525 16:29:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:58.525 16:29:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:20:58.525 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:20:58.525 16:29:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:58.525 16:29:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:58.525 16:29:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:58.525 16:29:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:58.525 16:29:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:58.525 16:29:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:58.525 16:29:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:58.525 16:29:25 
nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:58.525 16:29:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:58.525 16:29:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:58.525 16:29:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:58.525 16:29:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:58.525 16:29:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:58.525 16:29:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:58.525 16:29:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:58.525 16:29:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:20:58.525 Found net devices under 0000:4b:00.0: cvl_0_0 00:20:58.525 16:29:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:58.525 16:29:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:58.525 16:29:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:58.525 16:29:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:58.525 16:29:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:58.525 16:29:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:58.525 16:29:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:58.525 16:29:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:58.525 16:29:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:20:58.525 Found net devices under 0000:4b:00.1: cvl_0_1 00:20:58.525 16:29:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:58.525 16:29:25 nvmf_tcp.nvmf_tls -- 
nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:58.525 16:29:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # is_hw=yes 00:20:58.525 16:29:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:58.525 16:29:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:58.525 16:29:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:58.525 16:29:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:58.525 16:29:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:58.525 16:29:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:58.525 16:29:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:58.525 16:29:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:58.525 16:29:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:58.525 16:29:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:58.525 16:29:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:58.525 16:29:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:58.525 16:29:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:58.525 16:29:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:58.525 16:29:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:58.525 16:29:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:58.525 16:29:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:58.525 16:29:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:58.525 16:29:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:58.525 16:29:25 
nvmf_tcp.nvmf_tls -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:58.525 16:29:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:58.525 16:29:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:58.525 16:29:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:58.525 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:58.525 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.439 ms 00:20:58.525 00:20:58.525 --- 10.0.0.2 ping statistics --- 00:20:58.525 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:58.525 rtt min/avg/max/mdev = 0.439/0.439/0.439/0.000 ms 00:20:58.525 16:29:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:58.525 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:58.525 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.351 ms 00:20:58.525 00:20:58.525 --- 10.0.0.1 ping statistics --- 00:20:58.525 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:58.525 rtt min/avg/max/mdev = 0.351/0.351/0.351/0.000 ms 00:20:58.525 16:29:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:58.525 16:29:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@422 -- # return 0 00:20:58.525 16:29:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:58.525 16:29:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:58.525 16:29:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:58.525 16:29:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:58.525 16:29:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:58.525 16:29:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:58.525 16:29:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@474 -- # 
modprobe nvme-tcp 00:20:58.525 16:29:25 nvmf_tcp.nvmf_tls -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:20:58.525 16:29:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:58.525 16:29:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@723 -- # xtrace_disable 00:20:58.525 16:29:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:58.525 16:29:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:20:58.525 16:29:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3130203 00:20:58.525 16:29:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3130203 00:20:58.525 16:29:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 3130203 ']' 00:20:58.525 16:29:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:58.525 16:29:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:20:58.525 16:29:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:58.525 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:58.525 16:29:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:20:58.525 16:29:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:58.786 [2024-06-07 16:29:25.425217] Starting SPDK v24.09-pre git sha1 5a57befde / DPDK 24.03.0 initialization... 
00:20:58.786 [2024-06-07 16:29:25.425278] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:58.786 EAL: No free 2048 kB hugepages reported on node 1 00:20:58.786 [2024-06-07 16:29:25.497533] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:58.786 [2024-06-07 16:29:25.589898] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:58.786 [2024-06-07 16:29:25.589957] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:58.786 [2024-06-07 16:29:25.589966] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:58.786 [2024-06-07 16:29:25.589973] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:58.786 [2024-06-07 16:29:25.589979] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:58.786 [2024-06-07 16:29:25.590006] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:20:59.730 16:29:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:20:59.730 16:29:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:20:59.730 16:29:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:59.730 16:29:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@729 -- # xtrace_disable 00:20:59.730 16:29:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:59.730 16:29:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:59.730 16:29:26 nvmf_tcp.nvmf_tls -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:20:59.730 16:29:26 nvmf_tcp.nvmf_tls -- target/tls.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:20:59.730 true 00:20:59.730 16:29:26 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:59.730 16:29:26 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # jq -r .tls_version 00:20:59.991 16:29:26 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # version=0 00:20:59.991 16:29:26 nvmf_tcp.nvmf_tls -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:20:59.991 16:29:26 nvmf_tcp.nvmf_tls -- target/tls.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:20:59.991 16:29:26 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:59.991 16:29:26 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # jq -r .tls_version 00:21:00.252 16:29:26 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # version=13 00:21:00.252 16:29:26 nvmf_tcp.nvmf_tls -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:21:00.252 16:29:26 nvmf_tcp.nvmf_tls -- target/tls.sh@88 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:21:00.252 16:29:27 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:00.252 16:29:27 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # jq -r .tls_version 00:21:00.513 16:29:27 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # version=7 00:21:00.513 16:29:27 nvmf_tcp.nvmf_tls -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:21:00.513 16:29:27 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:00.513 16:29:27 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # jq -r .enable_ktls 00:21:00.774 16:29:27 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # ktls=false 00:21:00.774 16:29:27 nvmf_tcp.nvmf_tls -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:21:00.774 16:29:27 nvmf_tcp.nvmf_tls -- target/tls.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:21:00.774 16:29:27 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # jq -r .enable_ktls 00:21:00.774 16:29:27 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:01.035 16:29:27 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # ktls=true 00:21:01.035 16:29:27 nvmf_tcp.nvmf_tls -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:21:01.035 16:29:27 nvmf_tcp.nvmf_tls -- target/tls.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:21:01.297 16:29:27 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:01.297 16:29:27 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # jq -r .enable_ktls 00:21:01.297 16:29:28 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # 
ktls=false 00:21:01.297 16:29:28 nvmf_tcp.nvmf_tls -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:21:01.297 16:29:28 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:21:01.297 16:29:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@721 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:21:01.297 16:29:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@708 -- # local prefix key digest 00:21:01.297 16:29:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@710 -- # prefix=NVMeTLSkey-1 00:21:01.297 16:29:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@710 -- # key=00112233445566778899aabbccddeeff 00:21:01.297 16:29:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@710 -- # digest=1 00:21:01.297 16:29:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@711 -- # python - 00:21:01.558 16:29:28 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:21:01.558 16:29:28 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:21:01.558 16:29:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@721 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:21:01.558 16:29:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@708 -- # local prefix key digest 00:21:01.558 16:29:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@710 -- # prefix=NVMeTLSkey-1 00:21:01.558 16:29:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@710 -- # key=ffeeddccbbaa99887766554433221100 00:21:01.558 16:29:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@710 -- # digest=1 00:21:01.558 16:29:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@711 -- # python - 00:21:01.558 16:29:28 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:21:01.558 16:29:28 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # mktemp 00:21:01.558 16:29:28 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # key_path=/tmp/tmp.GDZ8rZupnu 00:21:01.558 16:29:28 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:21:01.558 
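The format_interchange_psk / format_key steps above pipe the configured hex key through an inline `python -` snippet to produce the NVMeTLSkey-1 string echoed as `key=` / `key_2=`. A minimal standalone sketch of that interchange encoding — assuming the layout is base64 of the configured key characters followed by an appended little-endian CRC32 (the function name and the CRC endianness here are assumptions, not lifted from the script):

```python
import base64
import struct
import zlib

def format_interchange_psk(key: str, hmac_id: int) -> str:
    """Sketch of the NVMeTLSkey-1 interchange encoding: the configured key
    characters are kept verbatim (not hex-decoded), a CRC32 of those bytes
    is appended (little-endian here -- an assumption), and the result is
    base64-encoded between the prefix and a trailing colon."""
    data = key.encode("ascii")
    data += struct.pack("<I", zlib.crc32(data))
    return f"NVMeTLSkey-1:{hmac_id:02x}:{base64.b64encode(data).decode()}:"

key = format_interchange_psk("00112233445566778899aabbccddeeff", 1)
```

For the first key in the log this yields a base64 body beginning `MDAxMTIy…`, matching the `NVMeTLSkey-1:01:MDAx…:` value echoed above; only the last few base64 characters carry the CRC.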
16:29:28 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.lWjlSLdNjl 00:21:01.558 16:29:28 nvmf_tcp.nvmf_tls -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:21:01.558 16:29:28 nvmf_tcp.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:21:01.558 16:29:28 nvmf_tcp.nvmf_tls -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.GDZ8rZupnu 00:21:01.558 16:29:28 nvmf_tcp.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.lWjlSLdNjl 00:21:01.558 16:29:28 nvmf_tcp.nvmf_tls -- target/tls.sh@130 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:21:01.558 16:29:28 nvmf_tcp.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:21:01.819 16:29:28 nvmf_tcp.nvmf_tls -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.GDZ8rZupnu 00:21:01.819 16:29:28 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.GDZ8rZupnu 00:21:01.819 16:29:28 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:02.079 [2024-06-07 16:29:28.777597] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:02.079 16:29:28 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:02.339 16:29:28 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:21:02.339 [2024-06-07 16:29:29.082339] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:02.339 [2024-06-07 16:29:29.082517] tcp.c: 982:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4420 ***
00:21:02.339 16:29:29 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
00:21:02.600 malloc0
00:21:02.600 16:29:29 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
00:21:02.601 16:29:29 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.GDZ8rZupnu
00:21:02.862 [2024-06-07 16:29:29.541502] tcp.c:3685:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09
00:21:02.862 16:29:29 nvmf_tcp.nvmf_tls -- target/tls.sh@137 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.GDZ8rZupnu
00:21:02.862 EAL: No free 2048 kB hugepages reported on node 1
00:21:12.860 Initializing NVMe Controllers
00:21:12.861 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:21:12.861 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:21:12.861 Initialization complete. Launching workers.
00:21:12.861 ========================================================
00:21:12.861 Latency(us)
00:21:12.861 Device Information : IOPS MiB/s Average min max
00:21:12.861 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 18905.95 73.85 3385.20 1152.52 4017.02
00:21:12.861 ========================================================
00:21:12.861 Total : 18905.95 73.85 3385.20 1152.52 4017.02
00:21:12.861
00:21:12.861 16:29:39 nvmf_tcp.nvmf_tls -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.GDZ8rZupnu
00:21:12.861 16:29:39 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk
00:21:12.861 16:29:39 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1
00:21:12.861 16:29:39 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1
00:21:12.861 16:29:39 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.GDZ8rZupnu'
00:21:12.861 16:29:39 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock
00:21:12.861 16:29:39 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3132937
00:21:12.861 16:29:39 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT
00:21:12.861 16:29:39 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3132937 /var/tmp/bdevperf.sock
00:21:12.861 16:29:39 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10
00:21:12.861 16:29:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 3132937 ']'
00:21:12.861 16:29:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:21:12.861 16:29:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100
00:21:12.861 16:29:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to
start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:12.861 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:12.861 16:29:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:21:12.861 16:29:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:13.121 [2024-06-07 16:29:39.718893] Starting SPDK v24.09-pre git sha1 5a57befde / DPDK 24.03.0 initialization... 00:21:13.121 [2024-06-07 16:29:39.718959] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3132937 ] 00:21:13.121 EAL: No free 2048 kB hugepages reported on node 1 00:21:13.121 [2024-06-07 16:29:39.768994] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:13.121 [2024-06-07 16:29:39.821249] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 2 00:21:13.691 16:29:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:21:13.691 16:29:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:21:13.691 16:29:40 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.GDZ8rZupnu 00:21:13.949 [2024-06-07 16:29:40.605950] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:13.949 [2024-06-07 16:29:40.606012] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:21:13.949 TLSTESTn1 00:21:13.949 16:29:40 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests
00:21:13.949 Running I/O for 10 seconds...
00:21:26.203
00:21:26.203 Latency(us)
00:21:26.203 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:26.203 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:21:26.203 Verification LBA range: start 0x0 length 0x2000
00:21:26.203 TLSTESTn1 : 10.02 4797.66 18.74 0.00 0.00 26635.66 4532.91 44346.03
00:21:26.203 ===================================================================================================================
00:21:26.203 Total : 4797.66 18.74 0.00 0.00 26635.66 4532.91 44346.03
00:21:26.203 0
00:21:26.203 16:29:50 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:21:26.203 16:29:50 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 3132937
00:21:26.203 16:29:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 3132937 ']'
00:21:26.203 16:29:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 3132937
00:21:26.203 16:29:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname
00:21:26.203 16:29:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']'
00:21:26.203 16:29:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3132937
00:21:26.203 16:29:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_2
00:21:26.203 16:29:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_2 = sudo ']'
00:21:26.203 16:29:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3132937'
00:21:26.203 killing process with pid 3132937
00:21:26.203 16:29:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 3132937
00:21:26.203 Received shutdown signal, test time was about 10.000000 seconds
00:21:26.203
00:21:26.203 Latency(us)
00:21:26.203 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:26.203 ===================================================================================================================
00:21:26.203 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:21:26.203 [2024-06-07 16:29:50.898543] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times
00:21:26.203 16:29:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 3132937
00:21:26.203 16:29:51 nvmf_tcp.nvmf_tls -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.lWjlSLdNjl
00:21:26.204 16:29:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@649 -- # local es=0
00:21:26.204 16:29:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.lWjlSLdNjl
00:21:26.204 16:29:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@637 -- # local arg=run_bdevperf
00:21:26.204 16:29:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in
00:21:26.204 16:29:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # type -t run_bdevperf
00:21:26.204 16:29:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in
00:21:26.204 16:29:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.lWjlSLdNjl
00:21:26.204 16:29:51 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk
00:21:26.204 16:29:51 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1
00:21:26.204 16:29:51 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1
00:21:26.204 16:29:51 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.lWjlSLdNjl'
00:21:26.204 16:29:51 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- #
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:26.204 16:29:51 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3135234 00:21:26.204 16:29:51 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:26.204 16:29:51 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3135234 /var/tmp/bdevperf.sock 00:21:26.204 16:29:51 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:26.204 16:29:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 3135234 ']' 00:21:26.204 16:29:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:26.204 16:29:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:21:26.204 16:29:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:26.204 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:26.204 16:29:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:21:26.204 16:29:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:26.204 [2024-06-07 16:29:51.062899] Starting SPDK v24.09-pre git sha1 5a57befde / DPDK 24.03.0 initialization... 
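The Latency tables earlier in the run report IOPS and MiB/s side by side; with 4096-byte I/O those columns are related by a fixed factor of 256 (1 MiB / 4 KiB). A quick arithmetic check against the figures taken from the log:

```python
# Each I/O in these runs is 4096 bytes, so MiB/s = IOPS * 4096 / 2**20 = IOPS / 256.
IO_SIZE = 4096

def mib_per_s(iops: float) -> float:
    # Convert an IOPS figure at the fixed 4 KiB I/O size into MiB/s.
    return iops * IO_SIZE / 2**20

# Figures from the spdk_nvme_perf and bdevperf tables above.
perf_iops = 18905.95       # reported alongside 73.85 MiB/s
bdevperf_iops = 4797.66    # reported alongside 18.74 MiB/s

print(round(mib_per_s(perf_iops), 2))      # 73.85
print(round(mib_per_s(bdevperf_iops), 2))  # 18.74
```

Both reported MiB/s columns are consistent with their IOPS columns, which is a useful self-check when reading flattened perf output.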
00:21:26.204 [2024-06-07 16:29:51.062952] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3135234 ]
00:21:26.204 EAL: No free 2048 kB hugepages reported on node 1
00:21:26.204 [2024-06-07 16:29:51.111881] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:21:26.204 [2024-06-07 16:29:51.163262] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 2
00:21:26.204 16:29:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 ))
00:21:26.204 16:29:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0
00:21:26.204 16:29:51 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.lWjlSLdNjl
00:21:26.204 [2024-06-07 16:29:51.944008] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental
00:21:26.204 [2024-06-07 16:29:51.944062] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09
00:21:26.204 [2024-06-07 16:29:51.953977] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 429:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected
00:21:26.204 [2024-06-07 16:29:51.954094] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d07de0 (107): Transport endpoint is not connected
00:21:26.204 [2024-06-07 16:29:51.955065] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d07de0 (9): Bad file descriptor
00:21:26.204 [2024-06-07 16:29:51.956066] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:26.204 [2024-06-07 16:29:51.956074] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2
00:21:26.204 [2024-06-07 16:29:51.956081] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:26.204 request:
00:21:26.204 {
00:21:26.204 "name": "TLSTEST",
00:21:26.204 "trtype": "tcp",
00:21:26.204 "traddr": "10.0.0.2",
00:21:26.204 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:21:26.204 "adrfam": "ipv4",
00:21:26.204 "trsvcid": "4420",
00:21:26.204 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:21:26.204 "psk": "/tmp/tmp.lWjlSLdNjl",
00:21:26.204 "method": "bdev_nvme_attach_controller",
00:21:26.204 "req_id": 1
00:21:26.204 }
00:21:26.204 Got JSON-RPC error response
00:21:26.204 response:
00:21:26.204 {
00:21:26.204 "code": -5,
00:21:26.204 "message": "Input/output error"
00:21:26.204 }
00:21:26.204 16:29:51 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 3135234
00:21:26.204 16:29:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 3135234 ']'
00:21:26.204 16:29:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 3135234
00:21:26.204 16:29:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname
00:21:26.204 16:29:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']'
00:21:26.204 16:29:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3135234
00:21:26.204 16:29:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_2
00:21:26.204 16:29:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_2 = sudo ']'
00:21:26.204 16:29:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3135234'
00:21:26.204 killing process with pid 3135234
00:21:26.204 16:29:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 3135234
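The wrong-key attach above fails and bdev_nvme_attach_controller surfaces a JSON-RPC error envelope (code -5, "Input/output error"), which is exactly what the negative test wants to see. A small sketch of how a caller might recognize that expected failure; the field names come from the dump above, the helper itself is hypothetical:

```python
import json

# Error envelope as dumped by the failed bdev_nvme_attach_controller call above.
response_text = '{"code": -5, "message": "Input/output error"}'

def is_expected_tls_failure(raw: str) -> bool:
    """Return True when the RPC failed with the -5 (I/O error) code that a
    PSK mismatch surfaces as; any other code would indicate a different bug."""
    err = json.loads(raw)
    return err.get("code") == -5

print(is_expected_tls_failure(response_text))  # True
```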
00:21:26.204 Received shutdown signal, test time was about 10.000000 seconds 00:21:26.204 00:21:26.204 Latency(us) 00:21:26.204 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:26.204 =================================================================================================================== 00:21:26.204 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:26.204 [2024-06-07 16:29:52.029209] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:21:26.204 16:29:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 3135234 00:21:26.204 16:29:52 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:21:26.204 16:29:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # es=1 00:21:26.204 16:29:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:21:26.204 16:29:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:21:26.204 16:29:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:21:26.204 16:29:52 nvmf_tcp.nvmf_tls -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.GDZ8rZupnu 00:21:26.204 16:29:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@649 -- # local es=0 00:21:26.204 16:29:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.GDZ8rZupnu 00:21:26.204 16:29:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@637 -- # local arg=run_bdevperf 00:21:26.204 16:29:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:21:26.204 16:29:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # type -t run_bdevperf 00:21:26.204 16:29:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:21:26.204 16:29:52 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@652 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.GDZ8rZupnu 00:21:26.204 16:29:52 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:26.204 16:29:52 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:26.204 16:29:52 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:21:26.204 16:29:52 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.GDZ8rZupnu' 00:21:26.204 16:29:52 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:26.204 16:29:52 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3135305 00:21:26.204 16:29:52 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:26.204 16:29:52 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3135305 /var/tmp/bdevperf.sock 00:21:26.204 16:29:52 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:26.204 16:29:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 3135305 ']' 00:21:26.204 16:29:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:26.204 16:29:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:21:26.204 16:29:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:26.204 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
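The NOT run_bdevperf cases above (wrong PSK at target/tls.sh@146, wrong host NQN at target/tls.sh@149) invert the exit status: the test step passes only when the wrapped command fails, which is why es=1 is the expected outcome. A rough Python analogue of that inversion (the helper name is hypothetical):

```python
import subprocess
import sys

def expect_failure(argv) -> bool:
    """Analogue of the autotest NOT wrapper: run the command and report
    success only if it exited non-zero (es=1 in the log's terms)."""
    return subprocess.run(argv, capture_output=True).returncode != 0

# Stand-ins for a failing and a succeeding attach attempt:
failing = [sys.executable, "-c", "raise SystemExit(1)"]
passing = [sys.executable, "-c", "raise SystemExit(0)"]
print(expect_failure(failing), expect_failure(passing))  # True False
```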
00:21:26.204 16:29:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:21:26.204 16:29:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:26.204 [2024-06-07 16:29:52.183675] Starting SPDK v24.09-pre git sha1 5a57befde / DPDK 24.03.0 initialization... 00:21:26.204 [2024-06-07 16:29:52.183728] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3135305 ] 00:21:26.204 EAL: No free 2048 kB hugepages reported on node 1 00:21:26.204 [2024-06-07 16:29:52.232831] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:26.204 [2024-06-07 16:29:52.284749] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 2 00:21:26.204 16:29:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:21:26.204 16:29:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:21:26.205 16:29:52 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.GDZ8rZupnu 00:21:26.466 [2024-06-07 16:29:53.081893] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:26.466 [2024-06-07 16:29:53.081947] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:21:26.466 [2024-06-07 16:29:53.086663] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:21:26.466 [2024-06-07 16:29:53.086681] posix.c: 591:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 
nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1
00:21:26.466 [2024-06-07 16:29:53.086700] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 429:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected
00:21:26.466 [2024-06-07 16:29:53.086969] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb29de0 (107): Transport endpoint is not connected
00:21:26.466 [2024-06-07 16:29:53.087963] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb29de0 (9): Bad file descriptor
00:21:26.466 [2024-06-07 16:29:53.088965] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:26.466 [2024-06-07 16:29:53.088973] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2
00:21:26.466 [2024-06-07 16:29:53.088979] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:26.466 request:
00:21:26.466 {
00:21:26.466 "name": "TLSTEST",
00:21:26.466 "trtype": "tcp",
00:21:26.466 "traddr": "10.0.0.2",
00:21:26.466 "hostnqn": "nqn.2016-06.io.spdk:host2",
00:21:26.466 "adrfam": "ipv4",
00:21:26.466 "trsvcid": "4420",
00:21:26.466 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:21:26.466 "psk": "/tmp/tmp.GDZ8rZupnu",
00:21:26.466 "method": "bdev_nvme_attach_controller",
00:21:26.466 "req_id": 1
00:21:26.466 }
00:21:26.466 Got JSON-RPC error response
00:21:26.466 response:
00:21:26.466 {
00:21:26.466 "code": -5,
00:21:26.466 "message": "Input/output error"
00:21:26.466 }
00:21:26.466 16:29:53 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 3135305
00:21:26.466 16:29:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 3135305 ']'
00:21:26.466 16:29:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 3135305
00:21:26.466 16:29:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname
00:21:26.466 16:29:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']'
00:21:26.466 16:29:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3135305
00:21:26.466 16:29:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_2
00:21:26.466 16:29:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_2 = sudo ']'
00:21:26.466 16:29:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3135305'
00:21:26.466 killing process with pid 3135305
00:21:26.466 16:29:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 3135305
00:21:26.466 Received shutdown signal, test time was about 10.000000 seconds
00:21:26.466
00:21:26.466 Latency(us)
00:21:26.466 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:26.466 ===================================================================================================================
00:21:26.466 Total : 0.00 0.00 0.00
0.00 0.00 18446744073709551616.00 0.00 00:21:26.466 [2024-06-07 16:29:53.173742] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:21:26.466 16:29:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 3135305 00:21:26.466 16:29:53 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:21:26.466 16:29:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # es=1 00:21:26.466 16:29:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:21:26.466 16:29:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:21:26.466 16:29:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:21:26.466 16:29:53 nvmf_tcp.nvmf_tls -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.GDZ8rZupnu 00:21:26.466 16:29:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@649 -- # local es=0 00:21:26.466 16:29:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.GDZ8rZupnu 00:21:26.466 16:29:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@637 -- # local arg=run_bdevperf 00:21:26.466 16:29:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:21:26.466 16:29:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # type -t run_bdevperf 00:21:26.466 16:29:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:21:26.466 16:29:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.GDZ8rZupnu 00:21:26.466 16:29:53 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:26.466 16:29:53 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:21:26.466 16:29:53 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- 
# hostnqn=nqn.2016-06.io.spdk:host1 00:21:26.466 16:29:53 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.GDZ8rZupnu' 00:21:26.466 16:29:53 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:26.466 16:29:53 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3135629 00:21:26.466 16:29:53 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:26.466 16:29:53 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3135629 /var/tmp/bdevperf.sock 00:21:26.466 16:29:53 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:26.466 16:29:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 3135629 ']' 00:21:26.466 16:29:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:26.466 16:29:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:21:26.467 16:29:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:26.467 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:26.467 16:29:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:21:26.467 16:29:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:26.726 [2024-06-07 16:29:53.328303] Starting SPDK v24.09-pre git sha1 5a57befde / DPDK 24.03.0 initialization... 
00:21:26.727 [2024-06-07 16:29:53.328352] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3135629 ] 00:21:26.727 EAL: No free 2048 kB hugepages reported on node 1 00:21:26.727 [2024-06-07 16:29:53.378313] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:26.727 [2024-06-07 16:29:53.429064] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 2 00:21:27.298 16:29:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:21:27.298 16:29:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:21:27.298 16:29:54 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.GDZ8rZupnu 00:21:27.560 [2024-06-07 16:29:54.241793] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:27.560 [2024-06-07 16:29:54.241861] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:21:27.560 [2024-06-07 16:29:54.249475] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:21:27.560 [2024-06-07 16:29:54.249493] posix.c: 591:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:21:27.560 [2024-06-07 16:29:54.249511] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 429:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:21:27.560 
[2024-06-07 16:29:54.250039] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22cade0 (107): Transport endpoint is not connected 00:21:27.560 [2024-06-07 16:29:54.251035] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22cade0 (9): Bad file descriptor 00:21:27.560 [2024-06-07 16:29:54.252036] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:21:27.560 [2024-06-07 16:29:54.252044] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:21:27.560 [2024-06-07 16:29:54.252051] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:21:27.560 request: 00:21:27.560 { 00:21:27.560 "name": "TLSTEST", 00:21:27.560 "trtype": "tcp", 00:21:27.560 "traddr": "10.0.0.2", 00:21:27.560 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:27.560 "adrfam": "ipv4", 00:21:27.560 "trsvcid": "4420", 00:21:27.560 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:27.560 "psk": "/tmp/tmp.GDZ8rZupnu", 00:21:27.560 "method": "bdev_nvme_attach_controller", 00:21:27.560 "req_id": 1 00:21:27.560 } 00:21:27.560 Got JSON-RPC error response 00:21:27.560 response: 00:21:27.560 { 00:21:27.560 "code": -5, 00:21:27.560 "message": "Input/output error" 00:21:27.560 } 00:21:27.560 16:29:54 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 3135629 00:21:27.560 16:29:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 3135629 ']' 00:21:27.560 16:29:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 3135629 00:21:27.560 16:29:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:21:27.560 16:29:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:21:27.560 16:29:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3135629 00:21:27.560 16:29:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # 
process_name=reactor_2 00:21:27.560 16:29:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_2 = sudo ']' 00:21:27.560 16:29:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3135629' 00:21:27.560 killing process with pid 3135629 00:21:27.560 16:29:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 3135629 00:21:27.560 Received shutdown signal, test time was about 10.000000 seconds 00:21:27.560 00:21:27.560 Latency(us) 00:21:27.560 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:27.560 =================================================================================================================== 00:21:27.560 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:27.560 [2024-06-07 16:29:54.336957] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:21:27.560 16:29:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 3135629 00:21:27.820 16:29:54 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:21:27.820 16:29:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # es=1 00:21:27.820 16:29:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:21:27.820 16:29:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:21:27.820 16:29:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:21:27.820 16:29:54 nvmf_tcp.nvmf_tls -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:21:27.820 16:29:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@649 -- # local es=0 00:21:27.820 16:29:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:21:27.820 16:29:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@637 -- # local arg=run_bdevperf 00:21:27.820 
16:29:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:21:27.820 16:29:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # type -t run_bdevperf 00:21:27.820 16:29:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:21:27.820 16:29:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:21:27.820 16:29:54 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:27.820 16:29:54 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:27.820 16:29:54 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:27.820 16:29:54 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk= 00:21:27.820 16:29:54 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:27.820 16:29:54 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3135927 00:21:27.820 16:29:54 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:27.820 16:29:54 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3135927 /var/tmp/bdevperf.sock 00:21:27.820 16:29:54 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:27.820 16:29:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 3135927 ']' 00:21:27.820 16:29:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:27.820 16:29:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:21:27.820 16:29:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
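The "Could not find PSK for identity" failures above show the exact identity string the TCP target searches for: the literal prefix `NVMe0R01` followed by the host NQN and the subsystem NQN, space-separated. A minimal sketch of that construction (the helper name is hypothetical; only the resulting string is taken from the log):

```python
def tls_psk_identity(hostnqn: str, subnqn: str) -> str:
    """Build the PSK identity looked up by the target (hypothetical helper).

    The "NVMe0R01" prefix is copied verbatim from the error records above;
    in the NVMe/TCP TLS scheme it encodes a version and hash indicator, but
    here it is treated purely as an observed constant.
    """
    return f"NVMe0R01 {hostnqn} {subnqn}"

# Reproduces the identity from the failed lookup in the log above.
identity = tls_psk_identity("nqn.2016-06.io.spdk:host1",
                            "nqn.2016-06.io.spdk:cnode2")
```

When no key is registered under this identity (as in the negative test above, where the bdev controller was attached with a PSK the subsystem host was never configured with), the TLS handshake aborts and the initiator sees `Transport endpoint is not connected`.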
00:21:27.820 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:27.820 16:29:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:21:27.820 16:29:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:27.820 [2024-06-07 16:29:54.501603] Starting SPDK v24.09-pre git sha1 5a57befde / DPDK 24.03.0 initialization... 00:21:27.820 [2024-06-07 16:29:54.501675] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3135927 ] 00:21:27.820 EAL: No free 2048 kB hugepages reported on node 1 00:21:27.820 [2024-06-07 16:29:54.551880] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:27.820 [2024-06-07 16:29:54.602858] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 2 00:21:28.762 16:29:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:21:28.762 16:29:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:21:28.762 16:29:55 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:21:28.762 [2024-06-07 16:29:55.412359] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 429:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:21:28.762 [2024-06-07 16:29:55.414277] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2435820 (9): Bad file descriptor 00:21:28.762 [2024-06-07 16:29:55.415276] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:28.762 [2024-06-07 16:29:55.415285] 
nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:21:28.762 [2024-06-07 16:29:55.415292] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:28.762 request: 00:21:28.762 { 00:21:28.762 "name": "TLSTEST", 00:21:28.762 "trtype": "tcp", 00:21:28.762 "traddr": "10.0.0.2", 00:21:28.762 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:28.762 "adrfam": "ipv4", 00:21:28.762 "trsvcid": "4420", 00:21:28.762 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:28.762 "method": "bdev_nvme_attach_controller", 00:21:28.762 "req_id": 1 00:21:28.762 } 00:21:28.762 Got JSON-RPC error response 00:21:28.762 response: 00:21:28.762 { 00:21:28.762 "code": -5, 00:21:28.762 "message": "Input/output error" 00:21:28.762 } 00:21:28.762 16:29:55 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 3135927 00:21:28.762 16:29:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 3135927 ']' 00:21:28.762 16:29:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 3135927 00:21:28.762 16:29:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:21:28.762 16:29:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:21:28.762 16:29:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3135927 00:21:28.762 16:29:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_2 00:21:28.762 16:29:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_2 = sudo ']' 00:21:28.762 16:29:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3135927' 00:21:28.762 killing process with pid 3135927 00:21:28.762 16:29:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 3135927 00:21:28.762 Received shutdown signal, test time was about 10.000000 seconds 00:21:28.762 00:21:28.762 Latency(us) 00:21:28.762 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min 
max 00:21:28.762 =================================================================================================================== 00:21:28.762 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:28.762 16:29:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 3135927 00:21:28.762 16:29:55 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:21:28.762 16:29:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # es=1 00:21:28.762 16:29:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:21:28.762 16:29:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:21:28.762 16:29:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:21:28.762 16:29:55 nvmf_tcp.nvmf_tls -- target/tls.sh@158 -- # killprocess 3130203 00:21:28.762 16:29:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 3130203 ']' 00:21:28.762 16:29:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 3130203 00:21:28.762 16:29:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:21:28.762 16:29:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:21:28.762 16:29:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3130203 00:21:29.023 16:29:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:21:29.023 16:29:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:21:29.023 16:29:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3130203' 00:21:29.023 killing process with pid 3130203 00:21:29.023 16:29:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 3130203 00:21:29.023 [2024-06-07 16:29:55.651300] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:21:29.023 16:29:55 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@973 -- # wait 3130203 00:21:29.023 16:29:55 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:21:29.023 16:29:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@721 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:21:29.023 16:29:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@708 -- # local prefix key digest 00:21:29.023 16:29:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@710 -- # prefix=NVMeTLSkey-1 00:21:29.023 16:29:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@710 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:21:29.023 16:29:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@710 -- # digest=2 00:21:29.023 16:29:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@711 -- # python - 00:21:29.023 16:29:55 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:21:29.023 16:29:55 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # mktemp 00:21:29.023 16:29:55 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.hJNDrTDA8u 00:21:29.023 16:29:55 nvmf_tcp.nvmf_tls -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:21:29.023 16:29:55 nvmf_tcp.nvmf_tls -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.hJNDrTDA8u 00:21:29.023 16:29:55 nvmf_tcp.nvmf_tls -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:21:29.023 16:29:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:29.023 16:29:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@723 -- # xtrace_disable 00:21:29.023 16:29:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:29.023 16:29:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3136139 00:21:29.023 16:29:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3136139 00:21:29.023 16:29:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:29.023 16:29:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 3136139 ']' 00:21:29.023 16:29:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:29.023 16:29:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:21:29.023 16:29:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:29.023 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:29.023 16:29:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:21:29.023 16:29:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:29.284 [2024-06-07 16:29:55.884642] Starting SPDK v24.09-pre git sha1 5a57befde / DPDK 24.03.0 initialization... 00:21:29.284 [2024-06-07 16:29:55.884699] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:29.284 EAL: No free 2048 kB hugepages reported on node 1 00:21:29.284 [2024-06-07 16:29:55.965868] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:29.284 [2024-06-07 16:29:56.018805] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:29.284 [2024-06-07 16:29:56.018836] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:29.284 [2024-06-07 16:29:56.018841] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:29.284 [2024-06-07 16:29:56.018845] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:21:29.284 [2024-06-07 16:29:56.018849] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:29.284 [2024-06-07 16:29:56.018867] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:21:29.855 16:29:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:21:29.855 16:29:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:21:29.855 16:29:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:29.855 16:29:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@729 -- # xtrace_disable 00:21:29.855 16:29:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:29.855 16:29:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:29.855 16:29:56 nvmf_tcp.nvmf_tls -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.hJNDrTDA8u 00:21:29.855 16:29:56 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.hJNDrTDA8u 00:21:29.855 16:29:56 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:30.116 [2024-06-07 16:29:56.800446] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:30.116 16:29:56 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:30.376 16:29:56 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:21:30.376 [2024-06-07 16:29:57.113210] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:30.376 [2024-06-07 16:29:57.113395] tcp.c: 982:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:30.376 
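The `format_interchange_psk` trace above (producing `key_long=NVMeTLSkey-1:02:...`) builds the retained-key interchange form: the configured key bytes plus a 4-byte CRC-32, base64-encoded between a version prefix and a two-digit hash indicator, with a trailing colon. A sketch of that computation, assuming (not confirmed by the log itself) that the checksum is `zlib.crc32` appended little-endian:

```python
import base64
import zlib

def format_interchange_psk(key: str, digest: int) -> str:
    """Sketch of the interchange-key encoding traced in the log above."""
    # Append CRC-32 of the key bytes (little-endian byte order is an
    # assumption) and base64-encode; the prefix/digest framing matches
    # the key_long value printed by the test.
    crc = zlib.crc32(key.encode()).to_bytes(4, byteorder="little")
    b64 = base64.b64encode(key.encode() + crc).decode()
    return f"NVMeTLSkey-1:{digest:02x}:{b64}:"
```

Decoding the logged key confirms the payload layout: the base64 blob is the 48 ASCII characters of the configured key followed by 4 checksum bytes (52 bytes total).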
16:29:57 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:30.637 malloc0 00:21:30.637 16:29:57 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:30.637 16:29:57 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.hJNDrTDA8u 00:21:30.898 [2024-06-07 16:29:57.560351] tcp.c:3685:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:21:30.898 16:29:57 nvmf_tcp.nvmf_tls -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.hJNDrTDA8u 00:21:30.898 16:29:57 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:30.898 16:29:57 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:30.898 16:29:57 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:30.898 16:29:57 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.hJNDrTDA8u' 00:21:30.898 16:29:57 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:30.898 16:29:57 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3136495 00:21:30.898 16:29:57 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:30.898 16:29:57 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3136495 /var/tmp/bdevperf.sock 00:21:30.898 16:29:57 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:30.898 16:29:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' 
-z 3136495 ']' 00:21:30.898 16:29:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:30.898 16:29:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:21:30.898 16:29:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:30.898 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:30.898 16:29:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:21:30.898 16:29:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:30.898 [2024-06-07 16:29:57.623898] Starting SPDK v24.09-pre git sha1 5a57befde / DPDK 24.03.0 initialization... 00:21:30.898 [2024-06-07 16:29:57.623949] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3136495 ] 00:21:30.898 EAL: No free 2048 kB hugepages reported on node 1 00:21:30.898 [2024-06-07 16:29:57.673030] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:30.898 [2024-06-07 16:29:57.725639] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 2 00:21:31.840 16:29:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:21:31.840 16:29:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:21:31.840 16:29:58 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.hJNDrTDA8u 00:21:31.840 [2024-06-07 16:29:58.494450] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 
00:21:31.840 [2024-06-07 16:29:58.494505] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:21:31.840 TLSTESTn1 00:21:31.840 16:29:58 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:21:31.840 Running I/O for 10 seconds... 00:21:44.075 00:21:44.075 Latency(us) 00:21:44.075 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:44.075 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:44.075 Verification LBA range: start 0x0 length 0x2000 00:21:44.075 TLSTESTn1 : 10.02 4075.20 15.92 0.00 0.00 31361.76 4669.44 90876.59 00:21:44.075 =================================================================================================================== 00:21:44.075 Total : 4075.20 15.92 0.00 0.00 31361.76 4669.44 90876.59 00:21:44.075 0 00:21:44.075 16:30:08 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:44.075 16:30:08 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 3136495 00:21:44.075 16:30:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 3136495 ']' 00:21:44.075 16:30:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 3136495 00:21:44.075 16:30:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:21:44.075 16:30:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:21:44.075 16:30:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3136495 00:21:44.075 16:30:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_2 00:21:44.075 16:30:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_2 = sudo ']' 00:21:44.075 16:30:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing 
process with pid 3136495' 00:21:44.075 killing process with pid 3136495 00:21:44.075 16:30:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 3136495 00:21:44.075 Received shutdown signal, test time was about 10.000000 seconds 00:21:44.075 00:21:44.075 Latency(us) 00:21:44.075 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:44.075 =================================================================================================================== 00:21:44.075 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:44.075 [2024-06-07 16:30:08.802073] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:21:44.075 16:30:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 3136495 00:21:44.075 16:30:08 nvmf_tcp.nvmf_tls -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.hJNDrTDA8u 00:21:44.075 16:30:08 nvmf_tcp.nvmf_tls -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.hJNDrTDA8u 00:21:44.075 16:30:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@649 -- # local es=0 00:21:44.075 16:30:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.hJNDrTDA8u 00:21:44.075 16:30:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@637 -- # local arg=run_bdevperf 00:21:44.075 16:30:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:21:44.075 16:30:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # type -t run_bdevperf 00:21:44.075 16:30:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:21:44.075 16:30:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.hJNDrTDA8u 00:21:44.075 16:30:08 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn 
hostnqn psk 00:21:44.075 16:30:08 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:44.075 16:30:08 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:44.075 16:30:08 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.hJNDrTDA8u' 00:21:44.075 16:30:08 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:44.075 16:30:08 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3138804 00:21:44.075 16:30:08 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:44.075 16:30:08 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:44.075 16:30:08 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3138804 /var/tmp/bdevperf.sock 00:21:44.075 16:30:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 3138804 ']' 00:21:44.075 16:30:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:44.075 16:30:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:21:44.075 16:30:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:44.075 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:44.075 16:30:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:21:44.075 16:30:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:44.075 [2024-06-07 16:30:08.971244] Starting SPDK v24.09-pre git sha1 5a57befde / DPDK 24.03.0 initialization... 
00:21:44.075 [2024-06-07 16:30:08.971297] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3138804 ] 00:21:44.075 EAL: No free 2048 kB hugepages reported on node 1 00:21:44.075 [2024-06-07 16:30:09.020052] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:44.075 [2024-06-07 16:30:09.072016] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 2 00:21:44.075 16:30:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:21:44.075 16:30:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:21:44.075 16:30:09 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.hJNDrTDA8u 00:21:44.075 [2024-06-07 16:30:09.868870] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:44.075 [2024-06-07 16:30:09.868904] bdev_nvme.c:6116:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:21:44.075 [2024-06-07 16:30:09.868909] bdev_nvme.c:6225:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.hJNDrTDA8u 00:21:44.075 request: 00:21:44.075 { 00:21:44.075 "name": "TLSTEST", 00:21:44.075 "trtype": "tcp", 00:21:44.075 "traddr": "10.0.0.2", 00:21:44.075 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:44.075 "adrfam": "ipv4", 00:21:44.075 "trsvcid": "4420", 00:21:44.075 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:44.075 "psk": "/tmp/tmp.hJNDrTDA8u", 00:21:44.075 "method": "bdev_nvme_attach_controller", 00:21:44.075 "req_id": 1 00:21:44.075 } 00:21:44.075 Got JSON-RPC error response 00:21:44.075 response: 00:21:44.075 { 00:21:44.075 "code": -1, 00:21:44.075 
"message": "Operation not permitted" 00:21:44.075 } 00:21:44.075 16:30:09 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 3138804 00:21:44.075 16:30:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 3138804 ']' 00:21:44.075 16:30:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 3138804 00:21:44.075 16:30:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:21:44.075 16:30:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:21:44.075 16:30:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3138804 00:21:44.075 16:30:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_2 00:21:44.075 16:30:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_2 = sudo ']' 00:21:44.075 16:30:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3138804' 00:21:44.075 killing process with pid 3138804 00:21:44.075 16:30:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 3138804 00:21:44.075 Received shutdown signal, test time was about 10.000000 seconds 00:21:44.075 00:21:44.075 Latency(us) 00:21:44.075 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:44.075 =================================================================================================================== 00:21:44.076 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:44.076 16:30:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 3138804 00:21:44.076 16:30:10 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:21:44.076 16:30:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # es=1 00:21:44.076 16:30:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:21:44.076 16:30:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:21:44.076 16:30:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@676 -- # (( 
!es == 0 )) 00:21:44.076 16:30:10 nvmf_tcp.nvmf_tls -- target/tls.sh@174 -- # killprocess 3136139 00:21:44.076 16:30:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 3136139 ']' 00:21:44.076 16:30:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 3136139 00:21:44.076 16:30:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:21:44.076 16:30:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:21:44.076 16:30:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3136139 00:21:44.076 16:30:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:21:44.076 16:30:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:21:44.076 16:30:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3136139' 00:21:44.076 killing process with pid 3136139 00:21:44.076 16:30:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 3136139 00:21:44.076 [2024-06-07 16:30:10.101687] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:21:44.076 16:30:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 3136139 00:21:44.076 16:30:10 nvmf_tcp.nvmf_tls -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:21:44.076 16:30:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:44.076 16:30:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@723 -- # xtrace_disable 00:21:44.076 16:30:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:44.076 16:30:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3139468 00:21:44.076 16:30:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3139468 00:21:44.076 16:30:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:44.076 16:30:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 3139468 ']' 00:21:44.076 16:30:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:44.076 16:30:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:21:44.076 16:30:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:44.076 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:44.076 16:30:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:21:44.076 16:30:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:44.076 [2024-06-07 16:30:10.278359] Starting SPDK v24.09-pre git sha1 5a57befde / DPDK 24.03.0 initialization... 00:21:44.076 [2024-06-07 16:30:10.278420] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:44.076 EAL: No free 2048 kB hugepages reported on node 1 00:21:44.076 [2024-06-07 16:30:10.358294] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:44.076 [2024-06-07 16:30:10.411503] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:44.076 [2024-06-07 16:30:10.411535] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:44.076 [2024-06-07 16:30:10.411540] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:44.076 [2024-06-07 16:30:10.411545] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:21:44.076 [2024-06-07 16:30:10.411550] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:44.076 [2024-06-07 16:30:10.411563] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:21:44.336 16:30:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:21:44.336 16:30:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:21:44.336 16:30:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:44.336 16:30:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@729 -- # xtrace_disable 00:21:44.336 16:30:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:44.336 16:30:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:44.337 16:30:11 nvmf_tcp.nvmf_tls -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.hJNDrTDA8u 00:21:44.337 16:30:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@649 -- # local es=0 00:21:44.337 16:30:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.hJNDrTDA8u 00:21:44.337 16:30:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@637 -- # local arg=setup_nvmf_tgt 00:21:44.337 16:30:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:21:44.337 16:30:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # type -t setup_nvmf_tgt 00:21:44.337 16:30:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:21:44.337 16:30:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # setup_nvmf_tgt /tmp/tmp.hJNDrTDA8u 00:21:44.337 16:30:11 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.hJNDrTDA8u 00:21:44.337 16:30:11 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:44.598 [2024-06-07 16:30:11.221163] tcp.c: 
672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:44.598 16:30:11 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:44.598 16:30:11 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:21:44.859 [2024-06-07 16:30:11.525918] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:44.859 [2024-06-07 16:30:11.526095] tcp.c: 982:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:44.859 16:30:11 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:44.859 malloc0 00:21:44.859 16:30:11 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:45.120 16:30:11 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.hJNDrTDA8u 00:21:45.381 [2024-06-07 16:30:11.984874] tcp.c:3595:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:21:45.381 [2024-06-07 16:30:11.984893] tcp.c:3681:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:21:45.381 [2024-06-07 16:30:11.984917] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:21:45.381 request: 00:21:45.381 { 00:21:45.381 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:45.381 "host": "nqn.2016-06.io.spdk:host1", 00:21:45.381 "psk": "/tmp/tmp.hJNDrTDA8u", 00:21:45.381 "method": "nvmf_subsystem_add_host", 00:21:45.381 "req_id": 1 00:21:45.381 } 
00:21:45.381 Got JSON-RPC error response 00:21:45.381 response: 00:21:45.381 { 00:21:45.381 "code": -32603, 00:21:45.381 "message": "Internal error" 00:21:45.381 } 00:21:45.381 16:30:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # es=1 00:21:45.381 16:30:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:21:45.381 16:30:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:21:45.381 16:30:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:21:45.381 16:30:12 nvmf_tcp.nvmf_tls -- target/tls.sh@180 -- # killprocess 3139468 00:21:45.381 16:30:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 3139468 ']' 00:21:45.381 16:30:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 3139468 00:21:45.381 16:30:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:21:45.381 16:30:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:21:45.381 16:30:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3139468 00:21:45.381 16:30:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:21:45.381 16:30:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:21:45.381 16:30:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3139468' 00:21:45.381 killing process with pid 3139468 00:21:45.381 16:30:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 3139468 00:21:45.381 16:30:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 3139468 00:21:45.381 16:30:12 nvmf_tcp.nvmf_tls -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.hJNDrTDA8u 00:21:45.381 16:30:12 nvmf_tcp.nvmf_tls -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:21:45.381 16:30:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:45.381 16:30:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@723 -- 
# xtrace_disable 00:21:45.381 16:30:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:45.381 16:30:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3139986 00:21:45.381 16:30:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3139986 00:21:45.381 16:30:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:45.381 16:30:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 3139986 ']' 00:21:45.381 16:30:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:45.381 16:30:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:21:45.381 16:30:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:45.381 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:45.381 16:30:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:21:45.381 16:30:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:45.642 [2024-06-07 16:30:12.241898] Starting SPDK v24.09-pre git sha1 5a57befde / DPDK 24.03.0 initialization... 00:21:45.642 [2024-06-07 16:30:12.241954] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:45.642 EAL: No free 2048 kB hugepages reported on node 1 00:21:45.642 [2024-06-07 16:30:12.321953] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:45.642 [2024-06-07 16:30:12.374658] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:21:45.642 [2024-06-07 16:30:12.374688] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:45.642 [2024-06-07 16:30:12.374693] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:45.642 [2024-06-07 16:30:12.374698] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:45.642 [2024-06-07 16:30:12.374702] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:45.642 [2024-06-07 16:30:12.374718] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:21:46.212 16:30:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:21:46.212 16:30:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:21:46.212 16:30:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:46.212 16:30:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@729 -- # xtrace_disable 00:21:46.212 16:30:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:46.212 16:30:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:46.212 16:30:13 nvmf_tcp.nvmf_tls -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.hJNDrTDA8u 00:21:46.473 16:30:13 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.hJNDrTDA8u 00:21:46.473 16:30:13 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:46.473 [2024-06-07 16:30:13.204414] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:46.473 16:30:13 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:46.733 16:30:13 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:21:46.733 [2024-06-07 16:30:13.497127] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:46.733 [2024-06-07 16:30:13.497316] tcp.c: 982:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:46.733 16:30:13 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:46.993 malloc0 00:21:46.993 16:30:13 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:46.993 16:30:13 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.hJNDrTDA8u 00:21:47.254 [2024-06-07 16:30:13.932231] tcp.c:3685:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:21:47.254 16:30:13 nvmf_tcp.nvmf_tls -- target/tls.sh@188 -- # bdevperf_pid=3140345 00:21:47.254 16:30:13 nvmf_tcp.nvmf_tls -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:47.254 16:30:13 nvmf_tcp.nvmf_tls -- target/tls.sh@187 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:47.254 16:30:13 nvmf_tcp.nvmf_tls -- target/tls.sh@191 -- # waitforlisten 3140345 /var/tmp/bdevperf.sock 00:21:47.254 16:30:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 3140345 ']' 00:21:47.254 16:30:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:47.254 16:30:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local 
max_retries=100 00:21:47.254 16:30:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:47.254 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:47.254 16:30:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:21:47.254 16:30:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:47.254 [2024-06-07 16:30:13.995265] Starting SPDK v24.09-pre git sha1 5a57befde / DPDK 24.03.0 initialization... 00:21:47.254 [2024-06-07 16:30:13.995312] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3140345 ] 00:21:47.254 EAL: No free 2048 kB hugepages reported on node 1 00:21:47.254 [2024-06-07 16:30:14.044077] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:47.254 [2024-06-07 16:30:14.096395] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 2 00:21:48.196 16:30:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:21:48.196 16:30:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:21:48.196 16:30:14 nvmf_tcp.nvmf_tls -- target/tls.sh@192 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.hJNDrTDA8u 00:21:48.196 [2024-06-07 16:30:14.881201] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:48.196 [2024-06-07 16:30:14.881256] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:21:48.196 TLSTESTn1 
00:21:48.196 16:30:14 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:21:48.457 16:30:15 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # tgtconf='{ 00:21:48.457 "subsystems": [ 00:21:48.457 { 00:21:48.457 "subsystem": "keyring", 00:21:48.457 "config": [] 00:21:48.457 }, 00:21:48.457 { 00:21:48.457 "subsystem": "iobuf", 00:21:48.457 "config": [ 00:21:48.457 { 00:21:48.457 "method": "iobuf_set_options", 00:21:48.457 "params": { 00:21:48.457 "small_pool_count": 8192, 00:21:48.457 "large_pool_count": 1024, 00:21:48.457 "small_bufsize": 8192, 00:21:48.457 "large_bufsize": 135168 00:21:48.457 } 00:21:48.457 } 00:21:48.457 ] 00:21:48.457 }, 00:21:48.457 { 00:21:48.457 "subsystem": "sock", 00:21:48.457 "config": [ 00:21:48.457 { 00:21:48.457 "method": "sock_set_default_impl", 00:21:48.457 "params": { 00:21:48.457 "impl_name": "posix" 00:21:48.457 } 00:21:48.457 }, 00:21:48.457 { 00:21:48.457 "method": "sock_impl_set_options", 00:21:48.457 "params": { 00:21:48.457 "impl_name": "ssl", 00:21:48.457 "recv_buf_size": 4096, 00:21:48.457 "send_buf_size": 4096, 00:21:48.457 "enable_recv_pipe": true, 00:21:48.457 "enable_quickack": false, 00:21:48.457 "enable_placement_id": 0, 00:21:48.457 "enable_zerocopy_send_server": true, 00:21:48.457 "enable_zerocopy_send_client": false, 00:21:48.457 "zerocopy_threshold": 0, 00:21:48.457 "tls_version": 0, 00:21:48.457 "enable_ktls": false, 00:21:48.457 "enable_new_session_tickets": true 00:21:48.457 } 00:21:48.457 }, 00:21:48.457 { 00:21:48.457 "method": "sock_impl_set_options", 00:21:48.457 "params": { 00:21:48.457 "impl_name": "posix", 00:21:48.457 "recv_buf_size": 2097152, 00:21:48.457 "send_buf_size": 2097152, 00:21:48.457 "enable_recv_pipe": true, 00:21:48.457 "enable_quickack": false, 00:21:48.457 "enable_placement_id": 0, 00:21:48.457 "enable_zerocopy_send_server": true, 00:21:48.457 "enable_zerocopy_send_client": false, 00:21:48.457 "zerocopy_threshold": 0, 
00:21:48.457 "tls_version": 0, 00:21:48.457 "enable_ktls": false, 00:21:48.457 "enable_new_session_tickets": false 00:21:48.457 } 00:21:48.457 } 00:21:48.457 ] 00:21:48.457 }, 00:21:48.457 { 00:21:48.457 "subsystem": "vmd", 00:21:48.457 "config": [] 00:21:48.457 }, 00:21:48.457 { 00:21:48.457 "subsystem": "accel", 00:21:48.457 "config": [ 00:21:48.457 { 00:21:48.457 "method": "accel_set_options", 00:21:48.457 "params": { 00:21:48.457 "small_cache_size": 128, 00:21:48.457 "large_cache_size": 16, 00:21:48.457 "task_count": 2048, 00:21:48.457 "sequence_count": 2048, 00:21:48.457 "buf_count": 2048 00:21:48.457 } 00:21:48.457 } 00:21:48.457 ] 00:21:48.457 }, 00:21:48.457 { 00:21:48.457 "subsystem": "bdev", 00:21:48.457 "config": [ 00:21:48.457 { 00:21:48.457 "method": "bdev_set_options", 00:21:48.457 "params": { 00:21:48.457 "bdev_io_pool_size": 65535, 00:21:48.457 "bdev_io_cache_size": 256, 00:21:48.457 "bdev_auto_examine": true, 00:21:48.457 "iobuf_small_cache_size": 128, 00:21:48.457 "iobuf_large_cache_size": 16 00:21:48.457 } 00:21:48.457 }, 00:21:48.457 { 00:21:48.457 "method": "bdev_raid_set_options", 00:21:48.457 "params": { 00:21:48.457 "process_window_size_kb": 1024 00:21:48.457 } 00:21:48.457 }, 00:21:48.457 { 00:21:48.457 "method": "bdev_iscsi_set_options", 00:21:48.457 "params": { 00:21:48.457 "timeout_sec": 30 00:21:48.457 } 00:21:48.457 }, 00:21:48.457 { 00:21:48.457 "method": "bdev_nvme_set_options", 00:21:48.457 "params": { 00:21:48.457 "action_on_timeout": "none", 00:21:48.457 "timeout_us": 0, 00:21:48.457 "timeout_admin_us": 0, 00:21:48.457 "keep_alive_timeout_ms": 10000, 00:21:48.457 "arbitration_burst": 0, 00:21:48.457 "low_priority_weight": 0, 00:21:48.457 "medium_priority_weight": 0, 00:21:48.457 "high_priority_weight": 0, 00:21:48.457 "nvme_adminq_poll_period_us": 10000, 00:21:48.457 "nvme_ioq_poll_period_us": 0, 00:21:48.457 "io_queue_requests": 0, 00:21:48.457 "delay_cmd_submit": true, 00:21:48.457 "transport_retry_count": 4, 00:21:48.457 
"bdev_retry_count": 3, 00:21:48.457 "transport_ack_timeout": 0, 00:21:48.457 "ctrlr_loss_timeout_sec": 0, 00:21:48.457 "reconnect_delay_sec": 0, 00:21:48.457 "fast_io_fail_timeout_sec": 0, 00:21:48.457 "disable_auto_failback": false, 00:21:48.457 "generate_uuids": false, 00:21:48.457 "transport_tos": 0, 00:21:48.457 "nvme_error_stat": false, 00:21:48.457 "rdma_srq_size": 0, 00:21:48.458 "io_path_stat": false, 00:21:48.458 "allow_accel_sequence": false, 00:21:48.458 "rdma_max_cq_size": 0, 00:21:48.458 "rdma_cm_event_timeout_ms": 0, 00:21:48.458 "dhchap_digests": [ 00:21:48.458 "sha256", 00:21:48.458 "sha384", 00:21:48.458 "sha512" 00:21:48.458 ], 00:21:48.458 "dhchap_dhgroups": [ 00:21:48.458 "null", 00:21:48.458 "ffdhe2048", 00:21:48.458 "ffdhe3072", 00:21:48.458 "ffdhe4096", 00:21:48.458 "ffdhe6144", 00:21:48.458 "ffdhe8192" 00:21:48.458 ] 00:21:48.458 } 00:21:48.458 }, 00:21:48.458 { 00:21:48.458 "method": "bdev_nvme_set_hotplug", 00:21:48.458 "params": { 00:21:48.458 "period_us": 100000, 00:21:48.458 "enable": false 00:21:48.458 } 00:21:48.458 }, 00:21:48.458 { 00:21:48.458 "method": "bdev_malloc_create", 00:21:48.458 "params": { 00:21:48.458 "name": "malloc0", 00:21:48.458 "num_blocks": 8192, 00:21:48.458 "block_size": 4096, 00:21:48.458 "physical_block_size": 4096, 00:21:48.458 "uuid": "9e7c87de-9c0c-435b-b9b6-c8af46e5821c", 00:21:48.458 "optimal_io_boundary": 0 00:21:48.458 } 00:21:48.458 }, 00:21:48.458 { 00:21:48.458 "method": "bdev_wait_for_examine" 00:21:48.458 } 00:21:48.458 ] 00:21:48.458 }, 00:21:48.458 { 00:21:48.458 "subsystem": "nbd", 00:21:48.458 "config": [] 00:21:48.458 }, 00:21:48.458 { 00:21:48.458 "subsystem": "scheduler", 00:21:48.458 "config": [ 00:21:48.458 { 00:21:48.458 "method": "framework_set_scheduler", 00:21:48.458 "params": { 00:21:48.458 "name": "static" 00:21:48.458 } 00:21:48.458 } 00:21:48.458 ] 00:21:48.458 }, 00:21:48.458 { 00:21:48.458 "subsystem": "nvmf", 00:21:48.458 "config": [ 00:21:48.458 { 00:21:48.458 "method": 
"nvmf_set_config", 00:21:48.458 "params": { 00:21:48.458 "discovery_filter": "match_any", 00:21:48.458 "admin_cmd_passthru": { 00:21:48.458 "identify_ctrlr": false 00:21:48.458 } 00:21:48.458 } 00:21:48.458 }, 00:21:48.458 { 00:21:48.458 "method": "nvmf_set_max_subsystems", 00:21:48.458 "params": { 00:21:48.458 "max_subsystems": 1024 00:21:48.458 } 00:21:48.458 }, 00:21:48.458 { 00:21:48.458 "method": "nvmf_set_crdt", 00:21:48.458 "params": { 00:21:48.458 "crdt1": 0, 00:21:48.458 "crdt2": 0, 00:21:48.458 "crdt3": 0 00:21:48.458 } 00:21:48.458 }, 00:21:48.458 { 00:21:48.458 "method": "nvmf_create_transport", 00:21:48.458 "params": { 00:21:48.458 "trtype": "TCP", 00:21:48.458 "max_queue_depth": 128, 00:21:48.458 "max_io_qpairs_per_ctrlr": 127, 00:21:48.458 "in_capsule_data_size": 4096, 00:21:48.458 "max_io_size": 131072, 00:21:48.458 "io_unit_size": 131072, 00:21:48.458 "max_aq_depth": 128, 00:21:48.458 "num_shared_buffers": 511, 00:21:48.458 "buf_cache_size": 4294967295, 00:21:48.458 "dif_insert_or_strip": false, 00:21:48.458 "zcopy": false, 00:21:48.458 "c2h_success": false, 00:21:48.458 "sock_priority": 0, 00:21:48.458 "abort_timeout_sec": 1, 00:21:48.458 "ack_timeout": 0, 00:21:48.458 "data_wr_pool_size": 0 00:21:48.458 } 00:21:48.458 }, 00:21:48.458 { 00:21:48.458 "method": "nvmf_create_subsystem", 00:21:48.458 "params": { 00:21:48.458 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:48.458 "allow_any_host": false, 00:21:48.458 "serial_number": "SPDK00000000000001", 00:21:48.458 "model_number": "SPDK bdev Controller", 00:21:48.458 "max_namespaces": 10, 00:21:48.458 "min_cntlid": 1, 00:21:48.458 "max_cntlid": 65519, 00:21:48.458 "ana_reporting": false 00:21:48.458 } 00:21:48.458 }, 00:21:48.458 { 00:21:48.458 "method": "nvmf_subsystem_add_host", 00:21:48.458 "params": { 00:21:48.458 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:48.458 "host": "nqn.2016-06.io.spdk:host1", 00:21:48.458 "psk": "/tmp/tmp.hJNDrTDA8u" 00:21:48.458 } 00:21:48.458 }, 00:21:48.458 { 00:21:48.458 
"method": "nvmf_subsystem_add_ns", 00:21:48.458 "params": { 00:21:48.458 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:48.458 "namespace": { 00:21:48.458 "nsid": 1, 00:21:48.458 "bdev_name": "malloc0", 00:21:48.458 "nguid": "9E7C87DE9C0C435BB9B6C8AF46E5821C", 00:21:48.458 "uuid": "9e7c87de-9c0c-435b-b9b6-c8af46e5821c", 00:21:48.458 "no_auto_visible": false 00:21:48.458 } 00:21:48.458 } 00:21:48.458 }, 00:21:48.458 { 00:21:48.458 "method": "nvmf_subsystem_add_listener", 00:21:48.458 "params": { 00:21:48.458 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:48.458 "listen_address": { 00:21:48.458 "trtype": "TCP", 00:21:48.458 "adrfam": "IPv4", 00:21:48.458 "traddr": "10.0.0.2", 00:21:48.458 "trsvcid": "4420" 00:21:48.458 }, 00:21:48.458 "secure_channel": true 00:21:48.458 } 00:21:48.458 } 00:21:48.458 ] 00:21:48.458 } 00:21:48.458 ] 00:21:48.458 }' 00:21:48.458 16:30:15 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:21:48.719 16:30:15 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # bdevperfconf='{ 00:21:48.719 "subsystems": [ 00:21:48.719 { 00:21:48.719 "subsystem": "keyring", 00:21:48.719 "config": [] 00:21:48.719 }, 00:21:48.719 { 00:21:48.719 "subsystem": "iobuf", 00:21:48.719 "config": [ 00:21:48.719 { 00:21:48.719 "method": "iobuf_set_options", 00:21:48.719 "params": { 00:21:48.719 "small_pool_count": 8192, 00:21:48.719 "large_pool_count": 1024, 00:21:48.719 "small_bufsize": 8192, 00:21:48.719 "large_bufsize": 135168 00:21:48.719 } 00:21:48.719 } 00:21:48.719 ] 00:21:48.719 }, 00:21:48.719 { 00:21:48.719 "subsystem": "sock", 00:21:48.719 "config": [ 00:21:48.719 { 00:21:48.719 "method": "sock_set_default_impl", 00:21:48.719 "params": { 00:21:48.720 "impl_name": "posix" 00:21:48.720 } 00:21:48.720 }, 00:21:48.720 { 00:21:48.720 "method": "sock_impl_set_options", 00:21:48.720 "params": { 00:21:48.720 "impl_name": "ssl", 00:21:48.720 "recv_buf_size": 4096, 00:21:48.720 
"send_buf_size": 4096, 00:21:48.720 "enable_recv_pipe": true, 00:21:48.720 "enable_quickack": false, 00:21:48.720 "enable_placement_id": 0, 00:21:48.720 "enable_zerocopy_send_server": true, 00:21:48.720 "enable_zerocopy_send_client": false, 00:21:48.720 "zerocopy_threshold": 0, 00:21:48.720 "tls_version": 0, 00:21:48.720 "enable_ktls": false, 00:21:48.720 "enable_new_session_tickets": true 00:21:48.720 } 00:21:48.720 }, 00:21:48.720 { 00:21:48.720 "method": "sock_impl_set_options", 00:21:48.720 "params": { 00:21:48.720 "impl_name": "posix", 00:21:48.720 "recv_buf_size": 2097152, 00:21:48.720 "send_buf_size": 2097152, 00:21:48.720 "enable_recv_pipe": true, 00:21:48.720 "enable_quickack": false, 00:21:48.720 "enable_placement_id": 0, 00:21:48.720 "enable_zerocopy_send_server": true, 00:21:48.720 "enable_zerocopy_send_client": false, 00:21:48.720 "zerocopy_threshold": 0, 00:21:48.720 "tls_version": 0, 00:21:48.720 "enable_ktls": false, 00:21:48.720 "enable_new_session_tickets": false 00:21:48.720 } 00:21:48.720 } 00:21:48.720 ] 00:21:48.720 }, 00:21:48.720 { 00:21:48.720 "subsystem": "vmd", 00:21:48.720 "config": [] 00:21:48.720 }, 00:21:48.720 { 00:21:48.720 "subsystem": "accel", 00:21:48.720 "config": [ 00:21:48.720 { 00:21:48.720 "method": "accel_set_options", 00:21:48.720 "params": { 00:21:48.720 "small_cache_size": 128, 00:21:48.720 "large_cache_size": 16, 00:21:48.720 "task_count": 2048, 00:21:48.720 "sequence_count": 2048, 00:21:48.720 "buf_count": 2048 00:21:48.720 } 00:21:48.720 } 00:21:48.720 ] 00:21:48.720 }, 00:21:48.720 { 00:21:48.720 "subsystem": "bdev", 00:21:48.720 "config": [ 00:21:48.720 { 00:21:48.720 "method": "bdev_set_options", 00:21:48.720 "params": { 00:21:48.720 "bdev_io_pool_size": 65535, 00:21:48.720 "bdev_io_cache_size": 256, 00:21:48.720 "bdev_auto_examine": true, 00:21:48.720 "iobuf_small_cache_size": 128, 00:21:48.720 "iobuf_large_cache_size": 16 00:21:48.720 } 00:21:48.720 }, 00:21:48.720 { 00:21:48.720 "method": 
"bdev_raid_set_options", 00:21:48.720 "params": { 00:21:48.720 "process_window_size_kb": 1024 00:21:48.720 } 00:21:48.720 }, 00:21:48.720 { 00:21:48.720 "method": "bdev_iscsi_set_options", 00:21:48.720 "params": { 00:21:48.720 "timeout_sec": 30 00:21:48.720 } 00:21:48.720 }, 00:21:48.720 { 00:21:48.720 "method": "bdev_nvme_set_options", 00:21:48.720 "params": { 00:21:48.720 "action_on_timeout": "none", 00:21:48.720 "timeout_us": 0, 00:21:48.720 "timeout_admin_us": 0, 00:21:48.720 "keep_alive_timeout_ms": 10000, 00:21:48.720 "arbitration_burst": 0, 00:21:48.720 "low_priority_weight": 0, 00:21:48.720 "medium_priority_weight": 0, 00:21:48.720 "high_priority_weight": 0, 00:21:48.720 "nvme_adminq_poll_period_us": 10000, 00:21:48.720 "nvme_ioq_poll_period_us": 0, 00:21:48.720 "io_queue_requests": 512, 00:21:48.720 "delay_cmd_submit": true, 00:21:48.720 "transport_retry_count": 4, 00:21:48.720 "bdev_retry_count": 3, 00:21:48.720 "transport_ack_timeout": 0, 00:21:48.720 "ctrlr_loss_timeout_sec": 0, 00:21:48.720 "reconnect_delay_sec": 0, 00:21:48.720 "fast_io_fail_timeout_sec": 0, 00:21:48.720 "disable_auto_failback": false, 00:21:48.720 "generate_uuids": false, 00:21:48.720 "transport_tos": 0, 00:21:48.720 "nvme_error_stat": false, 00:21:48.720 "rdma_srq_size": 0, 00:21:48.720 "io_path_stat": false, 00:21:48.720 "allow_accel_sequence": false, 00:21:48.720 "rdma_max_cq_size": 0, 00:21:48.720 "rdma_cm_event_timeout_ms": 0, 00:21:48.720 "dhchap_digests": [ 00:21:48.720 "sha256", 00:21:48.720 "sha384", 00:21:48.720 "sha512" 00:21:48.720 ], 00:21:48.720 "dhchap_dhgroups": [ 00:21:48.720 "null", 00:21:48.720 "ffdhe2048", 00:21:48.720 "ffdhe3072", 00:21:48.720 "ffdhe4096", 00:21:48.720 "ffdhe6144", 00:21:48.720 "ffdhe8192" 00:21:48.720 ] 00:21:48.720 } 00:21:48.720 }, 00:21:48.720 { 00:21:48.720 "method": "bdev_nvme_attach_controller", 00:21:48.720 "params": { 00:21:48.720 "name": "TLSTEST", 00:21:48.720 "trtype": "TCP", 00:21:48.720 "adrfam": "IPv4", 00:21:48.720 "traddr": 
"10.0.0.2", 00:21:48.720 "trsvcid": "4420", 00:21:48.720 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:48.720 "prchk_reftag": false, 00:21:48.720 "prchk_guard": false, 00:21:48.720 "ctrlr_loss_timeout_sec": 0, 00:21:48.720 "reconnect_delay_sec": 0, 00:21:48.720 "fast_io_fail_timeout_sec": 0, 00:21:48.720 "psk": "/tmp/tmp.hJNDrTDA8u", 00:21:48.720 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:48.720 "hdgst": false, 00:21:48.720 "ddgst": false 00:21:48.720 } 00:21:48.720 }, 00:21:48.720 { 00:21:48.720 "method": "bdev_nvme_set_hotplug", 00:21:48.720 "params": { 00:21:48.720 "period_us": 100000, 00:21:48.720 "enable": false 00:21:48.720 } 00:21:48.720 }, 00:21:48.720 { 00:21:48.720 "method": "bdev_wait_for_examine" 00:21:48.720 } 00:21:48.720 ] 00:21:48.720 }, 00:21:48.720 { 00:21:48.720 "subsystem": "nbd", 00:21:48.720 "config": [] 00:21:48.720 } 00:21:48.720 ] 00:21:48.720 }' 00:21:48.720 16:30:15 nvmf_tcp.nvmf_tls -- target/tls.sh@199 -- # killprocess 3140345 00:21:48.720 16:30:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 3140345 ']' 00:21:48.720 16:30:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 3140345 00:21:48.720 16:30:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:21:48.720 16:30:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:21:48.720 16:30:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3140345 00:21:48.720 16:30:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_2 00:21:48.720 16:30:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_2 = sudo ']' 00:21:48.720 16:30:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3140345' 00:21:48.720 killing process with pid 3140345 00:21:48.720 16:30:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 3140345 00:21:48.720 Received shutdown signal, test time was about 10.000000 seconds 
00:21:48.720 00:21:48.720 Latency(us) 00:21:48.720 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:48.720 =================================================================================================================== 00:21:48.720 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:48.720 [2024-06-07 16:30:15.510728] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:21:48.720 16:30:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 3140345 00:21:48.986 16:30:15 nvmf_tcp.nvmf_tls -- target/tls.sh@200 -- # killprocess 3139986 00:21:48.986 16:30:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 3139986 ']' 00:21:48.986 16:30:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 3139986 00:21:48.986 16:30:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:21:48.986 16:30:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:21:48.986 16:30:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3139986 00:21:48.986 16:30:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:21:48.986 16:30:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:21:48.986 16:30:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3139986' 00:21:48.986 killing process with pid 3139986 00:21:48.986 16:30:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 3139986 00:21:48.986 [2024-06-07 16:30:15.675465] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:21:48.986 16:30:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 3139986 00:21:48.986 16:30:15 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 
00:21:48.986 16:30:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:48.986 16:30:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@723 -- # xtrace_disable 00:21:48.986 16:30:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:48.986 16:30:15 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # echo '{ 00:21:48.986 "subsystems": [ 00:21:48.986 { 00:21:48.986 "subsystem": "keyring", 00:21:48.986 "config": [] 00:21:48.986 }, 00:21:48.986 { 00:21:48.986 "subsystem": "iobuf", 00:21:48.986 "config": [ 00:21:48.986 { 00:21:48.986 "method": "iobuf_set_options", 00:21:48.986 "params": { 00:21:48.986 "small_pool_count": 8192, 00:21:48.986 "large_pool_count": 1024, 00:21:48.986 "small_bufsize": 8192, 00:21:48.986 "large_bufsize": 135168 00:21:48.986 } 00:21:48.986 } 00:21:48.986 ] 00:21:48.987 }, 00:21:48.987 { 00:21:48.987 "subsystem": "sock", 00:21:48.987 "config": [ 00:21:48.987 { 00:21:48.987 "method": "sock_set_default_impl", 00:21:48.987 "params": { 00:21:48.987 "impl_name": "posix" 00:21:48.987 } 00:21:48.987 }, 00:21:48.987 { 00:21:48.987 "method": "sock_impl_set_options", 00:21:48.987 "params": { 00:21:48.987 "impl_name": "ssl", 00:21:48.987 "recv_buf_size": 4096, 00:21:48.987 "send_buf_size": 4096, 00:21:48.987 "enable_recv_pipe": true, 00:21:48.987 "enable_quickack": false, 00:21:48.987 "enable_placement_id": 0, 00:21:48.987 "enable_zerocopy_send_server": true, 00:21:48.987 "enable_zerocopy_send_client": false, 00:21:48.987 "zerocopy_threshold": 0, 00:21:48.987 "tls_version": 0, 00:21:48.987 "enable_ktls": false, 00:21:48.987 "enable_new_session_tickets": true 00:21:48.987 } 00:21:48.987 }, 00:21:48.987 { 00:21:48.987 "method": "sock_impl_set_options", 00:21:48.987 "params": { 00:21:48.987 "impl_name": "posix", 00:21:48.987 "recv_buf_size": 2097152, 00:21:48.987 "send_buf_size": 2097152, 00:21:48.987 "enable_recv_pipe": true, 00:21:48.987 "enable_quickack": false, 00:21:48.987 "enable_placement_id": 0, 00:21:48.987 
"enable_zerocopy_send_server": true, 00:21:48.987 "enable_zerocopy_send_client": false, 00:21:48.987 "zerocopy_threshold": 0, 00:21:48.987 "tls_version": 0, 00:21:48.987 "enable_ktls": false, 00:21:48.987 "enable_new_session_tickets": false 00:21:48.987 } 00:21:48.987 } 00:21:48.987 ] 00:21:48.987 }, 00:21:48.987 { 00:21:48.987 "subsystem": "vmd", 00:21:48.987 "config": [] 00:21:48.987 }, 00:21:48.987 { 00:21:48.987 "subsystem": "accel", 00:21:48.987 "config": [ 00:21:48.987 { 00:21:48.987 "method": "accel_set_options", 00:21:48.987 "params": { 00:21:48.987 "small_cache_size": 128, 00:21:48.987 "large_cache_size": 16, 00:21:48.987 "task_count": 2048, 00:21:48.987 "sequence_count": 2048, 00:21:48.987 "buf_count": 2048 00:21:48.987 } 00:21:48.987 } 00:21:48.987 ] 00:21:48.987 }, 00:21:48.987 { 00:21:48.987 "subsystem": "bdev", 00:21:48.987 "config": [ 00:21:48.987 { 00:21:48.987 "method": "bdev_set_options", 00:21:48.987 "params": { 00:21:48.987 "bdev_io_pool_size": 65535, 00:21:48.987 "bdev_io_cache_size": 256, 00:21:48.987 "bdev_auto_examine": true, 00:21:48.987 "iobuf_small_cache_size": 128, 00:21:48.987 "iobuf_large_cache_size": 16 00:21:48.987 } 00:21:48.987 }, 00:21:48.987 { 00:21:48.987 "method": "bdev_raid_set_options", 00:21:48.987 "params": { 00:21:48.987 "process_window_size_kb": 1024 00:21:48.987 } 00:21:48.987 }, 00:21:48.987 { 00:21:48.987 "method": "bdev_iscsi_set_options", 00:21:48.987 "params": { 00:21:48.987 "timeout_sec": 30 00:21:48.987 } 00:21:48.987 }, 00:21:48.987 { 00:21:48.987 "method": "bdev_nvme_set_options", 00:21:48.987 "params": { 00:21:48.987 "action_on_timeout": "none", 00:21:48.987 "timeout_us": 0, 00:21:48.987 "timeout_admin_us": 0, 00:21:48.987 "keep_alive_timeout_ms": 10000, 00:21:48.987 "arbitration_burst": 0, 00:21:48.987 "low_priority_weight": 0, 00:21:48.987 "medium_priority_weight": 0, 00:21:48.987 "high_priority_weight": 0, 00:21:48.987 "nvme_adminq_poll_period_us": 10000, 00:21:48.987 "nvme_ioq_poll_period_us": 0, 
00:21:48.987 "io_queue_requests": 0, 00:21:48.987 "delay_cmd_submit": true, 00:21:48.987 "transport_retry_count": 4, 00:21:48.987 "bdev_retry_count": 3, 00:21:48.987 "transport_ack_timeout": 0, 00:21:48.987 "ctrlr_loss_timeout_sec": 0, 00:21:48.987 "reconnect_delay_sec": 0, 00:21:48.987 "fast_io_fail_timeout_sec": 0, 00:21:48.987 "disable_auto_failback": false, 00:21:48.987 "generate_uuids": false, 00:21:48.987 "transport_tos": 0, 00:21:48.987 "nvme_error_stat": false, 00:21:48.987 "rdma_srq_size": 0, 00:21:48.987 "io_path_stat": false, 00:21:48.987 "allow_accel_sequence": false, 00:21:48.987 "rdma_max_cq_size": 0, 00:21:48.987 "rdma_cm_event_timeout_ms": 0, 00:21:48.987 "dhchap_digests": [ 00:21:48.987 "sha256", 00:21:48.987 "sha384", 00:21:48.987 "sha512" 00:21:48.987 ], 00:21:48.987 "dhchap_dhgroups": [ 00:21:48.987 "null", 00:21:48.987 "ffdhe2048", 00:21:48.987 "ffdhe3072", 00:21:48.987 "ffdhe4096", 00:21:48.987 "ffdhe6144", 00:21:48.987 "ffdhe8192" 00:21:48.987 ] 00:21:48.987 } 00:21:48.987 }, 00:21:48.987 { 00:21:48.987 "method": "bdev_nvme_set_hotplug", 00:21:48.987 "params": { 00:21:48.987 "period_us": 100000, 00:21:48.987 "enable": false 00:21:48.987 } 00:21:48.987 }, 00:21:48.987 { 00:21:48.987 "method": "bdev_malloc_create", 00:21:48.987 "params": { 00:21:48.987 "name": "malloc0", 00:21:48.987 "num_blocks": 8192, 00:21:48.987 "block_size": 4096, 00:21:48.987 "physical_block_size": 4096, 00:21:48.987 "uuid": "9e7c87de-9c0c-435b-b9b6-c8af46e5821c", 00:21:48.987 "optimal_io_boundary": 0 00:21:48.987 } 00:21:48.987 }, 00:21:48.987 { 00:21:48.987 "method": "bdev_wait_for_examine" 00:21:48.987 } 00:21:48.987 ] 00:21:48.987 }, 00:21:48.987 { 00:21:48.987 "subsystem": "nbd", 00:21:48.987 "config": [] 00:21:48.987 }, 00:21:48.987 { 00:21:48.987 "subsystem": "scheduler", 00:21:48.987 "config": [ 00:21:48.987 { 00:21:48.987 "method": "framework_set_scheduler", 00:21:48.987 "params": { 00:21:48.987 "name": "static" 00:21:48.987 } 00:21:48.987 } 00:21:48.987 ] 
00:21:48.987 }, 00:21:48.987 { 00:21:48.987 "subsystem": "nvmf", 00:21:48.987 "config": [ 00:21:48.987 { 00:21:48.987 "method": "nvmf_set_config", 00:21:48.987 "params": { 00:21:48.987 "discovery_filter": "match_any", 00:21:48.987 "admin_cmd_passthru": { 00:21:48.987 "identify_ctrlr": false 00:21:48.987 } 00:21:48.987 } 00:21:48.987 }, 00:21:48.987 { 00:21:48.987 "method": "nvmf_set_max_subsystems", 00:21:48.987 "params": { 00:21:48.987 "max_subsystems": 1024 00:21:48.987 } 00:21:48.987 }, 00:21:48.987 { 00:21:48.987 "method": "nvmf_set_crdt", 00:21:48.987 "params": { 00:21:48.987 "crdt1": 0, 00:21:48.987 "crdt2": 0, 00:21:48.987 "crdt3": 0 00:21:48.987 } 00:21:48.987 }, 00:21:48.987 { 00:21:48.987 "method": "nvmf_create_transport", 00:21:48.987 "params": { 00:21:48.987 "trtype": "TCP", 00:21:48.987 "max_queue_depth": 128, 00:21:48.987 "max_io_qpairs_per_ctrlr": 127, 00:21:48.987 "in_capsule_data_size": 4096, 00:21:48.987 "max_io_size": 131072, 00:21:48.987 "io_unit_size": 131072, 00:21:48.987 "max_aq_depth": 128, 00:21:48.987 "num_shared_buffers": 511, 00:21:48.987 "buf_cache_size": 4294967295, 00:21:48.987 "dif_insert_or_strip": false, 00:21:48.987 "zcopy": false, 00:21:48.987 "c2h_success": false, 00:21:48.987 "sock_priority": 0, 00:21:48.987 "abort_timeout_sec": 1, 00:21:48.987 "ack_timeout": 0, 00:21:48.987 "data_wr_pool_size": 0 00:21:48.987 } 00:21:48.987 }, 00:21:48.987 { 00:21:48.987 "method": "nvmf_create_subsystem", 00:21:48.987 "params": { 00:21:48.987 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:48.987 "allow_any_host": false, 00:21:48.987 "serial_number": "SPDK00000000000001", 00:21:48.987 "model_number": "SPDK bdev Controller", 00:21:48.987 "max_namespaces": 10, 00:21:48.987 "min_cntlid": 1, 00:21:48.987 "max_cntlid": 65519, 00:21:48.987 "ana_reporting": false 00:21:48.987 } 00:21:48.987 }, 00:21:48.987 { 00:21:48.987 "method": "nvmf_subsystem_add_host", 00:21:48.987 "params": { 00:21:48.987 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:48.987 "host": 
"nqn.2016-06.io.spdk:host1", 00:21:48.987 "psk": "/tmp/tmp.hJNDrTDA8u" 00:21:48.987 } 00:21:48.987 }, 00:21:48.987 { 00:21:48.987 "method": "nvmf_subsystem_add_ns", 00:21:48.987 "params": { 00:21:48.987 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:48.987 "namespace": { 00:21:48.987 "nsid": 1, 00:21:48.987 "bdev_name": "malloc0", 00:21:48.987 "nguid": "9E7C87DE9C0C435BB9B6C8AF46E5821C", 00:21:48.987 "uuid": "9e7c87de-9c0c-435b-b9b6-c8af46e5821c", 00:21:48.987 "no_auto_visible": false 00:21:48.987 } 00:21:48.987 } 00:21:48.987 }, 00:21:48.987 { 00:21:48.987 "method": "nvmf_subsystem_add_listener", 00:21:48.987 "params": { 00:21:48.987 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:48.987 "listen_address": { 00:21:48.987 "trtype": "TCP", 00:21:48.987 "adrfam": "IPv4", 00:21:48.987 "traddr": "10.0.0.2", 00:21:48.987 "trsvcid": "4420" 00:21:48.987 }, 00:21:48.987 "secure_channel": true 00:21:48.987 } 00:21:48.987 } 00:21:48.987 ] 00:21:48.987 } 00:21:48.987 ] 00:21:48.987 }' 00:21:48.987 16:30:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3140704 00:21:48.987 16:30:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3140704 00:21:48.987 16:30:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:21:48.987 16:30:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 3140704 ']' 00:21:48.987 16:30:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:48.987 16:30:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:21:48.987 16:30:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:48.988 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:21:48.988 16:30:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:21:48.988 16:30:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:49.261 [2024-06-07 16:30:15.856592] Starting SPDK v24.09-pre git sha1 5a57befde / DPDK 24.03.0 initialization... 00:21:49.261 [2024-06-07 16:30:15.856651] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:49.261 EAL: No free 2048 kB hugepages reported on node 1 00:21:49.261 [2024-06-07 16:30:15.938978] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:49.261 [2024-06-07 16:30:15.991865] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:49.261 [2024-06-07 16:30:15.991896] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:49.261 [2024-06-07 16:30:15.991902] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:49.261 [2024-06-07 16:30:15.991906] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:49.261 [2024-06-07 16:30:15.991914] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:49.261 [2024-06-07 16:30:15.991959] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:21:49.530 [2024-06-07 16:30:16.175227] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:49.530 [2024-06-07 16:30:16.191202] tcp.c:3685:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:21:49.530 [2024-06-07 16:30:16.207249] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:49.530 [2024-06-07 16:30:16.220699] tcp.c: 982:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:49.790 16:30:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:21:49.790 16:30:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:21:49.790 16:30:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:49.790 16:30:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@729 -- # xtrace_disable 00:21:49.790 16:30:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:50.051 16:30:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:50.051 16:30:16 nvmf_tcp.nvmf_tls -- target/tls.sh@207 -- # bdevperf_pid=3140862 00:21:50.051 16:30:16 nvmf_tcp.nvmf_tls -- target/tls.sh@208 -- # waitforlisten 3140862 /var/tmp/bdevperf.sock 00:21:50.051 16:30:16 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # echo '{ 00:21:50.051 "subsystems": [ 00:21:50.051 { 00:21:50.051 "subsystem": "keyring", 00:21:50.051 "config": [] 00:21:50.051 }, 00:21:50.051 { 00:21:50.051 "subsystem": "iobuf", 00:21:50.051 "config": [ 00:21:50.051 { 00:21:50.051 "method": "iobuf_set_options", 00:21:50.051 "params": { 00:21:50.051 "small_pool_count": 8192, 00:21:50.051 "large_pool_count": 1024, 00:21:50.051 "small_bufsize": 8192, 00:21:50.051 "large_bufsize": 135168 00:21:50.051 } 00:21:50.051 } 00:21:50.051 ] 00:21:50.051 }, 
00:21:50.051 { 00:21:50.051 "subsystem": "sock", 00:21:50.051 "config": [ 00:21:50.051 { 00:21:50.051 "method": "sock_set_default_impl", 00:21:50.051 "params": { 00:21:50.051 "impl_name": "posix" 00:21:50.051 } 00:21:50.051 }, 00:21:50.051 { 00:21:50.051 "method": "sock_impl_set_options", 00:21:50.051 "params": { 00:21:50.051 "impl_name": "ssl", 00:21:50.051 "recv_buf_size": 4096, 00:21:50.051 "send_buf_size": 4096, 00:21:50.051 "enable_recv_pipe": true, 00:21:50.051 "enable_quickack": false, 00:21:50.052 "enable_placement_id": 0, 00:21:50.052 "enable_zerocopy_send_server": true, 00:21:50.052 "enable_zerocopy_send_client": false, 00:21:50.052 "zerocopy_threshold": 0, 00:21:50.052 "tls_version": 0, 00:21:50.052 "enable_ktls": false, 00:21:50.052 "enable_new_session_tickets": true 00:21:50.052 } 00:21:50.052 }, 00:21:50.052 { 00:21:50.052 "method": "sock_impl_set_options", 00:21:50.052 "params": { 00:21:50.052 "impl_name": "posix", 00:21:50.052 "recv_buf_size": 2097152, 00:21:50.052 "send_buf_size": 2097152, 00:21:50.052 "enable_recv_pipe": true, 00:21:50.052 "enable_quickack": false, 00:21:50.052 "enable_placement_id": 0, 00:21:50.052 "enable_zerocopy_send_server": true, 00:21:50.052 "enable_zerocopy_send_client": false, 00:21:50.052 "zerocopy_threshold": 0, 00:21:50.052 "tls_version": 0, 00:21:50.052 "enable_ktls": false, 00:21:50.052 "enable_new_session_tickets": false 00:21:50.052 } 00:21:50.052 } 00:21:50.052 ] 00:21:50.052 }, 00:21:50.052 { 00:21:50.052 "subsystem": "vmd", 00:21:50.052 "config": [] 00:21:50.052 }, 00:21:50.052 { 00:21:50.052 "subsystem": "accel", 00:21:50.052 "config": [ 00:21:50.052 { 00:21:50.052 "method": "accel_set_options", 00:21:50.052 "params": { 00:21:50.052 "small_cache_size": 128, 00:21:50.052 "large_cache_size": 16, 00:21:50.052 "task_count": 2048, 00:21:50.052 "sequence_count": 2048, 00:21:50.052 "buf_count": 2048 00:21:50.052 } 00:21:50.052 } 00:21:50.052 ] 00:21:50.052 }, 00:21:50.052 { 00:21:50.052 "subsystem": "bdev", 
00:21:50.052 "config": [ 00:21:50.052 { 00:21:50.052 "method": "bdev_set_options", 00:21:50.052 "params": { 00:21:50.052 "bdev_io_pool_size": 65535, 00:21:50.052 "bdev_io_cache_size": 256, 00:21:50.052 "bdev_auto_examine": true, 00:21:50.052 "iobuf_small_cache_size": 128, 00:21:50.052 "iobuf_large_cache_size": 16 00:21:50.052 } 00:21:50.052 }, 00:21:50.052 { 00:21:50.052 "method": "bdev_raid_set_options", 00:21:50.052 "params": { 00:21:50.052 "process_window_size_kb": 1024 00:21:50.052 } 00:21:50.052 }, 00:21:50.052 { 00:21:50.052 "method": "bdev_iscsi_set_options", 00:21:50.052 "params": { 00:21:50.052 "timeout_sec": 30 00:21:50.052 } 00:21:50.052 }, 00:21:50.052 { 00:21:50.052 "method": "bdev_nvme_set_options", 00:21:50.052 "params": { 00:21:50.052 "action_on_timeout": "none", 00:21:50.052 "timeout_us": 0, 00:21:50.052 "timeout_admin_us": 0, 00:21:50.052 "keep_alive_timeout_ms": 10000, 00:21:50.052 "arbitration_burst": 0, 00:21:50.052 "low_priority_weight": 0, 00:21:50.052 "medium_priority_weight": 0, 00:21:50.052 "high_priority_weight": 0, 00:21:50.052 "nvme_adminq_poll_period_us": 10000, 00:21:50.052 "nvme_ioq_poll_period_us": 0, 00:21:50.052 "io_queue_requests": 512, 00:21:50.052 "delay_cmd_submit": true, 00:21:50.052 "transport_retry_count": 4, 00:21:50.052 "bdev_retry_count": 3, 00:21:50.052 "transport_ack_timeout": 0, 00:21:50.052 "ctrlr_loss_timeout_sec": 0, 00:21:50.052 "reconnect_delay_sec": 0, 00:21:50.052 "fast_io_fail_timeout_sec": 0, 00:21:50.052 "disable_auto_failback": false, 00:21:50.052 "generate_uuids": false, 00:21:50.052 "transport_tos": 0, 00:21:50.052 "nvme_error_stat": false, 00:21:50.052 "rdma_srq_size": 0, 00:21:50.052 "io_path_stat": false, 00:21:50.052 "allow_accel_sequence": false, 00:21:50.052 "rdma_max_cq_size": 0, 00:21:50.052 "rdma_cm_event_timeout_ms": 0, 00:21:50.052 "dhchap_digests": [ 00:21:50.052 "sha256", 00:21:50.052 "sha384", 00:21:50.052 "sha512" 00:21:50.052 ], 00:21:50.052 "dhchap_dhgroups": [ 00:21:50.052 "null", 
00:21:50.052 "ffdhe2048", 00:21:50.052 "ffdhe3072", 00:21:50.052 "ffdhe4096", 00:21:50.052 "ffdhe6144", 00:21:50.052 "ffdhe8192" 00:21:50.052 ] 00:21:50.052 } 00:21:50.052 }, 00:21:50.052 { 00:21:50.052 "method": "bdev_nvme_attach_controller", 00:21:50.052 "params": { 00:21:50.052 "name": "TLSTEST", 00:21:50.052 "trtype": "TCP", 00:21:50.052 "adrfam": "IPv4", 00:21:50.052 "traddr": "10.0.0.2", 00:21:50.052 "trsvcid": "4420", 00:21:50.052 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:50.052 "prchk_reftag": false, 00:21:50.052 "prchk_guard": false, 00:21:50.052 "ctrlr_loss_timeout_sec": 0, 00:21:50.052 "reconnect_delay_sec": 0, 00:21:50.052 "fast_io_fail_timeout_sec": 0, 00:21:50.052 "psk": "/tmp/tmp.hJNDrTDA8u", 00:21:50.052 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:50.052 "hdgst": false, 00:21:50.052 "ddgst": false 00:21:50.052 } 00:21:50.052 }, 00:21:50.052 { 00:21:50.052 "method": "bdev_nvme_set_hotplug", 00:21:50.052 "params": { 00:21:50.052 "period_us": 100000, 00:21:50.052 "enable": false 00:21:50.052 } 00:21:50.052 }, 00:21:50.052 { 00:21:50.052 "method": "bdev_wait_for_examine" 00:21:50.052 } 00:21:50.052 ] 00:21:50.052 }, 00:21:50.052 { 00:21:50.052 "subsystem": "nbd", 00:21:50.052 "config": [] 00:21:50.052 } 00:21:50.052 ] 00:21:50.052 }' 00:21:50.052 16:30:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 3140862 ']' 00:21:50.052 16:30:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:50.052 16:30:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:21:50.052 16:30:16 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:21:50.052 16:30:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:21:50.052 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:50.052 16:30:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:21:50.052 16:30:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:50.052 [2024-06-07 16:30:16.698647] Starting SPDK v24.09-pre git sha1 5a57befde / DPDK 24.03.0 initialization... 00:21:50.052 [2024-06-07 16:30:16.698696] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3140862 ] 00:21:50.052 EAL: No free 2048 kB hugepages reported on node 1 00:21:50.052 [2024-06-07 16:30:16.748031] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:50.052 [2024-06-07 16:30:16.800557] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 2 00:21:50.314 [2024-06-07 16:30:16.925014] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:50.314 [2024-06-07 16:30:16.925080] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:21:50.883 16:30:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:21:50.883 16:30:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:21:50.883 16:30:17 nvmf_tcp.nvmf_tls -- target/tls.sh@211 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:21:50.883 Running I/O for 10 seconds... 
00:22:00.883 00:22:00.883 Latency(us) 00:22:00.883 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:00.883 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:00.883 Verification LBA range: start 0x0 length 0x2000 00:22:00.883 TLSTESTn1 : 10.05 4192.06 16.38 0.00 0.00 30450.62 4833.28 103109.97 00:22:00.883 =================================================================================================================== 00:22:00.883 Total : 4192.06 16.38 0.00 0.00 30450.62 4833.28 103109.97 00:22:00.883 0 00:22:00.883 16:30:27 nvmf_tcp.nvmf_tls -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:00.883 16:30:27 nvmf_tcp.nvmf_tls -- target/tls.sh@214 -- # killprocess 3140862 00:22:00.883 16:30:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 3140862 ']' 00:22:00.883 16:30:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 3140862 00:22:00.883 16:30:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:22:00.883 16:30:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:22:00.883 16:30:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3140862 00:22:00.883 16:30:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_2 00:22:00.883 16:30:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_2 = sudo ']' 00:22:00.883 16:30:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3140862' 00:22:00.883 killing process with pid 3140862 00:22:00.883 16:30:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 3140862 00:22:00.883 Received shutdown signal, test time was about 10.000000 seconds 00:22:00.883 00:22:00.883 Latency(us) 00:22:00.883 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:00.883 
=================================================================================================================== 00:22:00.883 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:00.883 [2024-06-07 16:30:27.705799] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:00.883 16:30:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 3140862 00:22:01.147 16:30:27 nvmf_tcp.nvmf_tls -- target/tls.sh@215 -- # killprocess 3140704 00:22:01.147 16:30:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 3140704 ']' 00:22:01.147 16:30:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 3140704 00:22:01.147 16:30:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:22:01.147 16:30:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:22:01.147 16:30:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3140704 00:22:01.147 16:30:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:22:01.147 16:30:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:22:01.147 16:30:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3140704' 00:22:01.147 killing process with pid 3140704 00:22:01.147 16:30:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 3140704 00:22:01.147 [2024-06-07 16:30:27.872039] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:22:01.147 16:30:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 3140704 00:22:01.147 16:30:27 nvmf_tcp.nvmf_tls -- target/tls.sh@218 -- # nvmfappstart 00:22:01.147 16:30:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:01.147 16:30:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@723 -- # 
xtrace_disable 00:22:01.147 16:30:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:01.147 16:30:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3143075 00:22:01.147 16:30:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3143075 00:22:01.148 16:30:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:22:01.148 16:30:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 3143075 ']' 00:22:01.148 16:30:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:01.148 16:30:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:22:01.148 16:30:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:01.148 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:01.148 16:30:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:22:01.148 16:30:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:01.409 [2024-06-07 16:30:28.048034] Starting SPDK v24.09-pre git sha1 5a57befde / DPDK 24.03.0 initialization... 00:22:01.409 [2024-06-07 16:30:28.048086] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:01.409 EAL: No free 2048 kB hugepages reported on node 1 00:22:01.409 [2024-06-07 16:30:28.111650] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:01.409 [2024-06-07 16:30:28.175152] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:22:01.409 [2024-06-07 16:30:28.175187] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:01.409 [2024-06-07 16:30:28.175194] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:01.409 [2024-06-07 16:30:28.175201] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:01.409 [2024-06-07 16:30:28.175206] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:01.409 [2024-06-07 16:30:28.175224] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:22:01.980 16:30:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:22:01.980 16:30:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:22:01.980 16:30:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:01.980 16:30:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@729 -- # xtrace_disable 00:22:01.980 16:30:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:02.241 16:30:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:02.241 16:30:28 nvmf_tcp.nvmf_tls -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.hJNDrTDA8u 00:22:02.241 16:30:28 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.hJNDrTDA8u 00:22:02.241 16:30:28 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:02.241 [2024-06-07 16:30:28.982314] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:02.241 16:30:28 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:02.502 16:30:29 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:02.502 [2024-06-07 16:30:29.291088] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:02.502 [2024-06-07 16:30:29.291295] tcp.c: 982:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:02.502 16:30:29 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:02.762 malloc0 00:22:02.762 16:30:29 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:02.762 16:30:29 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.hJNDrTDA8u 00:22:03.022 [2024-06-07 16:30:29.739056] tcp.c:3685:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:22:03.022 16:30:29 nvmf_tcp.nvmf_tls -- target/tls.sh@222 -- # bdevperf_pid=3143435 00:22:03.022 16:30:29 nvmf_tcp.nvmf_tls -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:03.022 16:30:29 nvmf_tcp.nvmf_tls -- target/tls.sh@220 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:22:03.022 16:30:29 nvmf_tcp.nvmf_tls -- target/tls.sh@225 -- # waitforlisten 3143435 /var/tmp/bdevperf.sock 00:22:03.022 16:30:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 3143435 ']' 00:22:03.022 16:30:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:03.022 16:30:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local 
max_retries=100 00:22:03.022 16:30:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:03.022 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:03.022 16:30:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:22:03.022 16:30:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:03.022 [2024-06-07 16:30:29.800399] Starting SPDK v24.09-pre git sha1 5a57befde / DPDK 24.03.0 initialization... 00:22:03.022 [2024-06-07 16:30:29.800452] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3143435 ] 00:22:03.022 EAL: No free 2048 kB hugepages reported on node 1 00:22:03.022 [2024-06-07 16:30:29.875336] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:03.282 [2024-06-07 16:30:29.929036] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:22:03.855 16:30:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:22:03.855 16:30:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:22:03.855 16:30:30 nvmf_tcp.nvmf_tls -- target/tls.sh@227 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.hJNDrTDA8u 00:22:03.855 16:30:30 nvmf_tcp.nvmf_tls -- target/tls.sh@228 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:22:04.116 [2024-06-07 16:30:30.826901] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:04.116 nvme0n1 00:22:04.116 
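[editor note] The target-side setup traced above boils down to six RPC calls: create the TCP transport, create the subsystem, add a TLS listener (`-k`), create a malloc bdev, attach it as a namespace, and authorize the host with a PSK. A minimal consolidated sketch, with `$rpc` as an `echo` dry-run stand-in for the `scripts/rpc.py` invocations shown in the log:

```shell
rpc="echo rpc.py"            # dry-run stand-in; a live setup would call spdk/scripts/rpc.py
psk=/tmp/tmp.hJNDrTDA8u      # PSK file path used by this test run
nqn=nqn.2016-06.io.spdk:cnode1

$rpc nvmf_create_transport -t tcp -o
$rpc nvmf_create_subsystem "$nqn" -s SPDK00000000000001 -m 10
$rpc nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420 -k   # -k: the (experimental) TLS listener
$rpc bdev_malloc_create 32 4096 -b malloc0
$rpc nvmf_subsystem_add_ns "$nqn" malloc0 -n 1
$rpc nvmf_subsystem_add_host "$nqn" nqn.2016-06.io.spdk:host1 --psk "$psk"   # PSK-path form is deprecated per the warning above
```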
16:30:30 nvmf_tcp.nvmf_tls -- target/tls.sh@232 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:22:04.376 Running I/O for 1 seconds...
00:22:05.318
00:22:05.318 Latency(us)
00:22:05.318 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:05.318 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:22:05.318 Verification LBA range: start 0x0 length 0x2000
00:22:05.318 nvme0n1 : 1.06 2276.44 8.89 0.00 0.00 54775.72 6034.77 91313.49
00:22:05.318 ===================================================================================================================
00:22:05.318 Total : 2276.44 8.89 0.00 0.00 54775.72 6034.77 91313.49
00:22:05.318 0
00:22:05.318 16:30:32 nvmf_tcp.nvmf_tls -- target/tls.sh@234 -- # killprocess 3143435
00:22:05.318 16:30:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 3143435 ']'
00:22:05.318 16:30:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 3143435
00:22:05.318 16:30:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname
00:22:05.318 16:30:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']'
00:22:05.318 16:30:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3143435
00:22:05.318 16:30:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_1
00:22:05.318 16:30:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']'
00:22:05.318 16:30:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3143435'
00:22:05.318 killing process with pid 3143435
00:22:05.318 16:30:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 3143435
00:22:05.318 Received shutdown signal, test time was about 1.000000 seconds
00:22:05.318
00:22:05.318 Latency(us)
00:22:05.318 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s
Average min max 00:22:05.318 =================================================================================================================== 00:22:05.318 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:05.318 16:30:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 3143435 00:22:05.579 16:30:32 nvmf_tcp.nvmf_tls -- target/tls.sh@235 -- # killprocess 3143075 00:22:05.579 16:30:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 3143075 ']' 00:22:05.579 16:30:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 3143075 00:22:05.579 16:30:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:22:05.579 16:30:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:22:05.579 16:30:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3143075 00:22:05.579 16:30:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:22:05.579 16:30:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:22:05.579 16:30:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3143075' 00:22:05.579 killing process with pid 3143075 00:22:05.579 16:30:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 3143075 00:22:05.579 [2024-06-07 16:30:32.313045] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:22:05.579 16:30:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 3143075 00:22:05.840 16:30:32 nvmf_tcp.nvmf_tls -- target/tls.sh@238 -- # nvmfappstart 00:22:05.840 16:30:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:05.840 16:30:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@723 -- # xtrace_disable 00:22:05.840 16:30:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:05.840 16:30:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 
-- # nvmfpid=3144002 00:22:05.840 16:30:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3144002 00:22:05.840 16:30:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:22:05.840 16:30:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 3144002 ']' 00:22:05.840 16:30:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:05.840 16:30:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:22:05.840 16:30:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:05.840 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:05.840 16:30:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:22:05.840 16:30:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:05.840 [2024-06-07 16:30:32.516197] Starting SPDK v24.09-pre git sha1 5a57befde / DPDK 24.03.0 initialization... 00:22:05.840 [2024-06-07 16:30:32.516293] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:05.840 EAL: No free 2048 kB hugepages reported on node 1 00:22:05.840 [2024-06-07 16:30:32.585211] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:05.840 [2024-06-07 16:30:32.650592] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:05.840 [2024-06-07 16:30:32.650628] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
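[editor note] The app_setup_trace notices above give two ways to look at the trace: a live snapshot with `spdk_trace -s nvmf -i 0`, or copying `/dev/shm/nvmf_trace.0` for offline analysis. A dry-run sketch of the offline path (the destination name is hypothetical, and a stand-in file is fabricated when the real trace file is absent):

```shell
src=/dev/shm/nvmf_trace.0        # trace shared-memory file named in the notice
dst="$(mktemp -u)"               # hypothetical destination for the offline copy
# Dry-run stand-in: fabricate a trace file if no SPDK app is running here.
[ -e "$src" ] || { src="$(mktemp)"; echo fake-trace > "$src"; }
cp "$src" "$dst"                 # offline copy, to be inspected with spdk_trace later
```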
00:22:05.840 [2024-06-07 16:30:32.650636] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:05.840 [2024-06-07 16:30:32.650642] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:05.840 [2024-06-07 16:30:32.650647] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:05.840 [2024-06-07 16:30:32.650670] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:22:06.782 16:30:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:22:06.782 16:30:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:22:06.782 16:30:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:06.782 16:30:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@729 -- # xtrace_disable 00:22:06.782 16:30:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:06.782 16:30:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:06.783 16:30:33 nvmf_tcp.nvmf_tls -- target/tls.sh@239 -- # rpc_cmd 00:22:06.783 16:30:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:06.783 16:30:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:06.783 [2024-06-07 16:30:33.317364] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:06.783 malloc0 00:22:06.783 [2024-06-07 16:30:33.344087] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:06.783 [2024-06-07 16:30:33.344289] tcp.c: 982:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:06.783 16:30:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:06.783 16:30:33 nvmf_tcp.nvmf_tls -- target/tls.sh@252 -- # bdevperf_pid=3144139 00:22:06.783 16:30:33 nvmf_tcp.nvmf_tls -- target/tls.sh@254 -- # waitforlisten 
3144139 /var/tmp/bdevperf.sock 00:22:06.783 16:30:33 nvmf_tcp.nvmf_tls -- target/tls.sh@250 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:22:06.783 16:30:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 3144139 ']' 00:22:06.783 16:30:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:06.783 16:30:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:22:06.783 16:30:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:06.783 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:06.783 16:30:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:22:06.783 16:30:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:06.783 [2024-06-07 16:30:33.420458] Starting SPDK v24.09-pre git sha1 5a57befde / DPDK 24.03.0 initialization... 
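[editor note] `waitforlisten` above polls until the new process's RPC socket (`/var/tmp/bdevperf.sock`) is usable, giving up after `max_retries`. A terminating sketch of that loop (a plain file from `mktemp` stands in for the UNIX-domain socket; the real helper in autotest_common.sh also probes the socket over RPC):

```shell
sock="$(mktemp)"    # stand-in for /var/tmp/bdevperf.sock; mktemp creates it up front
max_retries=100
i=0
# Poll until the path appears or retries run out (-e instead of -S so this
# dry run terminates without a real listening socket).
while [ "$i" -lt "$max_retries" ] && [ ! -e "$sock" ]; do
    sleep 0.1
    i=$((i + 1))
done
[ -e "$sock" ] && echo "listening on $sock"
```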
00:22:06.783 [2024-06-07 16:30:33.420510] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3144139 ] 00:22:06.783 EAL: No free 2048 kB hugepages reported on node 1 00:22:06.783 [2024-06-07 16:30:33.494343] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:06.783 [2024-06-07 16:30:33.548200] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:22:07.404 16:30:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:22:07.404 16:30:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:22:07.404 16:30:34 nvmf_tcp.nvmf_tls -- target/tls.sh@255 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.hJNDrTDA8u 00:22:07.665 16:30:34 nvmf_tcp.nvmf_tls -- target/tls.sh@256 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:22:07.665 [2024-06-07 16:30:34.462100] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:07.927 nvme0n1 00:22:07.927 16:30:34 nvmf_tcp.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:07.927 Running I/O for 1 seconds... 
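[editor note] The initiator side traced above is two RPCs against the bdevperf socket: register the PSK file in the keyring, then attach the controller with `--psk key0`; `bdevperf.py ... perform_tests` then drives the verify workload through the resulting `nvme0n1`. A dry-run sketch (`$rpc` echoes instead of calling the real `scripts/rpc.py`):

```shell
rpc="echo rpc.py"                # dry-run stand-in for spdk/scripts/rpc.py
sock=/var/tmp/bdevperf.sock
psk=/tmp/tmp.hJNDrTDA8u

$rpc -s "$sock" keyring_file_add_key key0 "$psk"
$rpc -s "$sock" bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1
```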
00:22:08.869 00:22:08.869 Latency(us) 00:22:08.869 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:08.869 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:22:08.869 Verification LBA range: start 0x0 length 0x2000 00:22:08.869 nvme0n1 : 1.06 2487.45 9.72 0.00 0.00 50137.57 6089.39 70778.88 00:22:08.869 =================================================================================================================== 00:22:08.869 Total : 2487.45 9.72 0.00 0.00 50137.57 6089.39 70778.88 00:22:08.869 0 00:22:09.131 16:30:35 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # rpc_cmd save_config 00:22:09.131 16:30:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:09.131 16:30:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:09.131 16:30:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:09.131 16:30:35 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # tgtcfg='{ 00:22:09.131 "subsystems": [ 00:22:09.131 { 00:22:09.131 "subsystem": "keyring", 00:22:09.131 "config": [ 00:22:09.131 { 00:22:09.131 "method": "keyring_file_add_key", 00:22:09.131 "params": { 00:22:09.131 "name": "key0", 00:22:09.131 "path": "/tmp/tmp.hJNDrTDA8u" 00:22:09.131 } 00:22:09.131 } 00:22:09.131 ] 00:22:09.131 }, 00:22:09.131 { 00:22:09.131 "subsystem": "iobuf", 00:22:09.131 "config": [ 00:22:09.131 { 00:22:09.131 "method": "iobuf_set_options", 00:22:09.131 "params": { 00:22:09.131 "small_pool_count": 8192, 00:22:09.131 "large_pool_count": 1024, 00:22:09.131 "small_bufsize": 8192, 00:22:09.131 "large_bufsize": 135168 00:22:09.131 } 00:22:09.131 } 00:22:09.131 ] 00:22:09.131 }, 00:22:09.131 { 00:22:09.131 "subsystem": "sock", 00:22:09.131 "config": [ 00:22:09.131 { 00:22:09.131 "method": "sock_set_default_impl", 00:22:09.131 "params": { 00:22:09.131 "impl_name": "posix" 00:22:09.131 } 00:22:09.131 }, 00:22:09.131 { 00:22:09.131 "method": "sock_impl_set_options", 00:22:09.131 
"params": { 00:22:09.131 "impl_name": "ssl", 00:22:09.131 "recv_buf_size": 4096, 00:22:09.131 "send_buf_size": 4096, 00:22:09.131 "enable_recv_pipe": true, 00:22:09.131 "enable_quickack": false, 00:22:09.131 "enable_placement_id": 0, 00:22:09.131 "enable_zerocopy_send_server": true, 00:22:09.131 "enable_zerocopy_send_client": false, 00:22:09.131 "zerocopy_threshold": 0, 00:22:09.131 "tls_version": 0, 00:22:09.131 "enable_ktls": false, 00:22:09.131 "enable_new_session_tickets": true 00:22:09.131 } 00:22:09.131 }, 00:22:09.131 { 00:22:09.131 "method": "sock_impl_set_options", 00:22:09.131 "params": { 00:22:09.131 "impl_name": "posix", 00:22:09.131 "recv_buf_size": 2097152, 00:22:09.131 "send_buf_size": 2097152, 00:22:09.131 "enable_recv_pipe": true, 00:22:09.131 "enable_quickack": false, 00:22:09.131 "enable_placement_id": 0, 00:22:09.131 "enable_zerocopy_send_server": true, 00:22:09.131 "enable_zerocopy_send_client": false, 00:22:09.131 "zerocopy_threshold": 0, 00:22:09.131 "tls_version": 0, 00:22:09.131 "enable_ktls": false, 00:22:09.131 "enable_new_session_tickets": false 00:22:09.131 } 00:22:09.131 } 00:22:09.131 ] 00:22:09.131 }, 00:22:09.131 { 00:22:09.131 "subsystem": "vmd", 00:22:09.131 "config": [] 00:22:09.131 }, 00:22:09.131 { 00:22:09.131 "subsystem": "accel", 00:22:09.131 "config": [ 00:22:09.131 { 00:22:09.131 "method": "accel_set_options", 00:22:09.131 "params": { 00:22:09.131 "small_cache_size": 128, 00:22:09.131 "large_cache_size": 16, 00:22:09.131 "task_count": 2048, 00:22:09.131 "sequence_count": 2048, 00:22:09.131 "buf_count": 2048 00:22:09.131 } 00:22:09.131 } 00:22:09.131 ] 00:22:09.131 }, 00:22:09.131 { 00:22:09.131 "subsystem": "bdev", 00:22:09.131 "config": [ 00:22:09.131 { 00:22:09.131 "method": "bdev_set_options", 00:22:09.131 "params": { 00:22:09.131 "bdev_io_pool_size": 65535, 00:22:09.131 "bdev_io_cache_size": 256, 00:22:09.131 "bdev_auto_examine": true, 00:22:09.131 "iobuf_small_cache_size": 128, 00:22:09.131 "iobuf_large_cache_size": 
16 00:22:09.131 } 00:22:09.131 }, 00:22:09.131 { 00:22:09.131 "method": "bdev_raid_set_options", 00:22:09.131 "params": { 00:22:09.131 "process_window_size_kb": 1024 00:22:09.131 } 00:22:09.131 }, 00:22:09.131 { 00:22:09.131 "method": "bdev_iscsi_set_options", 00:22:09.131 "params": { 00:22:09.131 "timeout_sec": 30 00:22:09.131 } 00:22:09.131 }, 00:22:09.131 { 00:22:09.131 "method": "bdev_nvme_set_options", 00:22:09.131 "params": { 00:22:09.131 "action_on_timeout": "none", 00:22:09.131 "timeout_us": 0, 00:22:09.131 "timeout_admin_us": 0, 00:22:09.131 "keep_alive_timeout_ms": 10000, 00:22:09.131 "arbitration_burst": 0, 00:22:09.131 "low_priority_weight": 0, 00:22:09.131 "medium_priority_weight": 0, 00:22:09.131 "high_priority_weight": 0, 00:22:09.131 "nvme_adminq_poll_period_us": 10000, 00:22:09.131 "nvme_ioq_poll_period_us": 0, 00:22:09.131 "io_queue_requests": 0, 00:22:09.131 "delay_cmd_submit": true, 00:22:09.131 "transport_retry_count": 4, 00:22:09.131 "bdev_retry_count": 3, 00:22:09.131 "transport_ack_timeout": 0, 00:22:09.131 "ctrlr_loss_timeout_sec": 0, 00:22:09.131 "reconnect_delay_sec": 0, 00:22:09.131 "fast_io_fail_timeout_sec": 0, 00:22:09.131 "disable_auto_failback": false, 00:22:09.131 "generate_uuids": false, 00:22:09.131 "transport_tos": 0, 00:22:09.131 "nvme_error_stat": false, 00:22:09.131 "rdma_srq_size": 0, 00:22:09.131 "io_path_stat": false, 00:22:09.131 "allow_accel_sequence": false, 00:22:09.131 "rdma_max_cq_size": 0, 00:22:09.131 "rdma_cm_event_timeout_ms": 0, 00:22:09.131 "dhchap_digests": [ 00:22:09.131 "sha256", 00:22:09.131 "sha384", 00:22:09.131 "sha512" 00:22:09.131 ], 00:22:09.131 "dhchap_dhgroups": [ 00:22:09.131 "null", 00:22:09.131 "ffdhe2048", 00:22:09.131 "ffdhe3072", 00:22:09.131 "ffdhe4096", 00:22:09.131 "ffdhe6144", 00:22:09.131 "ffdhe8192" 00:22:09.131 ] 00:22:09.131 } 00:22:09.131 }, 00:22:09.131 { 00:22:09.131 "method": "bdev_nvme_set_hotplug", 00:22:09.131 "params": { 00:22:09.131 "period_us": 100000, 00:22:09.131 "enable": 
false 00:22:09.131 } 00:22:09.131 }, 00:22:09.131 { 00:22:09.131 "method": "bdev_malloc_create", 00:22:09.131 "params": { 00:22:09.131 "name": "malloc0", 00:22:09.131 "num_blocks": 8192, 00:22:09.131 "block_size": 4096, 00:22:09.131 "physical_block_size": 4096, 00:22:09.131 "uuid": "5f5b9454-98c9-4740-be57-6ac67185f100", 00:22:09.131 "optimal_io_boundary": 0 00:22:09.131 } 00:22:09.131 }, 00:22:09.131 { 00:22:09.131 "method": "bdev_wait_for_examine" 00:22:09.131 } 00:22:09.131 ] 00:22:09.131 }, 00:22:09.131 { 00:22:09.131 "subsystem": "nbd", 00:22:09.131 "config": [] 00:22:09.131 }, 00:22:09.131 { 00:22:09.131 "subsystem": "scheduler", 00:22:09.131 "config": [ 00:22:09.131 { 00:22:09.131 "method": "framework_set_scheduler", 00:22:09.131 "params": { 00:22:09.131 "name": "static" 00:22:09.131 } 00:22:09.131 } 00:22:09.131 ] 00:22:09.131 }, 00:22:09.131 { 00:22:09.131 "subsystem": "nvmf", 00:22:09.131 "config": [ 00:22:09.131 { 00:22:09.131 "method": "nvmf_set_config", 00:22:09.131 "params": { 00:22:09.131 "discovery_filter": "match_any", 00:22:09.131 "admin_cmd_passthru": { 00:22:09.131 "identify_ctrlr": false 00:22:09.131 } 00:22:09.131 } 00:22:09.131 }, 00:22:09.131 { 00:22:09.131 "method": "nvmf_set_max_subsystems", 00:22:09.131 "params": { 00:22:09.131 "max_subsystems": 1024 00:22:09.131 } 00:22:09.131 }, 00:22:09.131 { 00:22:09.131 "method": "nvmf_set_crdt", 00:22:09.131 "params": { 00:22:09.131 "crdt1": 0, 00:22:09.131 "crdt2": 0, 00:22:09.131 "crdt3": 0 00:22:09.131 } 00:22:09.131 }, 00:22:09.131 { 00:22:09.131 "method": "nvmf_create_transport", 00:22:09.131 "params": { 00:22:09.131 "trtype": "TCP", 00:22:09.131 "max_queue_depth": 128, 00:22:09.131 "max_io_qpairs_per_ctrlr": 127, 00:22:09.131 "in_capsule_data_size": 4096, 00:22:09.131 "max_io_size": 131072, 00:22:09.131 "io_unit_size": 131072, 00:22:09.131 "max_aq_depth": 128, 00:22:09.131 "num_shared_buffers": 511, 00:22:09.131 "buf_cache_size": 4294967295, 00:22:09.131 "dif_insert_or_strip": false, 
00:22:09.131 "zcopy": false, 00:22:09.131 "c2h_success": false, 00:22:09.131 "sock_priority": 0, 00:22:09.131 "abort_timeout_sec": 1, 00:22:09.132 "ack_timeout": 0, 00:22:09.132 "data_wr_pool_size": 0 00:22:09.132 } 00:22:09.132 }, 00:22:09.132 { 00:22:09.132 "method": "nvmf_create_subsystem", 00:22:09.132 "params": { 00:22:09.132 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:09.132 "allow_any_host": false, 00:22:09.132 "serial_number": "00000000000000000000", 00:22:09.132 "model_number": "SPDK bdev Controller", 00:22:09.132 "max_namespaces": 32, 00:22:09.132 "min_cntlid": 1, 00:22:09.132 "max_cntlid": 65519, 00:22:09.132 "ana_reporting": false 00:22:09.132 } 00:22:09.132 }, 00:22:09.132 { 00:22:09.132 "method": "nvmf_subsystem_add_host", 00:22:09.132 "params": { 00:22:09.132 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:09.132 "host": "nqn.2016-06.io.spdk:host1", 00:22:09.132 "psk": "key0" 00:22:09.132 } 00:22:09.132 }, 00:22:09.132 { 00:22:09.132 "method": "nvmf_subsystem_add_ns", 00:22:09.132 "params": { 00:22:09.132 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:09.132 "namespace": { 00:22:09.132 "nsid": 1, 00:22:09.132 "bdev_name": "malloc0", 00:22:09.132 "nguid": "5F5B945498C94740BE576AC67185F100", 00:22:09.132 "uuid": "5f5b9454-98c9-4740-be57-6ac67185f100", 00:22:09.132 "no_auto_visible": false 00:22:09.132 } 00:22:09.132 } 00:22:09.132 }, 00:22:09.132 { 00:22:09.132 "method": "nvmf_subsystem_add_listener", 00:22:09.132 "params": { 00:22:09.132 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:09.132 "listen_address": { 00:22:09.132 "trtype": "TCP", 00:22:09.132 "adrfam": "IPv4", 00:22:09.132 "traddr": "10.0.0.2", 00:22:09.132 "trsvcid": "4420" 00:22:09.132 }, 00:22:09.132 "secure_channel": true 00:22:09.132 } 00:22:09.132 } 00:22:09.132 ] 00:22:09.132 } 00:22:09.132 ] 00:22:09.132 }' 00:22:09.132 16:30:35 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:22:09.393 16:30:36 
nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # bperfcfg='{ 00:22:09.393 "subsystems": [ 00:22:09.393 { 00:22:09.393 "subsystem": "keyring", 00:22:09.393 "config": [ 00:22:09.393 { 00:22:09.393 "method": "keyring_file_add_key", 00:22:09.393 "params": { 00:22:09.393 "name": "key0", 00:22:09.393 "path": "/tmp/tmp.hJNDrTDA8u" 00:22:09.393 } 00:22:09.393 } 00:22:09.393 ] 00:22:09.393 }, 00:22:09.393 { 00:22:09.393 "subsystem": "iobuf", 00:22:09.393 "config": [ 00:22:09.393 { 00:22:09.393 "method": "iobuf_set_options", 00:22:09.393 "params": { 00:22:09.393 "small_pool_count": 8192, 00:22:09.393 "large_pool_count": 1024, 00:22:09.393 "small_bufsize": 8192, 00:22:09.393 "large_bufsize": 135168 00:22:09.393 } 00:22:09.393 } 00:22:09.393 ] 00:22:09.393 }, 00:22:09.393 { 00:22:09.393 "subsystem": "sock", 00:22:09.393 "config": [ 00:22:09.393 { 00:22:09.393 "method": "sock_set_default_impl", 00:22:09.393 "params": { 00:22:09.393 "impl_name": "posix" 00:22:09.393 } 00:22:09.393 }, 00:22:09.393 { 00:22:09.393 "method": "sock_impl_set_options", 00:22:09.393 "params": { 00:22:09.393 "impl_name": "ssl", 00:22:09.393 "recv_buf_size": 4096, 00:22:09.393 "send_buf_size": 4096, 00:22:09.393 "enable_recv_pipe": true, 00:22:09.393 "enable_quickack": false, 00:22:09.393 "enable_placement_id": 0, 00:22:09.393 "enable_zerocopy_send_server": true, 00:22:09.393 "enable_zerocopy_send_client": false, 00:22:09.393 "zerocopy_threshold": 0, 00:22:09.393 "tls_version": 0, 00:22:09.393 "enable_ktls": false, 00:22:09.393 "enable_new_session_tickets": true 00:22:09.393 } 00:22:09.393 }, 00:22:09.393 { 00:22:09.393 "method": "sock_impl_set_options", 00:22:09.393 "params": { 00:22:09.393 "impl_name": "posix", 00:22:09.393 "recv_buf_size": 2097152, 00:22:09.393 "send_buf_size": 2097152, 00:22:09.394 "enable_recv_pipe": true, 00:22:09.394 "enable_quickack": false, 00:22:09.394 "enable_placement_id": 0, 00:22:09.394 "enable_zerocopy_send_server": true, 00:22:09.394 "enable_zerocopy_send_client": false, 
00:22:09.394 "zerocopy_threshold": 0, 00:22:09.394 "tls_version": 0, 00:22:09.394 "enable_ktls": false, 00:22:09.394 "enable_new_session_tickets": false 00:22:09.394 } 00:22:09.394 } 00:22:09.394 ] 00:22:09.394 }, 00:22:09.394 { 00:22:09.394 "subsystem": "vmd", 00:22:09.394 "config": [] 00:22:09.394 }, 00:22:09.394 { 00:22:09.394 "subsystem": "accel", 00:22:09.394 "config": [ 00:22:09.394 { 00:22:09.394 "method": "accel_set_options", 00:22:09.394 "params": { 00:22:09.394 "small_cache_size": 128, 00:22:09.394 "large_cache_size": 16, 00:22:09.394 "task_count": 2048, 00:22:09.394 "sequence_count": 2048, 00:22:09.394 "buf_count": 2048 00:22:09.394 } 00:22:09.394 } 00:22:09.394 ] 00:22:09.394 }, 00:22:09.394 { 00:22:09.394 "subsystem": "bdev", 00:22:09.394 "config": [ 00:22:09.394 { 00:22:09.394 "method": "bdev_set_options", 00:22:09.394 "params": { 00:22:09.394 "bdev_io_pool_size": 65535, 00:22:09.394 "bdev_io_cache_size": 256, 00:22:09.394 "bdev_auto_examine": true, 00:22:09.394 "iobuf_small_cache_size": 128, 00:22:09.394 "iobuf_large_cache_size": 16 00:22:09.394 } 00:22:09.394 }, 00:22:09.394 { 00:22:09.394 "method": "bdev_raid_set_options", 00:22:09.394 "params": { 00:22:09.394 "process_window_size_kb": 1024 00:22:09.394 } 00:22:09.394 }, 00:22:09.394 { 00:22:09.394 "method": "bdev_iscsi_set_options", 00:22:09.394 "params": { 00:22:09.394 "timeout_sec": 30 00:22:09.394 } 00:22:09.394 }, 00:22:09.394 { 00:22:09.394 "method": "bdev_nvme_set_options", 00:22:09.394 "params": { 00:22:09.394 "action_on_timeout": "none", 00:22:09.394 "timeout_us": 0, 00:22:09.394 "timeout_admin_us": 0, 00:22:09.394 "keep_alive_timeout_ms": 10000, 00:22:09.394 "arbitration_burst": 0, 00:22:09.394 "low_priority_weight": 0, 00:22:09.394 "medium_priority_weight": 0, 00:22:09.394 "high_priority_weight": 0, 00:22:09.394 "nvme_adminq_poll_period_us": 10000, 00:22:09.394 "nvme_ioq_poll_period_us": 0, 00:22:09.394 "io_queue_requests": 512, 00:22:09.394 "delay_cmd_submit": true, 00:22:09.394 
"transport_retry_count": 4, 00:22:09.394 "bdev_retry_count": 3, 00:22:09.394 "transport_ack_timeout": 0, 00:22:09.394 "ctrlr_loss_timeout_sec": 0, 00:22:09.394 "reconnect_delay_sec": 0, 00:22:09.394 "fast_io_fail_timeout_sec": 0, 00:22:09.394 "disable_auto_failback": false, 00:22:09.394 "generate_uuids": false, 00:22:09.394 "transport_tos": 0, 00:22:09.394 "nvme_error_stat": false, 00:22:09.394 "rdma_srq_size": 0, 00:22:09.394 "io_path_stat": false, 00:22:09.394 "allow_accel_sequence": false, 00:22:09.394 "rdma_max_cq_size": 0, 00:22:09.394 "rdma_cm_event_timeout_ms": 0, 00:22:09.394 "dhchap_digests": [ 00:22:09.394 "sha256", 00:22:09.394 "sha384", 00:22:09.394 "sha512" 00:22:09.394 ], 00:22:09.394 "dhchap_dhgroups": [ 00:22:09.394 "null", 00:22:09.394 "ffdhe2048", 00:22:09.394 "ffdhe3072", 00:22:09.394 "ffdhe4096", 00:22:09.394 "ffdhe6144", 00:22:09.394 "ffdhe8192" 00:22:09.394 ] 00:22:09.394 } 00:22:09.394 }, 00:22:09.394 { 00:22:09.394 "method": "bdev_nvme_attach_controller", 00:22:09.394 "params": { 00:22:09.394 "name": "nvme0", 00:22:09.394 "trtype": "TCP", 00:22:09.394 "adrfam": "IPv4", 00:22:09.394 "traddr": "10.0.0.2", 00:22:09.394 "trsvcid": "4420", 00:22:09.394 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:09.394 "prchk_reftag": false, 00:22:09.394 "prchk_guard": false, 00:22:09.394 "ctrlr_loss_timeout_sec": 0, 00:22:09.394 "reconnect_delay_sec": 0, 00:22:09.394 "fast_io_fail_timeout_sec": 0, 00:22:09.394 "psk": "key0", 00:22:09.394 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:09.394 "hdgst": false, 00:22:09.394 "ddgst": false 00:22:09.394 } 00:22:09.394 }, 00:22:09.394 { 00:22:09.394 "method": "bdev_nvme_set_hotplug", 00:22:09.394 "params": { 00:22:09.394 "period_us": 100000, 00:22:09.394 "enable": false 00:22:09.394 } 00:22:09.394 }, 00:22:09.394 { 00:22:09.394 "method": "bdev_enable_histogram", 00:22:09.394 "params": { 00:22:09.394 "name": "nvme0n1", 00:22:09.394 "enable": true 00:22:09.394 } 00:22:09.394 }, 00:22:09.394 { 00:22:09.394 "method": 
"bdev_wait_for_examine" 00:22:09.394 } 00:22:09.394 ] 00:22:09.394 }, 00:22:09.394 { 00:22:09.394 "subsystem": "nbd", 00:22:09.394 "config": [] 00:22:09.394 } 00:22:09.394 ] 00:22:09.394 }' 00:22:09.394 16:30:36 nvmf_tcp.nvmf_tls -- target/tls.sh@266 -- # killprocess 3144139 00:22:09.394 16:30:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 3144139 ']' 00:22:09.394 16:30:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 3144139 00:22:09.394 16:30:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:22:09.394 16:30:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:22:09.394 16:30:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3144139 00:22:09.394 16:30:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:22:09.394 16:30:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:22:09.394 16:30:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3144139' 00:22:09.394 killing process with pid 3144139 00:22:09.394 16:30:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 3144139 00:22:09.394 Received shutdown signal, test time was about 1.000000 seconds 00:22:09.394 00:22:09.394 Latency(us) 00:22:09.394 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:09.394 =================================================================================================================== 00:22:09.394 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:09.394 16:30:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 3144139 00:22:09.394 16:30:36 nvmf_tcp.nvmf_tls -- target/tls.sh@267 -- # killprocess 3144002 00:22:09.394 16:30:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 3144002 ']' 00:22:09.394 16:30:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 3144002 00:22:09.394 16:30:36 
nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:22:09.656 16:30:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:22:09.656 16:30:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3144002 00:22:09.656 16:30:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:22:09.656 16:30:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:22:09.656 16:30:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3144002' 00:22:09.656 killing process with pid 3144002 00:22:09.656 16:30:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 3144002 00:22:09.656 16:30:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 3144002 00:22:09.656 16:30:36 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # nvmfappstart -c /dev/fd/62 00:22:09.656 16:30:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:09.656 16:30:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@723 -- # xtrace_disable 00:22:09.656 16:30:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:09.656 16:30:36 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # echo '{ 00:22:09.656 "subsystems": [ 00:22:09.656 { 00:22:09.656 "subsystem": "keyring", 00:22:09.656 "config": [ 00:22:09.656 { 00:22:09.656 "method": "keyring_file_add_key", 00:22:09.656 "params": { 00:22:09.656 "name": "key0", 00:22:09.656 "path": "/tmp/tmp.hJNDrTDA8u" 00:22:09.656 } 00:22:09.656 } 00:22:09.656 ] 00:22:09.656 }, 00:22:09.656 { 00:22:09.656 "subsystem": "iobuf", 00:22:09.656 "config": [ 00:22:09.656 { 00:22:09.656 "method": "iobuf_set_options", 00:22:09.656 "params": { 00:22:09.656 "small_pool_count": 8192, 00:22:09.656 "large_pool_count": 1024, 00:22:09.656 "small_bufsize": 8192, 00:22:09.656 "large_bufsize": 135168 00:22:09.656 } 00:22:09.656 } 00:22:09.656 ] 00:22:09.656 }, 00:22:09.656 { 00:22:09.656 "subsystem": 
"sock", 00:22:09.656 "config": [ 00:22:09.656 { 00:22:09.656 "method": "sock_set_default_impl", 00:22:09.656 "params": { 00:22:09.656 "impl_name": "posix" 00:22:09.656 } 00:22:09.656 }, 00:22:09.656 { 00:22:09.656 "method": "sock_impl_set_options", 00:22:09.656 "params": { 00:22:09.656 "impl_name": "ssl", 00:22:09.656 "recv_buf_size": 4096, 00:22:09.656 "send_buf_size": 4096, 00:22:09.656 "enable_recv_pipe": true, 00:22:09.656 "enable_quickack": false, 00:22:09.656 "enable_placement_id": 0, 00:22:09.656 "enable_zerocopy_send_server": true, 00:22:09.656 "enable_zerocopy_send_client": false, 00:22:09.656 "zerocopy_threshold": 0, 00:22:09.656 "tls_version": 0, 00:22:09.656 "enable_ktls": false, 00:22:09.656 "enable_new_session_tickets": true 00:22:09.656 } 00:22:09.656 }, 00:22:09.656 { 00:22:09.656 "method": "sock_impl_set_options", 00:22:09.656 "params": { 00:22:09.656 "impl_name": "posix", 00:22:09.656 "recv_buf_size": 2097152, 00:22:09.656 "send_buf_size": 2097152, 00:22:09.656 "enable_recv_pipe": true, 00:22:09.656 "enable_quickack": false, 00:22:09.656 "enable_placement_id": 0, 00:22:09.656 "enable_zerocopy_send_server": true, 00:22:09.656 "enable_zerocopy_send_client": false, 00:22:09.656 "zerocopy_threshold": 0, 00:22:09.656 "tls_version": 0, 00:22:09.656 "enable_ktls": false, 00:22:09.656 "enable_new_session_tickets": false 00:22:09.656 } 00:22:09.656 } 00:22:09.656 ] 00:22:09.656 }, 00:22:09.656 { 00:22:09.656 "subsystem": "vmd", 00:22:09.656 "config": [] 00:22:09.656 }, 00:22:09.656 { 00:22:09.656 "subsystem": "accel", 00:22:09.656 "config": [ 00:22:09.656 { 00:22:09.656 "method": "accel_set_options", 00:22:09.656 "params": { 00:22:09.656 "small_cache_size": 128, 00:22:09.656 "large_cache_size": 16, 00:22:09.656 "task_count": 2048, 00:22:09.656 "sequence_count": 2048, 00:22:09.656 "buf_count": 2048 00:22:09.656 } 00:22:09.656 } 00:22:09.656 ] 00:22:09.656 }, 00:22:09.656 { 00:22:09.656 "subsystem": "bdev", 00:22:09.656 "config": [ 00:22:09.656 { 
00:22:09.656 "method": "bdev_set_options", 00:22:09.656 "params": { 00:22:09.656 "bdev_io_pool_size": 65535, 00:22:09.656 "bdev_io_cache_size": 256, 00:22:09.656 "bdev_auto_examine": true, 00:22:09.656 "iobuf_small_cache_size": 128, 00:22:09.656 "iobuf_large_cache_size": 16 00:22:09.656 } 00:22:09.656 }, 00:22:09.656 { 00:22:09.656 "method": "bdev_raid_set_options", 00:22:09.656 "params": { 00:22:09.656 "process_window_size_kb": 1024 00:22:09.656 } 00:22:09.656 }, 00:22:09.656 { 00:22:09.656 "method": "bdev_iscsi_set_options", 00:22:09.656 "params": { 00:22:09.656 "timeout_sec": 30 00:22:09.656 } 00:22:09.656 }, 00:22:09.656 { 00:22:09.656 "method": "bdev_nvme_set_options", 00:22:09.656 "params": { 00:22:09.656 "action_on_timeout": "none", 00:22:09.656 "timeout_us": 0, 00:22:09.656 "timeout_admin_us": 0, 00:22:09.656 "keep_alive_timeout_ms": 10000, 00:22:09.656 "arbitration_burst": 0, 00:22:09.656 "low_priority_weight": 0, 00:22:09.656 "medium_priority_weight": 0, 00:22:09.656 "high_priority_weight": 0, 00:22:09.656 "nvme_adminq_poll_period_us": 10000, 00:22:09.656 "nvme_ioq_poll_period_us": 0, 00:22:09.656 "io_queue_requests": 0, 00:22:09.656 "delay_cmd_submit": true, 00:22:09.656 "transport_retry_count": 4, 00:22:09.656 "bdev_retry_count": 3, 00:22:09.656 "transport_ack_timeout": 0, 00:22:09.656 "ctrlr_loss_timeout_sec": 0, 00:22:09.656 "reconnect_delay_sec": 0, 00:22:09.656 "fast_io_fail_timeout_sec": 0, 00:22:09.656 "disable_auto_failback": false, 00:22:09.656 "generate_uuids": false, 00:22:09.656 "transport_tos": 0, 00:22:09.656 "nvme_error_stat": false, 00:22:09.656 "rdma_srq_size": 0, 00:22:09.656 "io_path_stat": false, 00:22:09.656 "allow_accel_sequence": false, 00:22:09.656 "rdma_max_cq_size": 0, 00:22:09.656 "rdma_cm_event_timeout_ms": 0, 00:22:09.656 "dhchap_digests": [ 00:22:09.656 "sha256", 00:22:09.656 "sha384", 00:22:09.656 "sha512" 00:22:09.656 ], 00:22:09.656 "dhchap_dhgroups": [ 00:22:09.656 "null", 00:22:09.656 "ffdhe2048", 00:22:09.656 
"ffdhe3072", 00:22:09.656 "ffdhe4096", 00:22:09.656 "ffdhe6144", 00:22:09.656 "ffdhe8192" 00:22:09.656 ] 00:22:09.656 } 00:22:09.656 }, 00:22:09.656 { 00:22:09.656 "method": "bdev_nvme_set_hotplug", 00:22:09.656 "params": { 00:22:09.656 "period_us": 100000, 00:22:09.656 "enable": false 00:22:09.656 } 00:22:09.656 }, 00:22:09.656 { 00:22:09.656 "method": "bdev_malloc_create", 00:22:09.656 "params": { 00:22:09.656 "name": "malloc0", 00:22:09.656 "num_blocks": 8192, 00:22:09.656 "block_size": 4096, 00:22:09.656 "physical_block_size": 4096, 00:22:09.656 "uuid": "5f5b9454-98c9-4740-be57-6ac67185f100", 00:22:09.656 "optimal_io_boundary": 0 00:22:09.656 } 00:22:09.656 }, 00:22:09.656 { 00:22:09.656 "method": "bdev_wait_for_examine" 00:22:09.656 } 00:22:09.656 ] 00:22:09.656 }, 00:22:09.656 { 00:22:09.656 "subsystem": "nbd", 00:22:09.656 "config": [] 00:22:09.656 }, 00:22:09.656 { 00:22:09.656 "subsystem": "scheduler", 00:22:09.656 "config": [ 00:22:09.656 { 00:22:09.656 "method": "framework_set_scheduler", 00:22:09.656 "params": { 00:22:09.656 "name": "static" 00:22:09.656 } 00:22:09.656 } 00:22:09.656 ] 00:22:09.656 }, 00:22:09.656 { 00:22:09.656 "subsystem": "nvmf", 00:22:09.656 "config": [ 00:22:09.656 { 00:22:09.656 "method": "nvmf_set_config", 00:22:09.656 "params": { 00:22:09.656 "discovery_filter": "match_any", 00:22:09.656 "admin_cmd_passthru": { 00:22:09.656 "identify_ctrlr": false 00:22:09.656 } 00:22:09.656 } 00:22:09.656 }, 00:22:09.656 { 00:22:09.656 "method": "nvmf_set_max_subsystems", 00:22:09.656 "params": { 00:22:09.656 "max_subsystems": 1024 00:22:09.656 } 00:22:09.656 }, 00:22:09.656 { 00:22:09.656 "method": "nvmf_set_crdt", 00:22:09.656 "params": { 00:22:09.656 "crdt1": 0, 00:22:09.656 "crdt2": 0, 00:22:09.656 "crdt3": 0 00:22:09.656 } 00:22:09.656 }, 00:22:09.656 { 00:22:09.656 "method": "nvmf_create_transport", 00:22:09.656 "params": { 00:22:09.656 "trtype": "TCP", 00:22:09.656 "max_queue_depth": 128, 00:22:09.656 "max_io_qpairs_per_ctrlr": 127, 
00:22:09.656 "in_capsule_data_size": 4096, 00:22:09.656 "max_io_size": 131072, 00:22:09.656 "io_unit_size": 131072, 00:22:09.656 "max_aq_depth": 128, 00:22:09.656 "num_shared_buffers": 511, 00:22:09.656 "buf_cache_size": 4294967295, 00:22:09.656 "dif_insert_or_strip": false, 00:22:09.656 "zcopy": false, 00:22:09.656 "c2h_success": false, 00:22:09.656 "sock_priority": 0, 00:22:09.656 "abort_timeout_sec": 1, 00:22:09.656 "ack_timeout": 0, 00:22:09.656 "data_wr_pool_size": 0 00:22:09.656 } 00:22:09.656 }, 00:22:09.656 { 00:22:09.656 "method": "nvmf_create_subsystem", 00:22:09.656 "params": { 00:22:09.656 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:09.656 "allow_any_host": false, 00:22:09.656 "serial_number": "00000000000000000000", 00:22:09.656 "model_number": "SPDK bdev Controller", 00:22:09.657 "max_namespaces": 32, 00:22:09.657 "min_cntlid": 1, 00:22:09.657 "max_cntlid": 65519, 00:22:09.657 "ana_reporting": false 00:22:09.657 } 00:22:09.657 }, 00:22:09.657 { 00:22:09.657 "method": "nvmf_subsystem_add_host", 00:22:09.657 "params": { 00:22:09.657 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:09.657 "host": "nqn.2016-06.io.spdk:host1", 00:22:09.657 "psk": "key0" 00:22:09.657 } 00:22:09.657 }, 00:22:09.657 { 00:22:09.657 "method": "nvmf_subsystem_add_ns", 00:22:09.657 "params": { 00:22:09.657 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:09.657 "namespace": { 00:22:09.657 "nsid": 1, 00:22:09.657 "bdev_name": "malloc0", 00:22:09.657 "nguid": "5F5B945498C94740BE576AC67185F100", 00:22:09.657 "uuid": "5f5b9454-98c9-4740-be57-6ac67185f100", 00:22:09.657 "no_auto_visible": false 00:22:09.657 } 00:22:09.657 } 00:22:09.657 }, 00:22:09.657 { 00:22:09.657 "method": "nvmf_subsystem_add_listener", 00:22:09.657 "params": { 00:22:09.657 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:09.657 "listen_address": { 00:22:09.657 "trtype": "TCP", 00:22:09.657 "adrfam": "IPv4", 00:22:09.657 "traddr": "10.0.0.2", 00:22:09.657 "trsvcid": "4420" 00:22:09.657 }, 00:22:09.657 "secure_channel": true 
00:22:09.657 } 00:22:09.657 } 00:22:09.657 ] 00:22:09.657 } 00:22:09.657 ] 00:22:09.657 }' 00:22:09.657 16:30:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3144825 00:22:09.657 16:30:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3144825 00:22:09.657 16:30:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:22:09.657 16:30:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 3144825 ']' 00:22:09.657 16:30:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:09.657 16:30:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:22:09.657 16:30:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:09.657 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:09.657 16:30:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:22:09.657 16:30:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:09.657 [2024-06-07 16:30:36.493787] Starting SPDK v24.09-pre git sha1 5a57befde / DPDK 24.03.0 initialization... 00:22:09.657 [2024-06-07 16:30:36.493839] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:09.918 EAL: No free 2048 kB hugepages reported on node 1 00:22:09.918 [2024-06-07 16:30:36.558188] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:09.918 [2024-06-07 16:30:36.622801] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
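The JSON document echoed above never touches disk: nvmfappstart launches nvmf_tgt with `-c /dev/fd/62`, so the config arrives through process substitution. A minimal sketch of that mechanism, using `cat` as a hypothetical stand-in for nvmf_tgt and a trimmed one-subsystem config string:

```shell
# Trimmed config document (assumption: a real run passes the full
# subsystems JSON shown in the log above).
config='{"subsystems":[{"subsystem":"keyring","config":[]}]}'

# <(...) expands to a /dev/fd/N path; the child process opens that
# path and reads the echoed JSON as if it were a regular config file.
# Here `cat` plays the role of nvmf_tgt just to show the plumbing.
received=$(cat <(printf '%s' "$config"))

printf '%s\n' "$received"
```

This is why the same config can be handed to two different processes (nvmf_tgt and, later, bdevperf via `/dev/fd/63`) without any temp-file cleanup.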
00:22:09.918 [2024-06-07 16:30:36.622835] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:09.918 [2024-06-07 16:30:36.622842] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:09.918 [2024-06-07 16:30:36.622849] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:09.918 [2024-06-07 16:30:36.622854] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:09.918 [2024-06-07 16:30:36.622906] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:22:10.179 [2024-06-07 16:30:36.820214] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:10.179 [2024-06-07 16:30:36.852222] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:10.179 [2024-06-07 16:30:36.869702] tcp.c: 982:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:10.473 16:30:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:22:10.473 16:30:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:22:10.473 16:30:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:10.473 16:30:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@729 -- # xtrace_disable 00:22:10.473 16:30:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:10.473 16:30:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:10.473 16:30:37 nvmf_tcp.nvmf_tls -- target/tls.sh@272 -- # bdevperf_pid=3144900 00:22:10.473 16:30:37 nvmf_tcp.nvmf_tls -- target/tls.sh@273 -- # waitforlisten 3144900 /var/tmp/bdevperf.sock 00:22:10.473 16:30:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 3144900 ']' 00:22:10.473 16:30:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local 
rpc_addr=/var/tmp/bdevperf.sock 00:22:10.473 16:30:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:22:10.473 16:30:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:10.473 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:10.473 16:30:37 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:22:10.473 16:30:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:22:10.473 16:30:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:10.473 16:30:37 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # echo '{ 00:22:10.473 "subsystems": [ 00:22:10.473 { 00:22:10.473 "subsystem": "keyring", 00:22:10.473 "config": [ 00:22:10.473 { 00:22:10.473 "method": "keyring_file_add_key", 00:22:10.473 "params": { 00:22:10.473 "name": "key0", 00:22:10.473 "path": "/tmp/tmp.hJNDrTDA8u" 00:22:10.473 } 00:22:10.473 } 00:22:10.473 ] 00:22:10.473 }, 00:22:10.473 { 00:22:10.473 "subsystem": "iobuf", 00:22:10.473 "config": [ 00:22:10.473 { 00:22:10.473 "method": "iobuf_set_options", 00:22:10.473 "params": { 00:22:10.473 "small_pool_count": 8192, 00:22:10.473 "large_pool_count": 1024, 00:22:10.473 "small_bufsize": 8192, 00:22:10.473 "large_bufsize": 135168 00:22:10.473 } 00:22:10.473 } 00:22:10.473 ] 00:22:10.473 }, 00:22:10.473 { 00:22:10.473 "subsystem": "sock", 00:22:10.473 "config": [ 00:22:10.473 { 00:22:10.473 "method": "sock_set_default_impl", 00:22:10.473 "params": { 00:22:10.473 "impl_name": "posix" 00:22:10.473 } 00:22:10.473 }, 00:22:10.473 { 00:22:10.473 "method": "sock_impl_set_options", 00:22:10.473 "params": { 00:22:10.473 "impl_name": "ssl", 00:22:10.473 "recv_buf_size": 4096, 00:22:10.473 "send_buf_size": 4096, 
00:22:10.473 "enable_recv_pipe": true, 00:22:10.473 "enable_quickack": false, 00:22:10.473 "enable_placement_id": 0, 00:22:10.473 "enable_zerocopy_send_server": true, 00:22:10.473 "enable_zerocopy_send_client": false, 00:22:10.473 "zerocopy_threshold": 0, 00:22:10.473 "tls_version": 0, 00:22:10.473 "enable_ktls": false, 00:22:10.473 "enable_new_session_tickets": true 00:22:10.473 } 00:22:10.473 }, 00:22:10.473 { 00:22:10.473 "method": "sock_impl_set_options", 00:22:10.473 "params": { 00:22:10.473 "impl_name": "posix", 00:22:10.473 "recv_buf_size": 2097152, 00:22:10.473 "send_buf_size": 2097152, 00:22:10.473 "enable_recv_pipe": true, 00:22:10.473 "enable_quickack": false, 00:22:10.473 "enable_placement_id": 0, 00:22:10.473 "enable_zerocopy_send_server": true, 00:22:10.473 "enable_zerocopy_send_client": false, 00:22:10.473 "zerocopy_threshold": 0, 00:22:10.473 "tls_version": 0, 00:22:10.473 "enable_ktls": false, 00:22:10.473 "enable_new_session_tickets": false 00:22:10.473 } 00:22:10.473 } 00:22:10.473 ] 00:22:10.473 }, 00:22:10.473 { 00:22:10.473 "subsystem": "vmd", 00:22:10.473 "config": [] 00:22:10.473 }, 00:22:10.473 { 00:22:10.473 "subsystem": "accel", 00:22:10.473 "config": [ 00:22:10.473 { 00:22:10.473 "method": "accel_set_options", 00:22:10.473 "params": { 00:22:10.473 "small_cache_size": 128, 00:22:10.473 "large_cache_size": 16, 00:22:10.473 "task_count": 2048, 00:22:10.473 "sequence_count": 2048, 00:22:10.473 "buf_count": 2048 00:22:10.473 } 00:22:10.473 } 00:22:10.473 ] 00:22:10.473 }, 00:22:10.473 { 00:22:10.473 "subsystem": "bdev", 00:22:10.473 "config": [ 00:22:10.473 { 00:22:10.473 "method": "bdev_set_options", 00:22:10.473 "params": { 00:22:10.473 "bdev_io_pool_size": 65535, 00:22:10.473 "bdev_io_cache_size": 256, 00:22:10.473 "bdev_auto_examine": true, 00:22:10.473 "iobuf_small_cache_size": 128, 00:22:10.473 "iobuf_large_cache_size": 16 00:22:10.473 } 00:22:10.473 }, 00:22:10.473 { 00:22:10.473 "method": "bdev_raid_set_options", 00:22:10.473 
"params": { 00:22:10.473 "process_window_size_kb": 1024 00:22:10.473 } 00:22:10.473 }, 00:22:10.473 { 00:22:10.473 "method": "bdev_iscsi_set_options", 00:22:10.473 "params": { 00:22:10.473 "timeout_sec": 30 00:22:10.473 } 00:22:10.473 }, 00:22:10.473 { 00:22:10.473 "method": "bdev_nvme_set_options", 00:22:10.473 "params": { 00:22:10.473 "action_on_timeout": "none", 00:22:10.473 "timeout_us": 0, 00:22:10.473 "timeout_admin_us": 0, 00:22:10.473 "keep_alive_timeout_ms": 10000, 00:22:10.473 "arbitration_burst": 0, 00:22:10.473 "low_priority_weight": 0, 00:22:10.473 "medium_priority_weight": 0, 00:22:10.473 "high_priority_weight": 0, 00:22:10.473 "nvme_adminq_poll_period_us": 10000, 00:22:10.473 "nvme_ioq_poll_period_us": 0, 00:22:10.473 "io_queue_requests": 512, 00:22:10.473 "delay_cmd_submit": true, 00:22:10.474 "transport_retry_count": 4, 00:22:10.474 "bdev_retry_count": 3, 00:22:10.474 "transport_ack_timeout": 0, 00:22:10.474 "ctrlr_loss_timeout_sec": 0, 00:22:10.474 "reconnect_delay_sec": 0, 00:22:10.474 "fast_io_fail_timeout_sec": 0, 00:22:10.474 "disable_auto_failback": false, 00:22:10.474 "generate_uuids": false, 00:22:10.474 "transport_tos": 0, 00:22:10.474 "nvme_error_stat": false, 00:22:10.474 "rdma_srq_size": 0, 00:22:10.474 "io_path_stat": false, 00:22:10.474 "allow_accel_sequence": false, 00:22:10.474 "rdma_max_cq_size": 0, 00:22:10.474 "rdma_cm_event_timeout_ms": 0, 00:22:10.474 "dhchap_digests": [ 00:22:10.474 "sha256", 00:22:10.474 "sha384", 00:22:10.474 "sha512" 00:22:10.474 ], 00:22:10.474 "dhchap_dhgroups": [ 00:22:10.474 "null", 00:22:10.474 "ffdhe2048", 00:22:10.474 "ffdhe3072", 00:22:10.474 "ffdhe4096", 00:22:10.474 "ffdhe6144", 00:22:10.474 "ffdhe8192" 00:22:10.474 ] 00:22:10.474 } 00:22:10.474 }, 00:22:10.474 { 00:22:10.474 "method": "bdev_nvme_attach_controller", 00:22:10.474 "params": { 00:22:10.474 "name": "nvme0", 00:22:10.474 "trtype": "TCP", 00:22:10.474 "adrfam": "IPv4", 00:22:10.474 "traddr": "10.0.0.2", 00:22:10.474 "trsvcid": "4420", 
00:22:10.474 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:10.474 "prchk_reftag": false, 00:22:10.474 "prchk_guard": false, 00:22:10.474 "ctrlr_loss_timeout_sec": 0, 00:22:10.474 "reconnect_delay_sec": 0, 00:22:10.474 "fast_io_fail_timeout_sec": 0, 00:22:10.474 "psk": "key0", 00:22:10.474 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:10.474 "hdgst": false, 00:22:10.474 "ddgst": false 00:22:10.474 } 00:22:10.474 }, 00:22:10.474 { 00:22:10.474 "method": "bdev_nvme_set_hotplug", 00:22:10.474 "params": { 00:22:10.474 "period_us": 100000, 00:22:10.474 "enable": false 00:22:10.474 } 00:22:10.474 }, 00:22:10.474 { 00:22:10.474 "method": "bdev_enable_histogram", 00:22:10.474 "params": { 00:22:10.474 "name": "nvme0n1", 00:22:10.474 "enable": true 00:22:10.474 } 00:22:10.474 }, 00:22:10.474 { 00:22:10.474 "method": "bdev_wait_for_examine" 00:22:10.474 } 00:22:10.474 ] 00:22:10.474 }, 00:22:10.474 { 00:22:10.474 "subsystem": "nbd", 00:22:10.474 "config": [] 00:22:10.474 } 00:22:10.474 ] 00:22:10.474 }' 00:22:10.740 [2024-06-07 16:30:37.340684] Starting SPDK v24.09-pre git sha1 5a57befde / DPDK 24.03.0 initialization... 
00:22:10.740 [2024-06-07 16:30:37.340733] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3144900 ] 00:22:10.740 EAL: No free 2048 kB hugepages reported on node 1 00:22:10.740 [2024-06-07 16:30:37.412706] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:10.740 [2024-06-07 16:30:37.466240] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:22:11.001 [2024-06-07 16:30:37.599790] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:11.572 16:30:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:22:11.572 16:30:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:22:11.572 16:30:38 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:11.572 16:30:38 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # jq -r '.[].name' 00:22:11.572 16:30:38 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:11.572 16:30:38 nvmf_tcp.nvmf_tls -- target/tls.sh@276 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:11.572 Running I/O for 1 seconds... 
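The tls.sh@275 step above verifies the attached controller is named `nvme0` by piping the `bdev_nvme_get_controllers` RPC output through `jq -r '.[].name'`. A rough equivalent with a hard-coded sample response, using a `sed` expression as a jq stand-in so the sketch has no external dependency (the response shape is an assumption based on the field the test extracts):

```shell
# Sample output shaped like a bdev_nvme_get_controllers RPC response.
controllers='[{"name": "nvme0", "ctrlrs": []}]'

# Equivalent of `jq -r '.[].name'` for this single-entry case:
# capture the value of the first "name" key.
name=$(printf '%s\n' "$controllers" | sed -n 's/.*"name": "\([^"]*\)".*/\1/p')

# The script then pattern-matches the result, e.g. [[ $name == nvme0 ]].
[ "$name" = "nvme0" ] && echo "controller up: $name"
```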
00:22:12.958 00:22:12.958 Latency(us) 00:22:12.958 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:12.958 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:22:12.958 Verification LBA range: start 0x0 length 0x2000 00:22:12.958 nvme0n1 : 1.07 2568.98 10.04 0.00 0.00 48467.69 5734.40 63351.47 00:22:12.958 =================================================================================================================== 00:22:12.958 Total : 2568.98 10.04 0.00 0.00 48467.69 5734.40 63351.47 00:22:12.958 0 00:22:12.958 16:30:39 nvmf_tcp.nvmf_tls -- target/tls.sh@278 -- # trap - SIGINT SIGTERM EXIT 00:22:12.958 16:30:39 nvmf_tcp.nvmf_tls -- target/tls.sh@279 -- # cleanup 00:22:12.958 16:30:39 nvmf_tcp.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:22:12.958 16:30:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@807 -- # type=--id 00:22:12.958 16:30:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@808 -- # id=0 00:22:12.958 16:30:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@809 -- # '[' --id = --pid ']' 00:22:12.958 16:30:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@813 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:22:12.958 16:30:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@813 -- # shm_files=nvmf_trace.0 00:22:12.958 16:30:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@815 -- # [[ -z nvmf_trace.0 ]] 00:22:12.958 16:30:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@819 -- # for n in $shm_files 00:22:12.958 16:30:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@820 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:22:12.958 nvmf_trace.0 00:22:12.958 16:30:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@822 -- # return 0 00:22:12.958 16:30:39 nvmf_tcp.nvmf_tls -- target/tls.sh@16 -- # killprocess 3144900 00:22:12.958 16:30:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 3144900 ']' 
00:22:12.958 16:30:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 3144900 00:22:12.958 16:30:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:22:12.958 16:30:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:22:12.958 16:30:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3144900 00:22:12.958 16:30:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:22:12.958 16:30:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:22:12.958 16:30:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3144900' 00:22:12.958 killing process with pid 3144900 00:22:12.958 16:30:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 3144900 00:22:12.958 Received shutdown signal, test time was about 1.000000 seconds 00:22:12.958 00:22:12.958 Latency(us) 00:22:12.958 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:12.958 =================================================================================================================== 00:22:12.959 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:12.959 16:30:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 3144900 00:22:12.959 16:30:39 nvmf_tcp.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:22:12.959 16:30:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:12.959 16:30:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@117 -- # sync 00:22:12.959 16:30:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:12.959 16:30:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@120 -- # set +e 00:22:12.959 16:30:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:12.959 16:30:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:12.959 rmmod nvme_tcp 00:22:12.959 rmmod nvme_fabrics 00:22:12.959 rmmod nvme_keyring 00:22:12.959 16:30:39 
nvmf_tcp.nvmf_tls -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:12.959 16:30:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@124 -- # set -e 00:22:12.959 16:30:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@125 -- # return 0 00:22:12.959 16:30:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@489 -- # '[' -n 3144825 ']' 00:22:12.959 16:30:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@490 -- # killprocess 3144825 00:22:12.959 16:30:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 3144825 ']' 00:22:12.959 16:30:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 3144825 00:22:12.959 16:30:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:22:12.959 16:30:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:22:12.959 16:30:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3144825 00:22:13.220 16:30:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:22:13.221 16:30:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:22:13.221 16:30:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3144825' 00:22:13.221 killing process with pid 3144825 00:22:13.221 16:30:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 3144825 00:22:13.221 16:30:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 3144825 00:22:13.221 16:30:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:13.221 16:30:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:13.221 16:30:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:13.221 16:30:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:13.221 16:30:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:13.221 16:30:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 
00:22:13.221 16:30:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:13.221 16:30:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:15.768 16:30:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:15.768 16:30:42 nvmf_tcp.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.GDZ8rZupnu /tmp/tmp.lWjlSLdNjl /tmp/tmp.hJNDrTDA8u 00:22:15.768 00:22:15.768 real 1m23.681s 00:22:15.768 user 2m8.340s 00:22:15.768 sys 0m27.765s 00:22:15.768 16:30:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1125 -- # xtrace_disable 00:22:15.768 16:30:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:15.768 ************************************ 00:22:15.768 END TEST nvmf_tls 00:22:15.768 ************************************ 00:22:15.768 16:30:42 nvmf_tcp -- nvmf/nvmf.sh@62 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:22:15.768 16:30:42 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:22:15.768 16:30:42 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:22:15.768 16:30:42 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:15.768 ************************************ 00:22:15.768 START TEST nvmf_fips 00:22:15.768 ************************************ 00:22:15.768 16:30:42 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:22:15.768 * Looking for test storage... 
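The killprocess helper used repeatedly in this run (autotest_common.sh@949–968) probes the pid with `kill -0`, then inspects the process's command name with `ps --no-headers -o comm=` before sending the real signal, so a recycled pid belonging to some unrelated process is never killed by mistake. A small sketch of that guard, run against our own pid for safety (`ps -o comm= -p` is used here as the portable spelling of the log's `ps --no-headers -o comm=`):

```shell
# Demo only: use our own pid instead of a target pid from the test.
pid=$$

# kill -0 sends no signal; it only checks the pid exists and is ours.
if kill -0 "$pid" 2>/dev/null; then
    # Look up the command name before deciding to signal it.
    comm=$(ps -o comm= -p "$pid")
    echo "would kill $pid ($comm)"
fi
```

The real helper additionally refuses to kill when the command name is `sudo`, and follows the kill with a `wait` on the pid (autotest_common.sh@973) to reap it.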
00:22:15.768 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:22:15.768 16:30:42 nvmf_tcp.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:15.768 16:30:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:22:15.768 16:30:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:15.768 16:30:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:15.768 16:30:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:15.768 16:30:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:15.768 16:30:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:15.768 16:30:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:15.768 16:30:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:15.768 16:30:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:15.768 16:30:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:15.768 16:30:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:15.768 16:30:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:15.768 16:30:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:15.768 16:30:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:15.768 16:30:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:15.768 16:30:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:15.768 16:30:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:15.768 16:30:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:15.768 16:30:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:15.768 16:30:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:15.768 16:30:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:15.768 16:30:42 nvmf_tcp.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:15.768 16:30:42 nvmf_tcp.nvmf_fips -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:15.768 16:30:42 nvmf_tcp.nvmf_fips -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:15.768 16:30:42 nvmf_tcp.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:22:15.768 16:30:42 nvmf_tcp.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:15.768 16:30:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@47 -- # : 0 00:22:15.768 16:30:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:15.768 16:30:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:15.768 16:30:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:15.768 16:30:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:15.768 16:30:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:15.768 16:30:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:15.768 16:30:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:15.768 16:30:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@51 -- # 
have_pci_nics=0 00:22:15.768 16:30:42 nvmf_tcp.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:15.768 16:30:42 nvmf_tcp.nvmf_fips -- fips/fips.sh@89 -- # check_openssl_version 00:22:15.768 16:30:42 nvmf_tcp.nvmf_fips -- fips/fips.sh@83 -- # local target=3.0.0 00:22:15.768 16:30:42 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # openssl version 00:22:15.768 16:30:42 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # awk '{print $2}' 00:22:15.768 16:30:42 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:22:15.768 16:30:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:22:15.768 16:30:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@330 -- # local ver1 ver1_l 00:22:15.768 16:30:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@331 -- # local ver2 ver2_l 00:22:15.768 16:30:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # IFS=.-: 00:22:15.768 16:30:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # read -ra ver1 00:22:15.768 16:30:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # IFS=.-: 00:22:15.768 16:30:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # read -ra ver2 00:22:15.768 16:30:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@335 -- # local 'op=>=' 00:22:15.768 16:30:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@337 -- # ver1_l=3 00:22:15.768 16:30:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@338 -- # ver2_l=3 00:22:15.768 16:30:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 v 00:22:15.768 16:30:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@341 -- # case "$op" in 00:22:15.768 16:30:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:22:15.768 16:30:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v = 0 )) 00:22:15.768 16:30:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:15.768 16:30:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 3 00:22:15.768 16:30:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:22:15.768 16:30:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:22:15.768 16:30:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:22:15.768 16:30:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=3 00:22:15.768 16:30:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 3 00:22:15.768 16:30:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:22:15.768 16:30:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:22:15.768 16:30:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:22:15.768 16:30:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=3 00:22:15.769 16:30:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:22:15.769 16:30:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:22:15.769 16:30:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:22:15.769 16:30:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:15.769 16:30:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 0 00:22:15.769 16:30:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:22:15.769 16:30:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:22:15.769 16:30:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:22:15.769 16:30:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=0 00:22:15.769 16:30:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:22:15.769 16:30:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:22:15.769 16:30:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:22:15.769 16:30:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:22:15.769 16:30:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:22:15.769 16:30:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:22:15.769 16:30:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:22:15.769 16:30:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:22:15.769 16:30:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:15.769 16:30:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 9 00:22:15.769 16:30:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=9 00:22:15.769 16:30:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:22:15.769 16:30:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 9 00:22:15.769 16:30:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=9 00:22:15.769 16:30:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:22:15.769 16:30:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:22:15.769 16:30:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:22:15.769 16:30:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:22:15.769 16:30:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:22:15.769 16:30:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:22:15.769 16:30:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # return 0 00:22:15.769 16:30:42 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # openssl info -modulesdir 00:22:15.769 16:30:42 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:22:15.769 16:30:42 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:22:15.769 16:30:42 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:22:15.769 16:30:42 nvmf_tcp.nvmf_fips -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:22:15.769 16:30:42 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:22:15.769 16:30:42 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # callback=build_openssl_config 00:22:15.769 16:30:42 nvmf_tcp.nvmf_fips -- fips/fips.sh@113 -- # build_openssl_config 00:22:15.769 16:30:42 nvmf_tcp.nvmf_fips -- fips/fips.sh@37 -- # cat 00:22:15.769 16:30:42 nvmf_tcp.nvmf_fips -- fips/fips.sh@57 -- # [[ ! 
-t 0 ]] 00:22:15.769 16:30:42 nvmf_tcp.nvmf_fips -- fips/fips.sh@58 -- # cat - 00:22:15.769 16:30:42 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:22:15.769 16:30:42 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:22:15.769 16:30:42 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # mapfile -t providers 00:22:15.769 16:30:42 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # openssl list -providers 00:22:15.769 16:30:42 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # grep name 00:22:15.769 16:30:42 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:22:15.769 16:30:42 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:22:15.769 16:30:42 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:22:15.769 16:30:42 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:22:15.769 16:30:42 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@649 -- # local es=0 00:22:15.769 16:30:42 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # valid_exec_arg openssl md5 /dev/fd/62 00:22:15.769 16:30:42 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@637 -- # local arg=openssl 00:22:15.769 16:30:42 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # : 00:22:15.769 16:30:42 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:22:15.769 16:30:42 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@641 -- # type -t openssl 00:22:15.769 16:30:42 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:22:15.769 16:30:42 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@643 -- # type -P openssl 00:22:15.769 16:30:42 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:22:15.769 16:30:42 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@643 -- # arg=/usr/bin/openssl 00:22:15.769 16:30:42 nvmf_tcp.nvmf_fips -- 
common/autotest_common.sh@643 -- # [[ -x /usr/bin/openssl ]] 00:22:15.769 16:30:42 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@652 -- # openssl md5 /dev/fd/62 00:22:15.769 Error setting digest 00:22:15.769 00C2C3E2277F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:22:15.769 00C2C3E2277F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:22:15.769 16:30:42 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@652 -- # es=1 00:22:15.769 16:30:42 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:22:15.769 16:30:42 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:22:15.769 16:30:42 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:22:15.769 16:30:42 nvmf_tcp.nvmf_fips -- fips/fips.sh@130 -- # nvmftestinit 00:22:15.769 16:30:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:15.769 16:30:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:15.769 16:30:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:15.769 16:30:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:15.769 16:30:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:15.769 16:30:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:15.769 16:30:42 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:15.769 16:30:42 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:15.769 16:30:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:15.769 16:30:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:15.769 16:30:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@285 -- # 
xtrace_disable 00:22:15.769 16:30:42 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:22:22.377 16:30:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:22.377 16:30:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # pci_devs=() 00:22:22.377 16:30:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:22.377 16:30:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:22.377 16:30:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:22.377 16:30:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:22.377 16:30:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:22.377 16:30:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # net_devs=() 00:22:22.377 16:30:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:22.377 16:30:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # e810=() 00:22:22.377 16:30:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # local -ga e810 00:22:22.377 16:30:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # x722=() 00:22:22.377 16:30:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # local -ga x722 00:22:22.377 16:30:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # mlx=() 00:22:22.377 16:30:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # local -ga mlx 00:22:22.377 16:30:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:22.377 16:30:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:22.377 16:30:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:22.377 16:30:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:22.377 16:30:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:22.377 16:30:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@310 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:22.377 16:30:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:22.377 16:30:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:22.377 16:30:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:22.377 16:30:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:22.377 16:30:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:22.377 16:30:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:22.377 16:30:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:22.377 16:30:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:22.377 16:30:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:22.377 16:30:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:22.377 16:30:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:22.377 16:30:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:22.377 16:30:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:22:22.377 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:22:22.377 16:30:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:22.377 16:30:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:22.377 16:30:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:22.377 16:30:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:22.377 16:30:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:22.377 16:30:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:22.377 16:30:49 nvmf_tcp.nvmf_fips -- 
nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:22:22.377 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:22:22.377 16:30:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:22.377 16:30:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:22.377 16:30:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:22.377 16:30:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:22.378 16:30:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:22.378 16:30:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:22.378 16:30:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:22.378 16:30:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:22.378 16:30:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:22.378 16:30:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:22.378 16:30:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:22.378 16:30:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:22.378 16:30:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:22.378 16:30:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:22.378 16:30:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:22.378 16:30:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:22:22.378 Found net devices under 0000:4b:00.0: cvl_0_0 00:22:22.378 16:30:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:22.378 16:30:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:22.378 16:30:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:22:22.378 16:30:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:22.378 16:30:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:22.378 16:30:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:22.378 16:30:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:22.378 16:30:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:22.378 16:30:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:22:22.378 Found net devices under 0000:4b:00.1: cvl_0_1 00:22:22.378 16:30:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:22.378 16:30:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:22.378 16:30:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # is_hw=yes 00:22:22.378 16:30:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:22.378 16:30:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:22.378 16:30:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:22.378 16:30:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:22.378 16:30:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:22.378 16:30:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:22.378 16:30:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:22.378 16:30:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:22.378 16:30:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:22.378 16:30:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:22.378 16:30:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:22.378 16:30:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@243 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:22.378 16:30:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:22.378 16:30:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:22.378 16:30:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:22.378 16:30:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:22.637 16:30:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:22.637 16:30:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:22.637 16:30:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:22.637 16:30:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:22.637 16:30:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:22.637 16:30:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:22.637 16:30:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:22.637 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:22.637 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.639 ms 00:22:22.637 00:22:22.637 --- 10.0.0.2 ping statistics --- 00:22:22.637 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:22.637 rtt min/avg/max/mdev = 0.639/0.639/0.639/0.000 ms 00:22:22.637 16:30:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:22.637 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:22.637 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.358 ms 00:22:22.637 00:22:22.637 --- 10.0.0.1 ping statistics --- 00:22:22.637 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:22.637 rtt min/avg/max/mdev = 0.358/0.358/0.358/0.000 ms 00:22:22.637 16:30:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:22.637 16:30:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@422 -- # return 0 00:22:22.637 16:30:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:22.637 16:30:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:22.637 16:30:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:22.637 16:30:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:22.637 16:30:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:22.637 16:30:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:22.637 16:30:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:22.897 16:30:49 nvmf_tcp.nvmf_fips -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:22:22.897 16:30:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:22.897 16:30:49 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@723 -- # xtrace_disable 00:22:22.897 16:30:49 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:22:22.897 16:30:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@481 -- # nvmfpid=3149549 00:22:22.897 16:30:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@482 -- # waitforlisten 3149549 00:22:22.897 16:30:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:22.897 16:30:49 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@830 -- # '[' -z 3149549 ']' 00:22:22.897 16:30:49 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # 
local rpc_addr=/var/tmp/spdk.sock 00:22:22.897 16:30:49 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@835 -- # local max_retries=100 00:22:22.897 16:30:49 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:22.897 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:22.897 16:30:49 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@839 -- # xtrace_disable 00:22:22.897 16:30:49 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:22:22.897 [2024-06-07 16:30:49.595835] Starting SPDK v24.09-pre git sha1 5a57befde / DPDK 24.03.0 initialization... 00:22:22.897 [2024-06-07 16:30:49.595902] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:22.897 EAL: No free 2048 kB hugepages reported on node 1 00:22:22.897 [2024-06-07 16:30:49.686091] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:23.157 [2024-06-07 16:30:49.780065] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:23.157 [2024-06-07 16:30:49.780124] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:23.157 [2024-06-07 16:30:49.780132] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:23.157 [2024-06-07 16:30:49.780139] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:23.157 [2024-06-07 16:30:49.780145] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:23.157 [2024-06-07 16:30:49.780169] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:22:23.728 16:30:50 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:22:23.728 16:30:50 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@863 -- # return 0 00:22:23.728 16:30:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:23.728 16:30:50 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@729 -- # xtrace_disable 00:22:23.728 16:30:50 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:22:23.728 16:30:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:23.728 16:30:50 nvmf_tcp.nvmf_fips -- fips/fips.sh@133 -- # trap cleanup EXIT 00:22:23.728 16:30:50 nvmf_tcp.nvmf_fips -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:22:23.728 16:30:50 nvmf_tcp.nvmf_fips -- fips/fips.sh@137 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:22:23.728 16:30:50 nvmf_tcp.nvmf_fips -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:22:23.728 16:30:50 nvmf_tcp.nvmf_fips -- fips/fips.sh@139 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:22:23.728 16:30:50 nvmf_tcp.nvmf_fips -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:22:23.728 16:30:50 nvmf_tcp.nvmf_fips -- fips/fips.sh@22 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:22:23.728 16:30:50 nvmf_tcp.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:23.728 [2024-06-07 16:30:50.551808] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:23.728 [2024-06-07 16:30:50.567790] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS 
support is considered experimental 00:22:23.728 [2024-06-07 16:30:50.568036] tcp.c: 982:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:23.988 [2024-06-07 16:30:50.597970] tcp.c:3685:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:22:23.988 malloc0 00:22:23.988 16:30:50 nvmf_tcp.nvmf_fips -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:23.988 16:30:50 nvmf_tcp.nvmf_fips -- fips/fips.sh@147 -- # bdevperf_pid=3149900 00:22:23.988 16:30:50 nvmf_tcp.nvmf_fips -- fips/fips.sh@145 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:23.988 16:30:50 nvmf_tcp.nvmf_fips -- fips/fips.sh@148 -- # waitforlisten 3149900 /var/tmp/bdevperf.sock 00:22:23.988 16:30:50 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@830 -- # '[' -z 3149900 ']' 00:22:23.988 16:30:50 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:23.988 16:30:50 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@835 -- # local max_retries=100 00:22:23.989 16:30:50 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:23.989 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:23.989 16:30:50 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@839 -- # xtrace_disable 00:22:23.989 16:30:50 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:22:23.989 [2024-06-07 16:30:50.691236] Starting SPDK v24.09-pre git sha1 5a57befde / DPDK 24.03.0 initialization... 
00:22:23.989 [2024-06-07 16:30:50.691315] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3149900 ] 00:22:23.989 EAL: No free 2048 kB hugepages reported on node 1 00:22:23.989 [2024-06-07 16:30:50.748257] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:23.989 [2024-06-07 16:30:50.812094] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 2 00:22:24.934 16:30:51 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:22:24.934 16:30:51 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@863 -- # return 0 00:22:24.934 16:30:51 nvmf_tcp.nvmf_fips -- fips/fips.sh@150 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:22:24.934 [2024-06-07 16:30:51.595924] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:24.934 [2024-06-07 16:30:51.595990] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:24.934 TLSTESTn1 00:22:24.934 16:30:51 nvmf_tcp.nvmf_fips -- fips/fips.sh@154 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:24.934 Running I/O for 10 seconds... 
00:22:37.173 00:22:37.173 Latency(us) 00:22:37.173 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:37.173 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:37.173 Verification LBA range: start 0x0 length 0x2000 00:22:37.173 TLSTESTn1 : 10.02 5251.34 20.51 0.00 0.00 24333.53 5980.16 64662.19 00:22:37.173 =================================================================================================================== 00:22:37.173 Total : 5251.34 20.51 0.00 0.00 24333.53 5980.16 64662.19 00:22:37.173 0 00:22:37.173 16:31:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:22:37.173 16:31:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:22:37.173 16:31:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@807 -- # type=--id 00:22:37.173 16:31:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@808 -- # id=0 00:22:37.173 16:31:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@809 -- # '[' --id = --pid ']' 00:22:37.173 16:31:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@813 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:22:37.173 16:31:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@813 -- # shm_files=nvmf_trace.0 00:22:37.173 16:31:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@815 -- # [[ -z nvmf_trace.0 ]] 00:22:37.173 16:31:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@819 -- # for n in $shm_files 00:22:37.173 16:31:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@820 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:22:37.173 nvmf_trace.0 00:22:37.173 16:31:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@822 -- # return 0 00:22:37.173 16:31:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@16 -- # killprocess 3149900 00:22:37.173 16:31:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@949 -- # '[' -z 3149900 ']' 00:22:37.173 16:31:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # kill 
-0 3149900 00:22:37.173 16:31:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # uname 00:22:37.173 16:31:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:22:37.173 16:31:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3149900 00:22:37.173 16:31:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@955 -- # process_name=reactor_2 00:22:37.173 16:31:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@959 -- # '[' reactor_2 = sudo ']' 00:22:37.173 16:31:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3149900' 00:22:37.173 killing process with pid 3149900 00:22:37.173 16:31:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@968 -- # kill 3149900 00:22:37.173 Received shutdown signal, test time was about 10.000000 seconds 00:22:37.173 00:22:37.173 Latency(us) 00:22:37.173 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:37.173 =================================================================================================================== 00:22:37.173 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:37.173 [2024-06-07 16:31:01.988474] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:37.173 16:31:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@973 -- # wait 3149900 00:22:37.173 16:31:02 nvmf_tcp.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:22:37.173 16:31:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:37.173 16:31:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@117 -- # sync 00:22:37.173 16:31:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:37.173 16:31:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@120 -- # set +e 00:22:37.173 16:31:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:37.173 16:31:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@122 -- # modprobe -v -r 
nvme-tcp 00:22:37.173 rmmod nvme_tcp 00:22:37.173 rmmod nvme_fabrics 00:22:37.173 rmmod nvme_keyring 00:22:37.173 16:31:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:37.173 16:31:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@124 -- # set -e 00:22:37.173 16:31:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@125 -- # return 0 00:22:37.173 16:31:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@489 -- # '[' -n 3149549 ']' 00:22:37.173 16:31:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@490 -- # killprocess 3149549 00:22:37.173 16:31:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@949 -- # '[' -z 3149549 ']' 00:22:37.173 16:31:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # kill -0 3149549 00:22:37.173 16:31:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # uname 00:22:37.173 16:31:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:22:37.173 16:31:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3149549 00:22:37.173 16:31:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:22:37.173 16:31:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:22:37.173 16:31:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3149549' 00:22:37.173 killing process with pid 3149549 00:22:37.173 16:31:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@968 -- # kill 3149549 00:22:37.173 [2024-06-07 16:31:02.237560] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:22:37.173 16:31:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@973 -- # wait 3149549 00:22:37.173 16:31:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:37.173 16:31:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:37.173 16:31:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@496 -- # nvmf_tcp_fini 
00:22:37.173 16:31:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:37.173 16:31:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:37.173 16:31:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:37.173 16:31:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:37.173 16:31:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:37.745 16:31:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:37.745 16:31:04 nvmf_tcp.nvmf_fips -- fips/fips.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:22:37.745 00:22:37.745 real 0m22.296s 00:22:37.745 user 0m23.600s 00:22:37.745 sys 0m9.203s 00:22:37.745 16:31:04 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1125 -- # xtrace_disable 00:22:37.745 16:31:04 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:22:37.745 ************************************ 00:22:37.745 END TEST nvmf_fips 00:22:37.745 ************************************ 00:22:37.745 16:31:04 nvmf_tcp -- nvmf/nvmf.sh@63 -- # run_test nvmf_kernel_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/spdk_vs_kernel_tls.sh --transport=tcp 00:22:37.745 16:31:04 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 4 -le 1 ']' 00:22:37.745 16:31:04 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:22:37.745 16:31:04 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:37.745 ************************************ 00:22:37.745 START TEST nvmf_kernel_tls 00:22:37.745 ************************************ 00:22:37.745 16:31:04 nvmf_tcp.nvmf_kernel_tls -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/spdk_vs_kernel_tls.sh --transport=tcp 00:22:37.745 Joined session keyring: 192142816 00:22:38.007 * Looking for test storage... 00:22:38.007 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:22:38.007 16:31:04 nvmf_tcp.nvmf_kernel_tls -- nvmf/spdk_vs_kernel_tls.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:38.007 16:31:04 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@7 -- # uname -s 00:22:38.007 16:31:04 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:38.007 16:31:04 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:38.007 16:31:04 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:38.007 16:31:04 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:38.007 16:31:04 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:38.007 16:31:04 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:38.007 16:31:04 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:38.007 16:31:04 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:38.007 16:31:04 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:38.007 16:31:04 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:38.007 16:31:04 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:38.007 16:31:04 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:38.007 16:31:04 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:38.007 16:31:04 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 
00:22:38.007 16:31:04 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:38.007 16:31:04 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:38.007 16:31:04 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:38.007 16:31:04 nvmf_tcp.nvmf_kernel_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:38.007 16:31:04 nvmf_tcp.nvmf_kernel_tls -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:38.007 16:31:04 nvmf_tcp.nvmf_kernel_tls -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:38.007 16:31:04 nvmf_tcp.nvmf_kernel_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:38.007 16:31:04 nvmf_tcp.nvmf_kernel_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:38.007 16:31:04 nvmf_tcp.nvmf_kernel_tls -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:38.007 16:31:04 nvmf_tcp.nvmf_kernel_tls -- paths/export.sh@5 -- # export PATH 00:22:38.007 16:31:04 nvmf_tcp.nvmf_kernel_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:38.007 16:31:04 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@47 -- # : 0 00:22:38.007 16:31:04 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:38.007 16:31:04 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:38.007 16:31:04 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:38.007 16:31:04 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:38.007 16:31:04 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:38.007 16:31:04 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:38.007 16:31:04 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 
00:22:38.007 16:31:04 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:38.007 16:31:04 nvmf_tcp.nvmf_kernel_tls -- nvmf/spdk_vs_kernel_tls.sh@13 -- # fio_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper 00:22:38.007 16:31:04 nvmf_tcp.nvmf_kernel_tls -- nvmf/spdk_vs_kernel_tls.sh@14 -- # bdevperf_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py 00:22:38.007 16:31:04 nvmf_tcp.nvmf_kernel_tls -- nvmf/spdk_vs_kernel_tls.sh@16 -- # SPEC_KEY=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:22:38.007 16:31:04 nvmf_tcp.nvmf_kernel_tls -- nvmf/spdk_vs_kernel_tls.sh@19 -- # SPEC_SUBSYSNQN=nqn.2014-08.org.nvmexpress:uuid:36ebf5a9-1df9-47b3-a6d0-e9ba32e428a2 00:22:38.007 16:31:04 nvmf_tcp.nvmf_kernel_tls -- nvmf/spdk_vs_kernel_tls.sh@20 -- # SPEC_HOSTID=f81d4fae-7dec-11d0-a765-00a0c91e6bf6 00:22:38.007 16:31:04 nvmf_tcp.nvmf_kernel_tls -- nvmf/spdk_vs_kernel_tls.sh@21 -- # SPEC_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f81d4fae-7dec-11d0-a765-00a0c91e6bf6 00:22:38.007 16:31:04 nvmf_tcp.nvmf_kernel_tls -- nvmf/spdk_vs_kernel_tls.sh@22 -- # PSK_IDENTITY='NVMe0R01 nqn.2014-08.org.nvmexpress:uuid:f81d4fae-7dec-11d0-a765-00a0c91e6bf6 nqn.2014-08.org.nvmexpress:uuid:36ebf5a9-1df9-47b3-a6d0-e9ba32e428a2' 00:22:38.007 16:31:04 nvmf_tcp.nvmf_kernel_tls -- nvmf/spdk_vs_kernel_tls.sh@23 -- # TLSHD_CONF=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/tlshd.conf 00:22:38.007 16:31:04 nvmf_tcp.nvmf_kernel_tls -- nvmf/spdk_vs_kernel_tls.sh@24 -- # SPDK_PSK_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/key.txt 00:22:38.007 16:31:04 nvmf_tcp.nvmf_kernel_tls -- nvmf/spdk_vs_kernel_tls.sh@25 -- # PSK_NAME=psk0 00:22:38.007 16:31:04 nvmf_tcp.nvmf_kernel_tls -- nvmf/spdk_vs_kernel_tls.sh@26 -- # CONTROLLER_NAME=TLSTEST 00:22:38.007 16:31:04 nvmf_tcp.nvmf_kernel_tls -- nvmf/spdk_vs_kernel_tls.sh@27 -- # nvmet=/sys/kernel/config/nvmet 00:22:38.007 16:31:04 
nvmf_tcp.nvmf_kernel_tls -- nvmf/spdk_vs_kernel_tls.sh@28 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2014-08.org.nvmexpress:uuid:36ebf5a9-1df9-47b3-a6d0-e9ba32e428a2 00:22:38.007 16:31:04 nvmf_tcp.nvmf_kernel_tls -- nvmf/spdk_vs_kernel_tls.sh@29 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2014-08.org.nvmexpress:uuid:f81d4fae-7dec-11d0-a765-00a0c91e6bf6 00:22:38.008 16:31:04 nvmf_tcp.nvmf_kernel_tls -- nvmf/spdk_vs_kernel_tls.sh@30 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:22:38.008 16:31:04 nvmf_tcp.nvmf_kernel_tls -- nvmf/spdk_vs_kernel_tls.sh@31 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2014-08.org.nvmexpress:uuid:36ebf5a9-1df9-47b3-a6d0-e9ba32e428a2 00:22:38.008 16:31:04 nvmf_tcp.nvmf_kernel_tls -- nvmf/spdk_vs_kernel_tls.sh@108 -- # '[' tcp '!=' tcp ']' 00:22:38.008 16:31:04 nvmf_tcp.nvmf_kernel_tls -- nvmf/spdk_vs_kernel_tls.sh@113 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:22:41.360 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:22:41.360 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:22:41.360 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:22:41.360 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:22:41.360 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:22:41.360 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:22:41.360 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:22:41.360 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:22:41.360 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:22:41.360 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:22:41.360 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:22:41.360 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:22:41.360 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:22:41.360 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 
00:22:41.360 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:22:41.360 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:22:41.360 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:22:41.620 16:31:08 nvmf_tcp.nvmf_kernel_tls -- nvmf/spdk_vs_kernel_tls.sh@115 -- # nvmftestinit 00:22:41.620 16:31:08 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:41.620 16:31:08 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:41.620 16:31:08 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:41.620 16:31:08 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:41.620 16:31:08 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:41.620 16:31:08 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:41.620 16:31:08 nvmf_tcp.nvmf_kernel_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:41.620 16:31:08 nvmf_tcp.nvmf_kernel_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:41.620 16:31:08 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:41.620 16:31:08 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:41.620 16:31:08 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@285 -- # xtrace_disable 00:22:41.620 16:31:08 nvmf_tcp.nvmf_kernel_tls -- common/autotest_common.sh@10 -- # set +x 00:22:49.763 16:31:15 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:49.763 16:31:15 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@291 -- # pci_devs=() 00:22:49.763 16:31:15 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:49.763 16:31:15 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:49.763 16:31:15 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:49.763 
16:31:15 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:49.763 16:31:15 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:49.763 16:31:15 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@295 -- # net_devs=() 00:22:49.763 16:31:15 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:49.763 16:31:15 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@296 -- # e810=() 00:22:49.763 16:31:15 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@296 -- # local -ga e810 00:22:49.763 16:31:15 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@297 -- # x722=() 00:22:49.763 16:31:15 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@297 -- # local -ga x722 00:22:49.763 16:31:15 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@298 -- # mlx=() 00:22:49.763 16:31:15 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@298 -- # local -ga mlx 00:22:49.763 16:31:15 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:49.763 16:31:15 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:49.763 16:31:15 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:49.763 16:31:15 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:49.763 16:31:15 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:49.763 16:31:15 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:49.763 16:31:15 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:49.763 16:31:15 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:49.763 16:31:15 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:49.763 16:31:15 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@317 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:49.763 16:31:15 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:49.763 16:31:15 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:49.763 16:31:15 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:49.763 16:31:15 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:49.763 16:31:15 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:49.763 16:31:15 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:49.763 16:31:15 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:49.763 16:31:15 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:49.763 16:31:15 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:22:49.763 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:22:49.763 16:31:15 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:49.763 16:31:15 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:49.763 16:31:15 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:49.763 16:31:15 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:49.763 16:31:15 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:49.763 16:31:15 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:49.763 16:31:15 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:22:49.763 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:22:49.763 16:31:15 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:49.763 16:31:15 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:49.763 16:31:15 nvmf_tcp.nvmf_kernel_tls -- 
nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:49.763 16:31:15 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:49.763 16:31:15 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:49.763 16:31:15 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:49.763 16:31:15 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:49.763 16:31:15 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:49.763 16:31:15 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:49.763 16:31:15 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:49.763 16:31:15 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:49.763 16:31:15 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:49.763 16:31:15 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:49.763 16:31:15 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:49.763 16:31:15 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:49.763 16:31:15 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:22:49.763 Found net devices under 0000:4b:00.0: cvl_0_0 00:22:49.763 16:31:15 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:49.763 16:31:15 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:49.763 16:31:15 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:49.763 16:31:15 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:49.763 16:31:15 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:49.763 16:31:15 
nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:49.763 16:31:15 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:49.763 16:31:15 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:49.763 16:31:15 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:22:49.763 Found net devices under 0000:4b:00.1: cvl_0_1 00:22:49.763 16:31:15 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:49.763 16:31:15 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:49.764 16:31:15 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@414 -- # is_hw=yes 00:22:49.764 16:31:15 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:49.764 16:31:15 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:49.764 16:31:15 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:49.764 16:31:15 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:49.764 16:31:15 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:49.764 16:31:15 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:49.764 16:31:15 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:49.764 16:31:15 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:49.764 16:31:15 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:49.764 16:31:15 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:49.764 16:31:15 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:49.764 16:31:15 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:49.764 16:31:15 
nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:49.764 16:31:15 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:49.764 16:31:15 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:49.764 16:31:15 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:49.764 16:31:15 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:49.764 16:31:15 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:49.764 16:31:15 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:49.764 16:31:15 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:49.764 16:31:15 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:49.764 16:31:15 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:49.764 16:31:15 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:49.764 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:49.764 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.457 ms 00:22:49.764 00:22:49.764 --- 10.0.0.2 ping statistics --- 00:22:49.764 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:49.764 rtt min/avg/max/mdev = 0.457/0.457/0.457/0.000 ms 00:22:49.764 16:31:15 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:49.764 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:49.764 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.325 ms 00:22:49.764 00:22:49.764 --- 10.0.0.1 ping statistics --- 00:22:49.764 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:49.764 rtt min/avg/max/mdev = 0.325/0.325/0.325/0.000 ms 00:22:49.764 16:31:15 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:49.764 16:31:15 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@422 -- # return 0 00:22:49.764 16:31:15 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:49.764 16:31:15 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:49.764 16:31:15 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:49.764 16:31:15 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:49.764 16:31:15 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:49.764 16:31:15 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:49.764 16:31:15 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:49.764 16:31:15 nvmf_tcp.nvmf_kernel_tls -- nvmf/spdk_vs_kernel_tls.sh@117 -- # timing_enter prepare_keyring_and_daemon 00:22:49.764 16:31:15 nvmf_tcp.nvmf_kernel_tls -- common/autotest_common.sh@723 -- # xtrace_disable 00:22:49.764 16:31:15 nvmf_tcp.nvmf_kernel_tls -- common/autotest_common.sh@10 -- # set +x 00:22:49.764 16:31:15 nvmf_tcp.nvmf_kernel_tls -- nvmf/spdk_vs_kernel_tls.sh@119 -- # keyctl show 00:22:49.764 16:31:15 nvmf_tcp.nvmf_kernel_tls -- nvmf/spdk_vs_kernel_tls.sh@119 -- # awk '{print $1}' 00:22:49.764 16:31:15 nvmf_tcp.nvmf_kernel_tls -- nvmf/spdk_vs_kernel_tls.sh@119 -- # tail -1 00:22:49.764 16:31:15 nvmf_tcp.nvmf_kernel_tls -- nvmf/spdk_vs_kernel_tls.sh@119 -- # session_id=192142816 00:22:49.764 16:31:15 nvmf_tcp.nvmf_kernel_tls -- nvmf/spdk_vs_kernel_tls.sh@120 -- # keyring_name=test_192142816 00:22:49.764 
16:31:15 nvmf_tcp.nvmf_kernel_tls -- nvmf/spdk_vs_kernel_tls.sh@121 -- # keyctl newring test_192142816 192142816 00:22:49.764 16:31:15 nvmf_tcp.nvmf_kernel_tls -- nvmf/spdk_vs_kernel_tls.sh@121 -- # keyring_id=362164965 00:22:49.764 16:31:15 nvmf_tcp.nvmf_kernel_tls -- nvmf/spdk_vs_kernel_tls.sh@122 -- # keyctl setperm 362164965 0x3f3f0b00 00:22:49.764 16:31:15 nvmf_tcp.nvmf_kernel_tls -- nvmf/spdk_vs_kernel_tls.sh@124 -- # key_name=test_key_192142816 00:22:49.764 16:31:15 nvmf_tcp.nvmf_kernel_tls -- nvmf/spdk_vs_kernel_tls.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/tls_psk_print -k NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: -s nqn.2014-08.org.nvmexpress:uuid:36ebf5a9-1df9-47b3-a6d0-e9ba32e428a2 -n nqn.2014-08.org.nvmexpress:uuid:f81d4fae-7dec-11d0-a765-00a0c91e6bf6 00:22:49.764 16:31:15 nvmf_tcp.nvmf_kernel_tls -- nvmf/spdk_vs_kernel_tls.sh@126 -- # keyctl add psk 'NVMe0R01 nqn.2014-08.org.nvmexpress:uuid:f81d4fae-7dec-11d0-a765-00a0c91e6bf6 nqn.2014-08.org.nvmexpress:uuid:36ebf5a9-1df9-47b3-a6d0-e9ba32e428a2' '��f�j��i��F�{��=8���&LM��u�F' 362164965 00:22:49.764 16:31:15 nvmf_tcp.nvmf_kernel_tls -- nvmf/spdk_vs_kernel_tls.sh@126 -- # key_id=785685931 00:22:49.764 16:31:15 nvmf_tcp.nvmf_kernel_tls -- nvmf/spdk_vs_kernel_tls.sh@128 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:22:49.764 16:31:15 nvmf_tcp.nvmf_kernel_tls -- nvmf/spdk_vs_kernel_tls.sh@129 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/key.txt 00:22:49.764 16:31:15 nvmf_tcp.nvmf_kernel_tls -- nvmf/spdk_vs_kernel_tls.sh@131 -- # construct_tlshd_conf test_192142816 00:22:49.764 16:31:15 nvmf_tcp.nvmf_kernel_tls -- nvmf/spdk_vs_kernel_tls.sh@48 -- # local keyring_name=test_192142816 00:22:49.764 16:31:15 nvmf_tcp.nvmf_kernel_tls -- nvmf/spdk_vs_kernel_tls.sh@49 -- # cat 00:22:49.764 16:31:15 nvmf_tcp.nvmf_kernel_tls -- nvmf/spdk_vs_kernel_tls.sh@133 -- # tlshdpid=3157744 00:22:49.764 
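The key material handled above is in the NVMe TLS PSK interchange format (`NVMeTLSkey-1:01:<base64>:`), and the keyring is opened up with `keyctl setperm … 0x3f3f0b00`. A small sketch, runnable without root, that decodes the key from the log and expands that permission mask (the 32-byte-PSK-plus-CRC32 payload layout and the possessor/user/group/other byte order follow the NVMe TLS interchange format and keyutils conventions; nothing here touches a real keyring):

```shell
# PSK in NVMe TLS interchange format, copied from SPEC_KEY in the log above.
key='NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:'

# Strip the "NVMeTLSkey-1:01:" prefix and the trailing ':' to get the base64 payload.
b64=${key#NVMeTLSkey-1:01:}
b64=${b64%:}

# The decoded payload is the 32-byte configured PSK followed by a 4-byte CRC32.
printf %s "$b64" | base64 -d | wc -c

# 0x3f3f0b00 from "keyctl setperm" packs four permission bytes, high to low:
# possessor, user, group, other (0x3f = view|read|write|search|link|setattr,
# 0x0b = view|read|search).
perm=$((0x3f3f0b00))
printf 'possessor=%02x user=%02x group=%02x other=%02x\n' \
  $(( (perm >> 24) & 0xff )) $(( (perm >> 16) & 0xff )) \
  $(( (perm >>  8) & 0xff )) $((  perm        & 0xff ))
```

In the test itself, `tls_psk_print` performs the decode and derivation, and the raw PSK bytes (the unprintable argument to `keyctl add psk` above) are what land in the session keyring.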
16:31:15 nvmf_tcp.nvmf_kernel_tls -- nvmf/spdk_vs_kernel_tls.sh@135 -- # timing_exit prepare_keyring_and_daemon 00:22:49.764 16:31:15 nvmf_tcp.nvmf_kernel_tls -- nvmf/spdk_vs_kernel_tls.sh@132 -- # tlshd -s -c /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/tlshd.conf 00:22:49.764 16:31:15 nvmf_tcp.nvmf_kernel_tls -- common/autotest_common.sh@729 -- # xtrace_disable 00:22:49.764 16:31:15 nvmf_tcp.nvmf_kernel_tls -- common/autotest_common.sh@10 -- # set +x 00:22:49.764 tlshd[3157744]: Built from ktls-utils 0.10 on Oct 7 2023 00:00:00 00:22:49.764 16:31:15 nvmf_tcp.nvmf_kernel_tls -- nvmf/spdk_vs_kernel_tls.sh@138 -- # timing_enter start_nvmf_tgt 00:22:49.764 16:31:15 nvmf_tcp.nvmf_kernel_tls -- common/autotest_common.sh@723 -- # xtrace_disable 00:22:49.764 16:31:15 nvmf_tcp.nvmf_kernel_tls -- common/autotest_common.sh@10 -- # set +x 00:22:49.764 16:31:15 nvmf_tcp.nvmf_kernel_tls -- nvmf/spdk_vs_kernel_tls.sh@140 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:22:49.764 16:31:15 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:49.764 16:31:15 nvmf_tcp.nvmf_kernel_tls -- common/autotest_common.sh@723 -- # xtrace_disable 00:22:49.764 16:31:15 nvmf_tcp.nvmf_kernel_tls -- common/autotest_common.sh@10 -- # set +x 00:22:49.764 16:31:15 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@481 -- # nvmfpid=3157766 00:22:49.764 16:31:15 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@482 -- # waitforlisten 3157766 00:22:49.764 16:31:15 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:22:49.764 16:31:15 nvmf_tcp.nvmf_kernel_tls -- common/autotest_common.sh@830 -- # '[' -z 3157766 ']' 00:22:49.764 16:31:15 nvmf_tcp.nvmf_kernel_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:49.764 16:31:15 nvmf_tcp.nvmf_kernel_tls -- common/autotest_common.sh@835 -- # local max_retries=100 
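Before `tlshd -s -c …/tlshd.conf` starts above, `construct_tlshd_conf test_192142816` writes a config pointing the handshake daemon at the test keyring. A plausible sketch of that file (section and option names follow ktls-utils' `tlshd.conf`; the keyring name is from the log, the debug levels are assumptions):

```ini
[debug]
loglevel=1
tls=1
nl=1

[authenticate]
keyrings=test_192142816
```

With this in place, tlshd resolves the `psk` key added to `test_192142816` when the kernel asks it to complete the TLS handshake for the NVMe/TCP connection.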
00:22:49.764 16:31:15 nvmf_tcp.nvmf_kernel_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:49.764 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:49.764 16:31:15 nvmf_tcp.nvmf_kernel_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:22:49.764 16:31:15 nvmf_tcp.nvmf_kernel_tls -- common/autotest_common.sh@10 -- # set +x 00:22:49.764 [2024-06-07 16:31:15.747630] Starting SPDK v24.09-pre git sha1 5a57befde / DPDK 24.03.0 initialization... 00:22:49.764 [2024-06-07 16:31:15.747678] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:49.764 EAL: No free 2048 kB hugepages reported on node 1 00:22:49.764 [2024-06-07 16:31:15.831049] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:49.764 [2024-06-07 16:31:15.905307] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:49.764 [2024-06-07 16:31:15.905362] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:49.764 [2024-06-07 16:31:15.905375] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:49.764 [2024-06-07 16:31:15.905381] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:49.764 [2024-06-07 16:31:15.905387] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:49.764 [2024-06-07 16:31:15.905423] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:22:49.764 16:31:16 nvmf_tcp.nvmf_kernel_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:22:49.764 16:31:16 nvmf_tcp.nvmf_kernel_tls -- common/autotest_common.sh@863 -- # return 0 00:22:49.764 16:31:16 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:49.764 16:31:16 nvmf_tcp.nvmf_kernel_tls -- common/autotest_common.sh@729 -- # xtrace_disable 00:22:49.764 16:31:16 nvmf_tcp.nvmf_kernel_tls -- common/autotest_common.sh@10 -- # set +x 00:22:49.764 16:31:16 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:49.764 16:31:16 nvmf_tcp.nvmf_kernel_tls -- nvmf/spdk_vs_kernel_tls.sh@141 -- # trap cleanup SIGINT SIGTERM EXIT 00:22:49.764 16:31:16 nvmf_tcp.nvmf_kernel_tls -- nvmf/spdk_vs_kernel_tls.sh@142 -- # waitforlisten 3157766 00:22:49.764 16:31:16 nvmf_tcp.nvmf_kernel_tls -- common/autotest_common.sh@830 -- # '[' -z 3157766 ']' 00:22:49.764 16:31:16 nvmf_tcp.nvmf_kernel_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:49.764 16:31:16 nvmf_tcp.nvmf_kernel_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:22:49.764 16:31:16 nvmf_tcp.nvmf_kernel_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:49.764 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:22:49.764 16:31:16 nvmf_tcp.nvmf_kernel_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:22:49.764 16:31:16 nvmf_tcp.nvmf_kernel_tls -- common/autotest_common.sh@10 -- # set +x 00:22:50.025 16:31:16 nvmf_tcp.nvmf_kernel_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:22:50.025 16:31:16 nvmf_tcp.nvmf_kernel_tls -- common/autotest_common.sh@863 -- # return 0 00:22:50.025 16:31:16 nvmf_tcp.nvmf_kernel_tls -- nvmf/spdk_vs_kernel_tls.sh@143 -- # setup_nvmf_tgt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/key.txt 00:22:50.025 16:31:16 nvmf_tcp.nvmf_kernel_tls -- nvmf/spdk_vs_kernel_tls.sh@77 -- # local psk_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/key.txt 00:22:50.025 16:31:16 nvmf_tcp.nvmf_kernel_tls -- nvmf/spdk_vs_kernel_tls.sh@79 -- # rpc_cmd sock_impl_set_options -i ssl --enable-ktls --tls-version 13 00:22:50.025 16:31:16 nvmf_tcp.nvmf_kernel_tls -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:50.025 16:31:16 nvmf_tcp.nvmf_kernel_tls -- common/autotest_common.sh@10 -- # set +x 00:22:50.025 16:31:16 nvmf_tcp.nvmf_kernel_tls -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:50.025 16:31:16 nvmf_tcp.nvmf_kernel_tls -- nvmf/spdk_vs_kernel_tls.sh@80 -- # rpc_cmd framework_start_init 00:22:50.025 16:31:16 nvmf_tcp.nvmf_kernel_tls -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:50.025 16:31:16 nvmf_tcp.nvmf_kernel_tls -- common/autotest_common.sh@10 -- # set +x 00:22:50.025 16:31:16 nvmf_tcp.nvmf_kernel_tls -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:50.025 16:31:16 nvmf_tcp.nvmf_kernel_tls -- nvmf/spdk_vs_kernel_tls.sh@81 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:22:50.025 16:31:16 nvmf_tcp.nvmf_kernel_tls -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:50.025 16:31:16 nvmf_tcp.nvmf_kernel_tls -- common/autotest_common.sh@10 -- # set +x 00:22:50.025 [2024-06-07 16:31:16.841664] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 
00:22:50.025 16:31:16 nvmf_tcp.nvmf_kernel_tls -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:50.025 16:31:16 nvmf_tcp.nvmf_kernel_tls -- nvmf/spdk_vs_kernel_tls.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2014-08.org.nvmexpress:uuid:36ebf5a9-1df9-47b3-a6d0-e9ba32e428a2 -s SPDKISFASTANDAWESOME -m 10 00:22:50.025 16:31:16 nvmf_tcp.nvmf_kernel_tls -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:50.025 16:31:16 nvmf_tcp.nvmf_kernel_tls -- common/autotest_common.sh@10 -- # set +x 00:22:50.025 16:31:16 nvmf_tcp.nvmf_kernel_tls -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:50.025 16:31:16 nvmf_tcp.nvmf_kernel_tls -- nvmf/spdk_vs_kernel_tls.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress:uuid:36ebf5a9-1df9-47b3-a6d0-e9ba32e428a2 -t tcp -a 10.0.0.2 -s 4420 -k -c 1 00:22:50.025 16:31:16 nvmf_tcp.nvmf_kernel_tls -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:50.025 16:31:16 nvmf_tcp.nvmf_kernel_tls -- common/autotest_common.sh@10 -- # set +x 00:22:50.025 [2024-06-07 16:31:16.865710] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:50.025 [2024-06-07 16:31:16.865992] tcp.c: 982:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:50.025 16:31:16 nvmf_tcp.nvmf_kernel_tls -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:50.025 16:31:16 nvmf_tcp.nvmf_kernel_tls -- nvmf/spdk_vs_kernel_tls.sh@85 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:22:50.025 16:31:16 nvmf_tcp.nvmf_kernel_tls -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:50.025 16:31:16 nvmf_tcp.nvmf_kernel_tls -- common/autotest_common.sh@10 -- # set +x 00:22:50.286 malloc0 00:22:50.286 16:31:16 nvmf_tcp.nvmf_kernel_tls -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:50.286 16:31:16 nvmf_tcp.nvmf_kernel_tls -- nvmf/spdk_vs_kernel_tls.sh@86 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2014-08.org.nvmexpress:uuid:36ebf5a9-1df9-47b3-a6d0-e9ba32e428a2 
malloc0 -n 1 00:22:50.286 16:31:16 nvmf_tcp.nvmf_kernel_tls -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:50.286 16:31:16 nvmf_tcp.nvmf_kernel_tls -- common/autotest_common.sh@10 -- # set +x 00:22:50.286 16:31:16 nvmf_tcp.nvmf_kernel_tls -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:50.286 16:31:16 nvmf_tcp.nvmf_kernel_tls -- nvmf/spdk_vs_kernel_tls.sh@88 -- # rpc_cmd keyring_file_add_key psk0 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/key.txt 00:22:50.286 16:31:16 nvmf_tcp.nvmf_kernel_tls -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:50.286 16:31:16 nvmf_tcp.nvmf_kernel_tls -- common/autotest_common.sh@10 -- # set +x 00:22:50.286 16:31:16 nvmf_tcp.nvmf_kernel_tls -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:50.286 16:31:16 nvmf_tcp.nvmf_kernel_tls -- nvmf/spdk_vs_kernel_tls.sh@90 -- # rpc_cmd nvmf_subsystem_add_host nqn.2014-08.org.nvmexpress:uuid:36ebf5a9-1df9-47b3-a6d0-e9ba32e428a2 nqn.2014-08.org.nvmexpress:uuid:f81d4fae-7dec-11d0-a765-00a0c91e6bf6 --psk psk0 00:22:50.286 16:31:16 nvmf_tcp.nvmf_kernel_tls -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:50.286 16:31:16 nvmf_tcp.nvmf_kernel_tls -- common/autotest_common.sh@10 -- # set +x 00:22:50.286 16:31:16 nvmf_tcp.nvmf_kernel_tls -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:50.286 16:31:16 nvmf_tcp.nvmf_kernel_tls -- nvmf/spdk_vs_kernel_tls.sh@145 -- # timing_exit start_nvmf_tgt 00:22:50.286 16:31:16 nvmf_tcp.nvmf_kernel_tls -- common/autotest_common.sh@729 -- # xtrace_disable 00:22:50.286 16:31:16 nvmf_tcp.nvmf_kernel_tls -- common/autotest_common.sh@10 -- # set +x 00:22:50.286 16:31:16 nvmf_tcp.nvmf_kernel_tls -- nvmf/spdk_vs_kernel_tls.sh@147 -- # nvme connect --nqn=nqn.2014-08.org.nvmexpress:uuid:36ebf5a9-1df9-47b3-a6d0-e9ba32e428a2 --traddr=10.0.0.2 --trsvcid=4420 --transport=tcp --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f81d4fae-7dec-11d0-a765-00a0c91e6bf6 --hostid=f81d4fae-7dec-11d0-a765-00a0c91e6bf6 --tls -o 
normal --verbose --tls_key=785685931 --keyring=362164965 -i 1 00:22:50.546 tlshd[3158134]: Name or service not known 00:22:50.546 tlshd[3158134]: Handshake with unknown (10.0.0.2) was successful 00:22:50.807 tlshd[3158136]: Name or service not known 00:22:50.807 tlshd[3158136]: Handshake with unknown (10.0.0.2) was successful 00:22:50.807 nvme0: nqn.2014-08.org.nvmexpress:uuid:36ebf5a9-1df9-47b3-a6d0-e9ba32e428a2 connected 00:22:50.807 device: nvme0 00:22:50.807 16:31:17 nvmf_tcp.nvmf_kernel_tls -- nvmf/spdk_vs_kernel_tls.sh@158 -- # waitfornvmeserial SPDKISFASTANDAWESOME 00:22:50.807 16:31:17 nvmf_tcp.nvmf_kernel_tls -- nvmf/spdk_vs_kernel_tls.sh@94 -- # local retries=5 00:22:50.807 16:31:17 nvmf_tcp.nvmf_kernel_tls -- nvmf/spdk_vs_kernel_tls.sh@95 -- # local serial=SPDKISFASTANDAWESOME 00:22:50.807 16:31:17 nvmf_tcp.nvmf_kernel_tls -- nvmf/spdk_vs_kernel_tls.sh@97 -- # (( retries-- )) 00:22:50.807 16:31:17 nvmf_tcp.nvmf_kernel_tls -- nvmf/spdk_vs_kernel_tls.sh@98 -- # nvmes=($(ls "/sys/class/nvme")) 00:22:50.807 16:31:17 nvmf_tcp.nvmf_kernel_tls -- nvmf/spdk_vs_kernel_tls.sh@98 -- # ls /sys/class/nvme 00:22:50.807 16:31:17 nvmf_tcp.nvmf_kernel_tls -- nvmf/spdk_vs_kernel_tls.sh@99 -- # for nvme in "${nvmes[@]}" 00:22:50.807 16:31:17 nvmf_tcp.nvmf_kernel_tls -- nvmf/spdk_vs_kernel_tls.sh@100 -- # cat /sys/class/nvme/nvme0/serial 00:22:50.807 16:31:17 nvmf_tcp.nvmf_kernel_tls -- nvmf/spdk_vs_kernel_tls.sh@100 -- # '[' SPDKISFASTANDAWESOME = SPDKISFASTANDAWESOME ']' 00:22:50.807 16:31:17 nvmf_tcp.nvmf_kernel_tls -- nvmf/spdk_vs_kernel_tls.sh@101 -- # return 0 00:22:50.807 16:31:17 nvmf_tcp.nvmf_kernel_tls -- nvmf/spdk_vs_kernel_tls.sh@160 -- # killprocess 3157766 00:22:50.807 16:31:17 nvmf_tcp.nvmf_kernel_tls -- common/autotest_common.sh@949 -- # '[' -z 3157766 ']' 00:22:50.807 16:31:17 nvmf_tcp.nvmf_kernel_tls -- common/autotest_common.sh@953 -- # kill -0 3157766 00:22:50.807 16:31:17 nvmf_tcp.nvmf_kernel_tls -- common/autotest_common.sh@954 -- # uname 00:22:50.807 
16:31:17 nvmf_tcp.nvmf_kernel_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:22:50.807 16:31:17 nvmf_tcp.nvmf_kernel_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3157766 00:22:50.807 16:31:17 nvmf_tcp.nvmf_kernel_tls -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:22:50.807 16:31:17 nvmf_tcp.nvmf_kernel_tls -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:22:50.807 16:31:17 nvmf_tcp.nvmf_kernel_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3157766' 00:22:50.807 killing process with pid 3157766 00:22:50.807 16:31:17 nvmf_tcp.nvmf_kernel_tls -- common/autotest_common.sh@968 -- # kill 3157766 00:22:50.807 16:31:17 nvmf_tcp.nvmf_kernel_tls -- common/autotest_common.sh@973 -- # wait 3157766 00:22:51.068 16:31:17 nvmf_tcp.nvmf_kernel_tls -- nvmf/spdk_vs_kernel_tls.sh@163 -- # nvmet_tls_init 00:22:51.068 16:31:17 nvmf_tcp.nvmf_kernel_tls -- nvmf/spdk_vs_kernel_tls.sh@72 -- # get_main_ns_ip 00:22:51.068 16:31:17 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@747 -- # local ip 00:22:51.068 16:31:17 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@748 -- # ip_candidates=() 00:22:51.068 16:31:17 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@748 -- # local -A ip_candidates 00:22:51.068 16:31:17 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@750 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:51.068 16:31:17 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@751 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:51.068 16:31:17 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@753 -- # [[ -z tcp ]] 00:22:51.068 16:31:17 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@753 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:51.068 16:31:17 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@754 -- # ip=NVMF_INITIATOR_IP 00:22:51.068 16:31:17 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@756 -- # [[ -z 10.0.0.1 ]] 00:22:51.068 16:31:17 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@761 -- # echo 10.0.0.1 00:22:51.068 16:31:17 
nvmf_tcp.nvmf_kernel_tls -- nvmf/spdk_vs_kernel_tls.sh@72 -- # configure_kernel_target nqn.2014-08.org.nvmexpress:uuid:36ebf5a9-1df9-47b3-a6d0-e9ba32e428a2 10.0.0.1 4422 00:22:51.068 16:31:17 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@632 -- # local kernel_name=nqn.2014-08.org.nvmexpress:uuid:36ebf5a9-1df9-47b3-a6d0-e9ba32e428a2 kernel_target_ip=10.0.0.1 nvmf_port=4422 00:22:51.068 16:31:17 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:22:51.068 16:31:17 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2014-08.org.nvmexpress:uuid:36ebf5a9-1df9-47b3-a6d0-e9ba32e428a2 00:22:51.068 16:31:17 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2014-08.org.nvmexpress:uuid:36ebf5a9-1df9-47b3-a6d0-e9ba32e428a2/namespaces/1 00:22:51.068 16:31:17 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:22:51.068 16:31:17 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@639 -- # local block nvme 00:22:51.068 16:31:17 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:22:51.068 16:31:17 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@642 -- # modprobe nvmet 00:22:51.068 16:31:17 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:22:51.068 16:31:17 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:22:54.366 Waiting for block devices as requested 00:22:54.366 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:22:54.366 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:22:54.366 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:22:54.366 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:22:54.366 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:22:54.366 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:22:54.627 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:22:54.627 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:22:54.627 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:22:54.888 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:22:54.888 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:22:55.149 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:22:55.149 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:22:55.149 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:22:55.149 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:22:55.409 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:22:55.409 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:22:55.669 16:31:22 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:22:55.669 16:31:22 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme1n1 ]] 00:22:55.669 16:31:22 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@652 -- # is_block_zoned nvme1n1 00:22:55.669 16:31:22 nvmf_tcp.nvmf_kernel_tls -- common/autotest_common.sh@1661 -- # local device=nvme1n1 00:22:55.669 16:31:22 nvmf_tcp.nvmf_kernel_tls -- common/autotest_common.sh@1663 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:22:55.669 16:31:22 
nvmf_tcp.nvmf_kernel_tls -- common/autotest_common.sh@1664 -- # [[ none != none ]] 00:22:55.669 16:31:22 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@653 -- # block_in_use nvme1n1 00:22:55.669 16:31:22 nvmf_tcp.nvmf_kernel_tls -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:22:55.669 16:31:22 nvmf_tcp.nvmf_kernel_tls -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme1n1 00:22:55.669 No valid GPT data, bailing 00:22:55.669 16:31:22 nvmf_tcp.nvmf_kernel_tls -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:22:55.669 16:31:22 nvmf_tcp.nvmf_kernel_tls -- scripts/common.sh@391 -- # pt= 00:22:55.669 16:31:22 nvmf_tcp.nvmf_kernel_tls -- scripts/common.sh@392 -- # return 1 00:22:55.669 16:31:22 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@653 -- # nvme=/dev/nvme1n1 00:22:55.669 16:31:22 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@656 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2014-08.org.nvmexpress:uuid:36ebf5a9-1df9-47b3-a6d0-e9ba32e428a2 00:22:55.669 16:31:22 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@657 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2014-08.org.nvmexpress:uuid:36ebf5a9-1df9-47b3-a6d0-e9ba32e428a2/namespaces/1 00:22:55.669 16:31:22 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:22:55.669 16:31:22 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@663 -- # echo SPDK-test 00:22:55.669 16:31:22 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@665 -- # echo 1 00:22:55.669 16:31:22 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@667 -- # [[ -b /dev/nvme1n1 ]] 00:22:55.669 16:31:22 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@673 -- # echo /dev/nvme1n1 00:22:55.669 16:31:22 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@674 -- # echo 1 00:22:55.669 16:31:22 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@676 -- # echo 10.0.0.1 00:22:55.669 16:31:22 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@677 -- # echo tcp 00:22:55.669 16:31:22 nvmf_tcp.nvmf_kernel_tls 
-- nvmf/common.sh@678 -- # echo 4422 00:22:55.669 16:31:22 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@679 -- # echo ipv4 00:22:55.669 16:31:22 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@682 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2014-08.org.nvmexpress:uuid:36ebf5a9-1df9-47b3-a6d0-e9ba32e428a2 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:22:55.669 16:31:22 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@685 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.1 -t tcp -s 4422 00:22:55.930 00:22:55.930 Discovery Log Number of Records 2, Generation counter 2 00:22:55.930 =====Discovery Log Entry 0====== 00:22:55.930 trtype: tcp 00:22:55.930 adrfam: ipv4 00:22:55.930 subtype: current discovery subsystem 00:22:55.930 treq: not specified, sq flow control disable supported 00:22:55.930 portid: 1 00:22:55.930 trsvcid: 4422 00:22:55.930 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:22:55.930 traddr: 10.0.0.1 00:22:55.930 eflags: none 00:22:55.930 sectype: none 00:22:55.930 =====Discovery Log Entry 1====== 00:22:55.930 trtype: tcp 00:22:55.930 adrfam: ipv4 00:22:55.930 subtype: nvme subsystem 00:22:55.930 treq: not specified, sq flow control disable supported 00:22:55.930 portid: 1 00:22:55.930 trsvcid: 4422 00:22:55.930 subnqn: nqn.2014-08.org.nvmexpress:uuid:36ebf5a9-1df9-47b3-a6d0-e9ba32e428a2 00:22:55.930 traddr: 10.0.0.1 00:22:55.930 eflags: none 00:22:55.930 sectype: none 00:22:55.930 16:31:22 nvmf_tcp.nvmf_kernel_tls -- nvmf/spdk_vs_kernel_tls.sh@73 -- # post_configure_kernel_target 00:22:55.930 16:31:22 nvmf_tcp.nvmf_kernel_tls -- nvmf/spdk_vs_kernel_tls.sh@61 -- # echo 0 00:22:55.930 16:31:22 nvmf_tcp.nvmf_kernel_tls -- nvmf/spdk_vs_kernel_tls.sh@62 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2014-08.org.nvmexpress:uuid:f81d4fae-7dec-11d0-a765-00a0c91e6bf6 00:22:55.930 16:31:22 nvmf_tcp.nvmf_kernel_tls -- nvmf/spdk_vs_kernel_tls.sh@63 -- # ln -s 
/sys/kernel/config/nvmet/hosts/nqn.2014-08.org.nvmexpress:uuid:f81d4fae-7dec-11d0-a765-00a0c91e6bf6 /sys/kernel/config/nvmet/subsystems/nqn.2014-08.org.nvmexpress:uuid:36ebf5a9-1df9-47b3-a6d0-e9ba32e428a2/allowed_hosts/nqn.2014-08.org.nvmexpress:uuid:f81d4fae-7dec-11d0-a765-00a0c91e6bf6 00:22:55.930 16:31:22 nvmf_tcp.nvmf_kernel_tls -- nvmf/spdk_vs_kernel_tls.sh@66 -- # rm /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2014-08.org.nvmexpress:uuid:36ebf5a9-1df9-47b3-a6d0-e9ba32e428a2 00:22:55.930 16:31:22 nvmf_tcp.nvmf_kernel_tls -- nvmf/spdk_vs_kernel_tls.sh@67 -- # echo tls1.3 00:22:55.930 16:31:22 nvmf_tcp.nvmf_kernel_tls -- nvmf/spdk_vs_kernel_tls.sh@68 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2014-08.org.nvmexpress:uuid:36ebf5a9-1df9-47b3-a6d0-e9ba32e428a2 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:22:55.930 16:31:22 nvmf_tcp.nvmf_kernel_tls -- nvmf/spdk_vs_kernel_tls.sh@166 -- # bdevperfpid=3160077 00:22:55.930 16:31:22 nvmf_tcp.nvmf_kernel_tls -- nvmf/spdk_vs_kernel_tls.sh@167 -- # waitforlisten 3160077 00:22:55.930 16:31:22 nvmf_tcp.nvmf_kernel_tls -- nvmf/spdk_vs_kernel_tls.sh@165 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z 00:22:55.930 16:31:22 nvmf_tcp.nvmf_kernel_tls -- common/autotest_common.sh@830 -- # '[' -z 3160077 ']' 00:22:55.930 16:31:22 nvmf_tcp.nvmf_kernel_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:55.930 16:31:22 nvmf_tcp.nvmf_kernel_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:22:55.930 16:31:22 nvmf_tcp.nvmf_kernel_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:55.930 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:22:55.930 16:31:22 nvmf_tcp.nvmf_kernel_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:22:55.930 16:31:22 nvmf_tcp.nvmf_kernel_tls -- common/autotest_common.sh@10 -- # set +x 00:22:55.930 [2024-06-07 16:31:22.608136] Starting SPDK v24.09-pre git sha1 5a57befde / DPDK 24.03.0 initialization... 00:22:55.930 [2024-06-07 16:31:22.608206] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3160077 ] 00:22:55.930 EAL: No free 2048 kB hugepages reported on node 1 00:22:55.930 [2024-06-07 16:31:22.664239] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:55.930 [2024-06-07 16:31:22.728301] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 2 00:22:56.871 16:31:23 nvmf_tcp.nvmf_kernel_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:22:56.871 16:31:23 nvmf_tcp.nvmf_kernel_tls -- common/autotest_common.sh@863 -- # return 0 00:22:56.871 16:31:23 nvmf_tcp.nvmf_kernel_tls -- nvmf/spdk_vs_kernel_tls.sh@169 -- # rpc_cmd keyring_file_add_key psk0 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/key.txt 00:22:56.871 16:31:23 nvmf_tcp.nvmf_kernel_tls -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:56.871 16:31:23 nvmf_tcp.nvmf_kernel_tls -- common/autotest_common.sh@10 -- # set +x 00:22:56.871 16:31:23 nvmf_tcp.nvmf_kernel_tls -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:56.871 16:31:23 nvmf_tcp.nvmf_kernel_tls -- nvmf/spdk_vs_kernel_tls.sh@171 -- # get_main_ns_ip 00:22:56.871 16:31:23 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@747 -- # local ip 00:22:56.871 16:31:23 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@748 -- # ip_candidates=() 00:22:56.871 16:31:23 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@748 -- # local -A ip_candidates 00:22:56.871 16:31:23 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@750 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:56.871 16:31:23 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@751 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:56.871 16:31:23 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@753 -- # [[ -z tcp ]] 00:22:56.871 16:31:23 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@753 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:56.871 16:31:23 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@754 -- # ip=NVMF_INITIATOR_IP 00:22:56.871 16:31:23 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@756 -- # [[ -z 10.0.0.1 ]] 00:22:56.871 16:31:23 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@761 -- # echo 10.0.0.1 00:22:56.871 16:31:23 nvmf_tcp.nvmf_kernel_tls -- nvmf/spdk_vs_kernel_tls.sh@171 -- # rpc_cmd bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.1 -s 4422 -f ipv4 -n nqn.2014-08.org.nvmexpress:uuid:36ebf5a9-1df9-47b3-a6d0-e9ba32e428a2 -q nqn.2014-08.org.nvmexpress:uuid:f81d4fae-7dec-11d0-a765-00a0c91e6bf6 --psk psk0 00:22:56.871 16:31:23 nvmf_tcp.nvmf_kernel_tls -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:56.871 16:31:23 nvmf_tcp.nvmf_kernel_tls -- common/autotest_common.sh@10 -- # set +x 00:22:56.871 [2024-06-07 16:31:23.391730] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:56.871 tlshd[3160275]: Handshake with spdk-cyp-09 (10.0.0.1) was successful 00:22:56.871 tlshd[3160324]: Handshake with spdk-cyp-09 (10.0.0.1) was successful 00:22:56.871 TLSTESTn1 00:22:56.871 16:31:23 nvmf_tcp.nvmf_kernel_tls -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:56.871 16:31:23 nvmf_tcp.nvmf_kernel_tls -- nvmf/spdk_vs_kernel_tls.sh@174 -- # rpc_cmd bdev_nvme_get_controllers -n TLSTEST 00:22:56.871 16:31:23 nvmf_tcp.nvmf_kernel_tls -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:56.871 16:31:23 nvmf_tcp.nvmf_kernel_tls -- common/autotest_common.sh@10 -- # set +x 00:22:56.871 [ 00:22:56.871 { 00:22:56.871 "name": "TLSTEST", 00:22:56.871 "ctrlrs": [ 00:22:56.871 { 00:22:56.871 
"state": "enabled", 00:22:56.871 "trid": { 00:22:56.871 "trtype": "TCP", 00:22:56.871 "adrfam": "IPv4", 00:22:56.871 "traddr": "10.0.0.1", 00:22:56.871 "trsvcid": "4422", 00:22:56.871 "subnqn": "nqn.2014-08.org.nvmexpress:uuid:36ebf5a9-1df9-47b3-a6d0-e9ba32e428a2" 00:22:56.871 }, 00:22:56.871 "cntlid": 1, 00:22:56.871 "host": { 00:22:56.871 "nqn": "nqn.2014-08.org.nvmexpress:uuid:f81d4fae-7dec-11d0-a765-00a0c91e6bf6", 00:22:56.871 "addr": "", 00:22:56.871 "svcid": "" 00:22:56.871 } 00:22:56.871 } 00:22:56.871 ] 00:22:56.871 } 00:22:56.871 ] 00:22:56.871 16:31:23 nvmf_tcp.nvmf_kernel_tls -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:56.871 16:31:23 nvmf_tcp.nvmf_kernel_tls -- nvmf/spdk_vs_kernel_tls.sh@176 -- # rpc_cmd bdev_nvme_detach_controller TLSTEST 00:22:56.871 16:31:23 nvmf_tcp.nvmf_kernel_tls -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:56.871 16:31:23 nvmf_tcp.nvmf_kernel_tls -- common/autotest_common.sh@10 -- # set +x 00:22:56.871 16:31:23 nvmf_tcp.nvmf_kernel_tls -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:56.871 16:31:23 nvmf_tcp.nvmf_kernel_tls -- nvmf/spdk_vs_kernel_tls.sh@178 -- # trap - SIGINT SIGTERM EXIT 00:22:56.871 16:31:23 nvmf_tcp.nvmf_kernel_tls -- nvmf/spdk_vs_kernel_tls.sh@179 -- # cleanup 00:22:56.871 16:31:23 nvmf_tcp.nvmf_kernel_tls -- nvmf/spdk_vs_kernel_tls.sh@34 -- # killprocess 3157744 00:22:56.871 16:31:23 nvmf_tcp.nvmf_kernel_tls -- common/autotest_common.sh@949 -- # '[' -z 3157744 ']' 00:22:56.871 16:31:23 nvmf_tcp.nvmf_kernel_tls -- common/autotest_common.sh@953 -- # kill -0 3157744 00:22:56.871 16:31:23 nvmf_tcp.nvmf_kernel_tls -- common/autotest_common.sh@954 -- # uname 00:22:56.871 16:31:23 nvmf_tcp.nvmf_kernel_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:22:56.871 16:31:23 nvmf_tcp.nvmf_kernel_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3157744 00:22:56.871 16:31:23 nvmf_tcp.nvmf_kernel_tls -- common/autotest_common.sh@955 -- # 
process_name=tlshd 00:22:56.871 16:31:23 nvmf_tcp.nvmf_kernel_tls -- common/autotest_common.sh@959 -- # '[' tlshd = sudo ']' 00:22:56.871 16:31:23 nvmf_tcp.nvmf_kernel_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3157744' 00:22:56.871 killing process with pid 3157744 00:22:56.871 16:31:23 nvmf_tcp.nvmf_kernel_tls -- common/autotest_common.sh@968 -- # kill 3157744 00:22:56.871 16:31:23 nvmf_tcp.nvmf_kernel_tls -- common/autotest_common.sh@973 -- # wait 3157744 00:22:56.871 16:31:23 nvmf_tcp.nvmf_kernel_tls -- nvmf/spdk_vs_kernel_tls.sh@34 -- # : 00:22:56.871 16:31:23 nvmf_tcp.nvmf_kernel_tls -- nvmf/spdk_vs_kernel_tls.sh@35 -- # killprocess 3160077 00:22:56.871 16:31:23 nvmf_tcp.nvmf_kernel_tls -- common/autotest_common.sh@949 -- # '[' -z 3160077 ']' 00:22:56.871 16:31:23 nvmf_tcp.nvmf_kernel_tls -- common/autotest_common.sh@953 -- # kill -0 3160077 00:22:56.871 16:31:23 nvmf_tcp.nvmf_kernel_tls -- common/autotest_common.sh@954 -- # uname 00:22:56.871 16:31:23 nvmf_tcp.nvmf_kernel_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:22:56.871 16:31:23 nvmf_tcp.nvmf_kernel_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3160077 00:22:56.871 16:31:23 nvmf_tcp.nvmf_kernel_tls -- common/autotest_common.sh@955 -- # process_name=reactor_2 00:22:56.871 16:31:23 nvmf_tcp.nvmf_kernel_tls -- common/autotest_common.sh@959 -- # '[' reactor_2 = sudo ']' 00:22:56.871 16:31:23 nvmf_tcp.nvmf_kernel_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3160077' 00:22:56.871 killing process with pid 3160077 00:22:56.871 16:31:23 nvmf_tcp.nvmf_kernel_tls -- common/autotest_common.sh@968 -- # kill 3160077 00:22:56.871 Received shutdown signal, test time was about 0.000000 seconds 00:22:56.871 00:22:56.871 Latency(us) 00:22:56.871 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:56.871 
=================================================================================================================== 00:22:56.871 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:56.871 16:31:23 nvmf_tcp.nvmf_kernel_tls -- common/autotest_common.sh@973 -- # wait 3160077 00:22:57.131 16:31:23 nvmf_tcp.nvmf_kernel_tls -- nvmf/spdk_vs_kernel_tls.sh@36 -- # nvmftestfini 00:22:57.131 16:31:23 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:57.131 16:31:23 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@117 -- # sync 00:22:57.131 16:31:23 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:57.131 16:31:23 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@120 -- # set +e 00:22:57.131 16:31:23 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:57.131 16:31:23 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:57.131 rmmod nvme_tcp 00:22:57.131 rmmod nvme_fabrics 00:22:57.131 16:31:23 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:57.131 16:31:23 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@124 -- # set -e 00:22:57.131 16:31:23 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@125 -- # return 0 00:22:57.131 16:31:23 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@489 -- # '[' -n 3157766 ']' 00:22:57.131 16:31:23 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@490 -- # killprocess 3157766 00:22:57.131 16:31:23 nvmf_tcp.nvmf_kernel_tls -- common/autotest_common.sh@949 -- # '[' -z 3157766 ']' 00:22:57.131 16:31:23 nvmf_tcp.nvmf_kernel_tls -- common/autotest_common.sh@953 -- # kill -0 3157766 00:22:57.131 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 953: kill: (3157766) - No such process 00:22:57.132 16:31:23 nvmf_tcp.nvmf_kernel_tls -- common/autotest_common.sh@976 -- # echo 'Process with pid 3157766 is not found' 00:22:57.132 Process with pid 3157766 is not found 00:22:57.132 16:31:23 nvmf_tcp.nvmf_kernel_tls 
-- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:57.132 16:31:23 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:57.132 16:31:23 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:57.132 16:31:23 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:57.132 16:31:23 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:57.132 16:31:23 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:57.132 16:31:23 nvmf_tcp.nvmf_kernel_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:57.132 16:31:23 nvmf_tcp.nvmf_kernel_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:59.672 16:31:25 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:59.672 16:31:25 nvmf_tcp.nvmf_kernel_tls -- nvmf/spdk_vs_kernel_tls.sh@37 -- # rm -rf /sys/kernel/config/nvmet/subsystems/nqn.2014-08.org.nvmexpress:uuid:36ebf5a9-1df9-47b3-a6d0-e9ba32e428a2/allowed_hosts/nqn.2014-08.org.nvmexpress:uuid:f81d4fae-7dec-11d0-a765-00a0c91e6bf6 00:22:59.672 16:31:25 nvmf_tcp.nvmf_kernel_tls -- nvmf/spdk_vs_kernel_tls.sh@38 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2014-08.org.nvmexpress:uuid:f81d4fae-7dec-11d0-a765-00a0c91e6bf6 00:22:59.672 16:31:25 nvmf_tcp.nvmf_kernel_tls -- nvmf/spdk_vs_kernel_tls.sh@39 -- # clean_kernel_target 00:22:59.672 16:31:25 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@689 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2014-08.org.nvmexpress:uuid:36ebf5a9-1df9-47b3-a6d0-e9ba32e428a2 ]] 00:22:59.672 16:31:25 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@691 -- # echo 0 00:22:59.673 16:31:25 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@693 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2014-08.org.nvmexpress:uuid:36ebf5a9-1df9-47b3-a6d0-e9ba32e428a2 00:22:59.673 16:31:25 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@694 -- # rmdir 
/sys/kernel/config/nvmet/subsystems/nqn.2014-08.org.nvmexpress:uuid:36ebf5a9-1df9-47b3-a6d0-e9ba32e428a2/namespaces/1 00:22:59.673 16:31:25 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@695 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:22:59.673 16:31:25 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@696 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2014-08.org.nvmexpress:uuid:36ebf5a9-1df9-47b3-a6d0-e9ba32e428a2 00:22:59.673 16:31:25 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@698 -- # modules=(/sys/module/nvmet/holders/*) 00:22:59.673 16:31:25 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@700 -- # modprobe -r nvmet_tcp nvmet 00:22:59.673 16:31:25 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@701 -- # modprobe -r null_blk 00:22:59.673 16:31:25 nvmf_tcp.nvmf_kernel_tls -- nvmf/common.sh@704 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:23:02.213 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:23:02.213 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:23:02.213 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:23:02.213 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:23:02.213 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:23:02.213 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:23:02.213 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:23:02.213 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:23:02.213 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:23:02.473 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:23:02.473 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:23:02.473 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:23:02.473 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:23:02.473 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:23:02.473 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:23:02.473 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:23:02.473 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:23:02.734 16:31:29 nvmf_tcp.nvmf_kernel_tls -- nvmf/spdk_vs_kernel_tls.sh@40 -- # rm 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/key.txt 00:23:02.734 16:31:29 nvmf_tcp.nvmf_kernel_tls -- nvmf/spdk_vs_kernel_tls.sh@41 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/tlshd.conf 00:23:02.734 16:31:29 nvmf_tcp.nvmf_kernel_tls -- nvmf/spdk_vs_kernel_tls.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:23:06.067 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:23:06.067 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:23:06.067 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:23:06.067 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:23:06.067 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:23:06.067 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:23:06.067 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:23:06.067 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:23:06.067 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:23:06.067 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:23:06.067 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:23:06.067 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:23:06.067 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:23:06.067 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:23:06.067 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:23:06.067 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:23:06.067 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:23:06.326 00:23:06.326 real 0m28.671s 00:23:06.326 user 0m10.792s 00:23:06.326 sys 0m15.880s 00:23:06.326 16:31:33 nvmf_tcp.nvmf_kernel_tls -- common/autotest_common.sh@1125 -- # xtrace_disable 00:23:06.326 16:31:33 nvmf_tcp.nvmf_kernel_tls -- common/autotest_common.sh@10 -- # set +x 00:23:06.326 ************************************ 00:23:06.326 END TEST nvmf_kernel_tls 
00:23:06.326 ************************************ 00:23:06.588 16:31:33 nvmf_tcp -- nvmf/nvmf.sh@67 -- # '[' 0 -eq 1 ']' 00:23:06.588 16:31:33 nvmf_tcp -- nvmf/nvmf.sh@73 -- # [[ phy == phy ]] 00:23:06.588 16:31:33 nvmf_tcp -- nvmf/nvmf.sh@74 -- # '[' tcp = tcp ']' 00:23:06.588 16:31:33 nvmf_tcp -- nvmf/nvmf.sh@75 -- # gather_supported_nvmf_pci_devs 00:23:06.588 16:31:33 nvmf_tcp -- nvmf/common.sh@285 -- # xtrace_disable 00:23:06.588 16:31:33 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:13.178 16:31:39 nvmf_tcp -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:13.178 16:31:39 nvmf_tcp -- nvmf/common.sh@291 -- # pci_devs=() 00:23:13.178 16:31:39 nvmf_tcp -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:13.178 16:31:39 nvmf_tcp -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:13.178 16:31:39 nvmf_tcp -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:13.178 16:31:39 nvmf_tcp -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:13.178 16:31:39 nvmf_tcp -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:13.178 16:31:39 nvmf_tcp -- nvmf/common.sh@295 -- # net_devs=() 00:23:13.178 16:31:39 nvmf_tcp -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:13.178 16:31:39 nvmf_tcp -- nvmf/common.sh@296 -- # e810=() 00:23:13.178 16:31:39 nvmf_tcp -- nvmf/common.sh@296 -- # local -ga e810 00:23:13.178 16:31:39 nvmf_tcp -- nvmf/common.sh@297 -- # x722=() 00:23:13.178 16:31:39 nvmf_tcp -- nvmf/common.sh@297 -- # local -ga x722 00:23:13.178 16:31:39 nvmf_tcp -- nvmf/common.sh@298 -- # mlx=() 00:23:13.178 16:31:39 nvmf_tcp -- nvmf/common.sh@298 -- # local -ga mlx 00:23:13.178 16:31:39 nvmf_tcp -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:13.178 16:31:39 nvmf_tcp -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:13.178 16:31:39 nvmf_tcp -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:13.178 16:31:39 nvmf_tcp -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:13.178 16:31:39 nvmf_tcp -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:13.178 16:31:39 nvmf_tcp -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:13.178 16:31:39 nvmf_tcp -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:13.178 16:31:39 nvmf_tcp -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:13.178 16:31:39 nvmf_tcp -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:13.178 16:31:39 nvmf_tcp -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:13.178 16:31:39 nvmf_tcp -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:13.178 16:31:39 nvmf_tcp -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:13.178 16:31:39 nvmf_tcp -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:13.178 16:31:39 nvmf_tcp -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:13.178 16:31:39 nvmf_tcp -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:13.178 16:31:39 nvmf_tcp -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:13.178 16:31:39 nvmf_tcp -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:13.178 16:31:39 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:13.178 16:31:39 nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:23:13.178 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:23:13.178 16:31:39 nvmf_tcp -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:13.178 16:31:39 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:13.178 16:31:39 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:13.178 16:31:39 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:13.178 16:31:39 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:13.178 16:31:39 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:13.178 16:31:39 nvmf_tcp -- 
nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:23:13.178 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:23:13.178 16:31:39 nvmf_tcp -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:13.178 16:31:39 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:13.178 16:31:39 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:13.178 16:31:39 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:13.178 16:31:39 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:13.178 16:31:39 nvmf_tcp -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:13.178 16:31:39 nvmf_tcp -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:13.178 16:31:39 nvmf_tcp -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:13.178 16:31:39 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:13.178 16:31:39 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:13.178 16:31:39 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:13.178 16:31:39 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:13.178 16:31:39 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:13.178 16:31:39 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:13.178 16:31:39 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:13.178 16:31:39 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:23:13.178 Found net devices under 0000:4b:00.0: cvl_0_0 00:23:13.178 16:31:39 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:13.178 16:31:39 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:13.178 16:31:39 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:13.178 16:31:39 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:13.178 16:31:39 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:13.178 16:31:39 
nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:13.178 16:31:39 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:13.178 16:31:39 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:13.178 16:31:39 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:23:13.178 Found net devices under 0000:4b:00.1: cvl_0_1 00:23:13.178 16:31:39 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:13.178 16:31:39 nvmf_tcp -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:13.178 16:31:39 nvmf_tcp -- nvmf/nvmf.sh@76 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:13.178 16:31:39 nvmf_tcp -- nvmf/nvmf.sh@77 -- # (( 2 > 0 )) 00:23:13.178 16:31:39 nvmf_tcp -- nvmf/nvmf.sh@78 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:23:13.178 16:31:39 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:23:13.178 16:31:39 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:23:13.178 16:31:39 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:13.178 ************************************ 00:23:13.178 START TEST nvmf_perf_adq 00:23:13.178 ************************************ 00:23:13.178 16:31:40 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:23:13.440 * Looking for test storage... 
00:23:13.440 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:13.440 16:31:40 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:13.440 16:31:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:23:13.440 16:31:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:13.440 16:31:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:13.440 16:31:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:13.440 16:31:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:13.440 16:31:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:13.440 16:31:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:13.440 16:31:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:13.440 16:31:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:13.440 16:31:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:13.440 16:31:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:13.440 16:31:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:13.440 16:31:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:13.440 16:31:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:13.440 16:31:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:13.440 16:31:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:13.440 16:31:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:13.440 16:31:40 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:13.440 16:31:40 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:13.440 16:31:40 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:13.440 16:31:40 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:13.440 16:31:40 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:13.440 16:31:40 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:13.440 16:31:40 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:13.440 16:31:40 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:23:13.440 16:31:40 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:13.440 16:31:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@47 -- # : 0 00:23:13.440 16:31:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:13.440 16:31:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:13.440 16:31:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:13.440 16:31:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:13.440 16:31:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:13.440 16:31:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:13.440 16:31:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:13.440 16:31:40 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:13.440 16:31:40 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:23:13.440 16:31:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:23:13.440 16:31:40 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:20.032 16:31:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:20.032 16:31:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:23:20.032 16:31:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:20.032 16:31:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:20.032 16:31:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:20.032 16:31:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:20.032 16:31:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:20.032 16:31:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:23:20.032 16:31:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:20.032 16:31:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:23:20.032 16:31:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:23:20.032 16:31:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:23:20.032 16:31:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:23:20.032 16:31:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:23:20.032 16:31:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:23:20.032 16:31:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:20.032 16:31:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:20.032 16:31:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:20.032 16:31:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:20.032 16:31:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:20.032 16:31:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:20.032 16:31:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:20.032 16:31:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:20.032 16:31:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:20.032 16:31:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:20.032 16:31:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:20.032 16:31:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:20.032 16:31:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:20.032 16:31:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:20.032 16:31:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:20.032 16:31:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:20.032 16:31:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:20.032 16:31:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:20.032 16:31:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:23:20.032 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:23:20.032 16:31:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:20.032 16:31:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:20.032 16:31:46 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:20.032 16:31:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:20.032 16:31:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:20.032 16:31:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:20.032 16:31:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:23:20.032 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:23:20.032 16:31:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:20.032 16:31:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:20.032 16:31:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:20.032 16:31:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:20.032 16:31:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:20.032 16:31:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:20.032 16:31:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:20.032 16:31:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:20.032 16:31:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:20.032 16:31:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:20.032 16:31:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:20.032 16:31:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:20.032 16:31:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:20.032 16:31:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:20.032 16:31:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:20.032 
16:31:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:23:20.032 Found net devices under 0000:4b:00.0: cvl_0_0 00:23:20.032 16:31:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:20.032 16:31:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:20.032 16:31:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:20.032 16:31:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:20.032 16:31:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:20.032 16:31:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:20.032 16:31:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:20.032 16:31:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:20.032 16:31:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:23:20.032 Found net devices under 0000:4b:00.1: cvl_0_1 00:23:20.032 16:31:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:20.032 16:31:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:20.032 16:31:46 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:20.032 16:31:46 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:23:20.032 16:31:46 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:23:20.032 16:31:46 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@60 -- # adq_reload_driver 00:23:20.032 16:31:46 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:23:21.417 16:31:48 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:23:23.331 16:31:50 
nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:23:28.625 16:31:55 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@68 -- # nvmftestinit 00:23:28.625 16:31:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:28.625 16:31:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:28.625 16:31:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:28.625 16:31:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:28.625 16:31:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:28.625 16:31:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:28.625 16:31:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:28.625 16:31:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:28.625 16:31:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:28.625 16:31:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:28.625 16:31:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:23:28.625 16:31:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:28.625 16:31:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:28.625 16:31:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:23:28.625 16:31:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:28.625 16:31:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:28.626 16:31:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:28.626 16:31:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:28.626 16:31:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:28.626 16:31:55 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:23:28.626 16:31:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:28.626 16:31:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:23:28.626 16:31:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:23:28.626 16:31:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:23:28.626 16:31:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:23:28.626 16:31:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:23:28.626 16:31:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:23:28.626 16:31:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:28.626 16:31:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:28.626 16:31:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:28.626 16:31:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:28.626 16:31:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:28.626 16:31:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:28.626 16:31:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:28.626 16:31:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:28.626 16:31:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:28.626 16:31:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:28.626 16:31:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:28.626 16:31:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # 
pci_devs+=("${e810[@]}") 00:23:28.626 16:31:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:28.626 16:31:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:28.626 16:31:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:28.626 16:31:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:28.626 16:31:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:28.626 16:31:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:28.626 16:31:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:23:28.626 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:23:28.626 16:31:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:28.626 16:31:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:28.626 16:31:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:28.626 16:31:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:28.626 16:31:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:28.626 16:31:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:28.626 16:31:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:23:28.626 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:23:28.626 16:31:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:28.626 16:31:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:28.626 16:31:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:28.626 16:31:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:28.626 16:31:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:28.626 16:31:55 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:28.626 16:31:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:28.626 16:31:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:28.626 16:31:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:28.626 16:31:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:28.626 16:31:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:28.626 16:31:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:28.626 16:31:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:28.626 16:31:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:28.626 16:31:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:28.626 16:31:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:23:28.626 Found net devices under 0000:4b:00.0: cvl_0_0 00:23:28.626 16:31:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:28.626 16:31:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:28.626 16:31:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:28.626 16:31:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:28.626 16:31:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:28.626 16:31:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:28.626 16:31:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:28.626 16:31:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:28.626 16:31:55 nvmf_tcp.nvmf_perf_adq -- 
nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:23:28.626 Found net devices under 0000:4b:00.1: cvl_0_1 00:23:28.626 16:31:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:28.626 16:31:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:28.626 16:31:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:23:28.626 16:31:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:28.626 16:31:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:28.626 16:31:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:28.626 16:31:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:28.626 16:31:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:28.626 16:31:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:28.626 16:31:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:28.626 16:31:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:28.626 16:31:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:28.626 16:31:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:28.626 16:31:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:28.626 16:31:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:28.626 16:31:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:28.626 16:31:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:28.626 16:31:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:28.626 16:31:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 
netns cvl_0_0_ns_spdk 00:23:28.626 16:31:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:28.626 16:31:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:28.626 16:31:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:28.626 16:31:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:28.626 16:31:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:28.626 16:31:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:28.626 16:31:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:28.626 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:28.626 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.564 ms 00:23:28.626 00:23:28.626 --- 10.0.0.2 ping statistics --- 00:23:28.626 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:28.626 rtt min/avg/max/mdev = 0.564/0.564/0.564/0.000 ms 00:23:28.626 16:31:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:28.626 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:28.626 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.369 ms 00:23:28.626 00:23:28.626 --- 10.0.0.1 ping statistics --- 00:23:28.626 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:28.626 rtt min/avg/max/mdev = 0.369/0.369/0.369/0.000 ms 00:23:28.626 16:31:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:28.626 16:31:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:23:28.626 16:31:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:28.626 16:31:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:28.626 16:31:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:28.626 16:31:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:28.627 16:31:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:28.627 16:31:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:28.627 16:31:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:28.627 16:31:55 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@69 -- # nvmfappstart -m 0xF --wait-for-rpc 00:23:28.627 16:31:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:28.627 16:31:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@723 -- # xtrace_disable 00:23:28.627 16:31:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:28.627 16:31:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=3172796 00:23:28.627 16:31:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 3172796 00:23:28.627 16:31:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:23:28.627 16:31:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@830 
-- # '[' -z 3172796 ']' 00:23:28.627 16:31:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:28.627 16:31:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@835 -- # local max_retries=100 00:23:28.627 16:31:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:28.627 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:28.627 16:31:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@839 -- # xtrace_disable 00:23:28.627 16:31:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:28.889 [2024-06-07 16:31:55.516114] Starting SPDK v24.09-pre git sha1 5a57befde / DPDK 24.03.0 initialization... 00:23:28.889 [2024-06-07 16:31:55.516185] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:28.889 EAL: No free 2048 kB hugepages reported on node 1 00:23:28.889 [2024-06-07 16:31:55.587919] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:28.889 [2024-06-07 16:31:55.666199] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:28.889 [2024-06-07 16:31:55.666237] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:28.889 [2024-06-07 16:31:55.666246] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:28.889 [2024-06-07 16:31:55.666253] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:28.889 [2024-06-07 16:31:55.666259] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
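The nvmf_tcp_init sequence traced above (flush addresses, create a network namespace, move the target port into it, assign 10.0.0.1/10.0.0.2, open TCP port 4420, verify with ping) can be condensed into a standalone sketch. Interface names and addresses follow the log; `RUN=echo` makes it a dry run by default, since the real commands need root and a two-port NIC.

```shell
# Sketch of the nvmf_tcp_init steps from the trace above.
# RUN defaults to echo (dry run); the real commands need root privileges.
RUN=${RUN:-echo}
TARGET_IF=cvl_0_0          # moved into the namespace, carries the target IP
INITIATOR_IF=cvl_0_1       # stays in the root namespace
NS=cvl_0_0_ns_spdk
INITIATOR_IP=10.0.0.1
TARGET_IP=10.0.0.2

$RUN ip -4 addr flush "$TARGET_IF"
$RUN ip -4 addr flush "$INITIATOR_IF"
$RUN ip netns add "$NS"
$RUN ip link set "$TARGET_IF" netns "$NS"
$RUN ip addr add "$INITIATOR_IP/24" dev "$INITIATOR_IF"
$RUN ip netns exec "$NS" ip addr add "$TARGET_IP/24" dev "$TARGET_IF"
$RUN ip link set "$INITIATOR_IF" up
$RUN ip netns exec "$NS" ip link set "$TARGET_IF" up
$RUN ip netns exec "$NS" ip link set lo up
# Allow NVMe/TCP (port 4420) in from the initiator side
$RUN iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
# Connectivity check, as the test does before proceeding
$RUN ping -c 1 "$TARGET_IP"
```

With `RUN=echo` this prints the command sequence instead of executing it, which is useful for reviewing what the test harness will do to the host's network configuration.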
00:23:28.889 [2024-06-07 16:31:55.666394] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:23:28.889 [2024-06-07 16:31:55.666519] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 2 00:23:28.889 [2024-06-07 16:31:55.666573] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:23:28.889 [2024-06-07 16:31:55.666575] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 3 00:23:29.461 16:31:56 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:23:29.461 16:31:56 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@863 -- # return 0 00:23:29.461 16:31:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:29.461 16:31:56 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@729 -- # xtrace_disable 00:23:29.461 16:31:56 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:29.721 16:31:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:29.721 16:31:56 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@70 -- # adq_configure_nvmf_target 0 00:23:29.721 16:31:56 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:23:29.721 16:31:56 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:23:29.721 16:31:56 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:29.722 16:31:56 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:29.722 16:31:56 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:29.722 16:31:56 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:23:29.722 16:31:56 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:23:29.722 16:31:56 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 
00:23:29.722 16:31:56 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:29.722 16:31:56 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:29.722 16:31:56 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:23:29.722 16:31:56 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:29.722 16:31:56 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:29.722 16:31:56 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:29.722 16:31:56 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:23:29.722 16:31:56 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:29.722 16:31:56 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:29.722 [2024-06-07 16:31:56.461352] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:29.722 16:31:56 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:29.722 16:31:56 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:23:29.722 16:31:56 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:29.722 16:31:56 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:29.722 Malloc1 00:23:29.722 16:31:56 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:29.722 16:31:56 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:29.722 16:31:56 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:29.722 16:31:56 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:29.722 16:31:56 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:29.722 
16:31:56 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:23:29.722 16:31:56 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:29.722 16:31:56 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:29.722 16:31:56 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:29.722 16:31:56 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:29.722 16:31:56 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:29.722 16:31:56 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:29.722 [2024-06-07 16:31:56.520683] tcp.c: 982:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:29.722 16:31:56 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:29.722 16:31:56 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@74 -- # perfpid=3172990 00:23:29.722 16:31:56 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@75 -- # sleep 2 00:23:29.722 16:31:56 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:23:29.722 EAL: No free 2048 kB hugepages reported on node 1 00:23:32.293 16:31:58 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # rpc_cmd nvmf_get_stats 00:23:32.293 16:31:58 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:32.293 16:31:58 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:32.293 16:31:58 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:32.293 16:31:58 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmf_stats='{ 00:23:32.293 
"tick_rate": 2400000000, 00:23:32.293 "poll_groups": [ 00:23:32.293 { 00:23:32.293 "name": "nvmf_tgt_poll_group_000", 00:23:32.293 "admin_qpairs": 1, 00:23:32.293 "io_qpairs": 1, 00:23:32.293 "current_admin_qpairs": 1, 00:23:32.293 "current_io_qpairs": 1, 00:23:32.293 "pending_bdev_io": 0, 00:23:32.293 "completed_nvme_io": 19839, 00:23:32.293 "transports": [ 00:23:32.293 { 00:23:32.293 "trtype": "TCP" 00:23:32.293 } 00:23:32.293 ] 00:23:32.293 }, 00:23:32.293 { 00:23:32.293 "name": "nvmf_tgt_poll_group_001", 00:23:32.293 "admin_qpairs": 0, 00:23:32.293 "io_qpairs": 1, 00:23:32.293 "current_admin_qpairs": 0, 00:23:32.293 "current_io_qpairs": 1, 00:23:32.293 "pending_bdev_io": 0, 00:23:32.293 "completed_nvme_io": 28401, 00:23:32.293 "transports": [ 00:23:32.293 { 00:23:32.293 "trtype": "TCP" 00:23:32.293 } 00:23:32.293 ] 00:23:32.293 }, 00:23:32.293 { 00:23:32.293 "name": "nvmf_tgt_poll_group_002", 00:23:32.293 "admin_qpairs": 0, 00:23:32.293 "io_qpairs": 1, 00:23:32.293 "current_admin_qpairs": 0, 00:23:32.293 "current_io_qpairs": 1, 00:23:32.293 "pending_bdev_io": 0, 00:23:32.293 "completed_nvme_io": 20537, 00:23:32.293 "transports": [ 00:23:32.293 { 00:23:32.293 "trtype": "TCP" 00:23:32.293 } 00:23:32.293 ] 00:23:32.293 }, 00:23:32.293 { 00:23:32.293 "name": "nvmf_tgt_poll_group_003", 00:23:32.293 "admin_qpairs": 0, 00:23:32.293 "io_qpairs": 1, 00:23:32.293 "current_admin_qpairs": 0, 00:23:32.293 "current_io_qpairs": 1, 00:23:32.293 "pending_bdev_io": 0, 00:23:32.293 "completed_nvme_io": 20485, 00:23:32.293 "transports": [ 00:23:32.293 { 00:23:32.293 "trtype": "TCP" 00:23:32.293 } 00:23:32.293 ] 00:23:32.293 } 00:23:32.293 ] 00:23:32.293 }' 00:23:32.293 16:31:58 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:23:32.293 16:31:58 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # wc -l 00:23:32.293 16:31:58 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # count=4 00:23:32.293 16:31:58 
nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@79 -- # [[ 4 -ne 4 ]] 00:23:32.293 16:31:58 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@83 -- # wait 3172990 00:23:40.433 Initializing NVMe Controllers 00:23:40.433 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:40.433 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:23:40.433 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:23:40.433 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:23:40.433 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:23:40.433 Initialization complete. Launching workers. 00:23:40.433 ======================================================== 00:23:40.433 Latency(us) 00:23:40.433 Device Information : IOPS MiB/s Average min max 00:23:40.433 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 11336.24 44.28 5646.35 1227.13 9090.78 00:23:40.433 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 14799.52 57.81 4324.59 977.03 9309.06 00:23:40.433 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 14089.83 55.04 4541.70 1405.09 11758.09 00:23:40.433 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 13342.33 52.12 4796.32 1314.32 11073.66 00:23:40.433 ======================================================== 00:23:40.433 Total : 53567.92 209.25 4778.91 977.03 11758.09 00:23:40.433 00:23:40.433 16:32:06 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@84 -- # nvmftestfini 00:23:40.433 16:32:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:40.433 16:32:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:23:40.433 16:32:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:40.433 16:32:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:23:40.433 16:32:06 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:40.433 16:32:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:40.433 rmmod nvme_tcp 00:23:40.433 rmmod nvme_fabrics 00:23:40.433 rmmod nvme_keyring 00:23:40.433 16:32:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:40.433 16:32:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:23:40.433 16:32:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:23:40.433 16:32:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 3172796 ']' 00:23:40.433 16:32:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 3172796 00:23:40.433 16:32:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@949 -- # '[' -z 3172796 ']' 00:23:40.433 16:32:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # kill -0 3172796 00:23:40.433 16:32:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # uname 00:23:40.433 16:32:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:23:40.433 16:32:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3172796 00:23:40.433 16:32:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:23:40.433 16:32:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:23:40.433 16:32:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3172796' 00:23:40.433 killing process with pid 3172796 00:23:40.433 16:32:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@968 -- # kill 3172796 00:23:40.433 16:32:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@973 -- # wait 3172796 00:23:40.433 16:32:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:40.433 16:32:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:40.433 16:32:06 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:40.433 16:32:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:40.433 16:32:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:40.433 16:32:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:40.433 16:32:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:40.433 16:32:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:42.348 16:32:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:42.348 16:32:09 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@86 -- # adq_reload_driver 00:23:42.348 16:32:09 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:23:44.264 16:32:10 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:23:46.180 16:32:12 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:23:51.474 16:32:17 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@89 -- # nvmftestinit 00:23:51.474 16:32:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:51.474 16:32:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:51.474 16:32:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:51.474 16:32:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:51.474 16:32:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:51.474 16:32:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:51.474 16:32:17 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:51.474 16:32:17 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:51.474 16:32:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ 
phy != virt ]] 00:23:51.474 16:32:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:51.474 16:32:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:23:51.474 16:32:17 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:51.474 16:32:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:51.474 16:32:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:23:51.474 16:32:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:51.474 16:32:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:51.474 16:32:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:51.474 16:32:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:51.474 16:32:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:51.474 16:32:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:23:51.474 16:32:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:51.474 16:32:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:23:51.474 16:32:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:23:51.474 16:32:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:23:51.474 16:32:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:23:51.474 16:32:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:23:51.474 16:32:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:23:51.474 16:32:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:51.474 16:32:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:51.474 16:32:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:51.474 16:32:17 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:51.474 16:32:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:51.474 16:32:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:51.474 16:32:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:51.474 16:32:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:51.474 16:32:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:51.474 16:32:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:51.474 16:32:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:51.474 16:32:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:51.474 16:32:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:51.474 16:32:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:51.474 16:32:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:51.474 16:32:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:51.474 16:32:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:51.474 16:32:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:51.474 16:32:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:23:51.474 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:23:51.474 16:32:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:51.474 16:32:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:51.474 16:32:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:23:51.474 16:32:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:51.474 16:32:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:51.474 16:32:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:51.474 16:32:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:23:51.474 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:23:51.474 16:32:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:51.474 16:32:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:51.474 16:32:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:51.474 16:32:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:51.474 16:32:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:51.474 16:32:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:51.474 16:32:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:51.474 16:32:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:51.474 16:32:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:51.474 16:32:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:51.474 16:32:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:51.474 16:32:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:51.474 16:32:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:51.474 16:32:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:51.474 16:32:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:51.474 16:32:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 
'Found net devices under 0000:4b:00.0: cvl_0_0' 00:23:51.474 Found net devices under 0000:4b:00.0: cvl_0_0 00:23:51.474 16:32:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:51.474 16:32:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:51.474 16:32:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:51.474 16:32:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:51.474 16:32:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:51.474 16:32:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:51.474 16:32:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:51.474 16:32:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:51.474 16:32:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:23:51.474 Found net devices under 0000:4b:00.1: cvl_0_1 00:23:51.474 16:32:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:51.474 16:32:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:51.474 16:32:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:23:51.474 16:32:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:51.474 16:32:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:51.474 16:32:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:51.474 16:32:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:51.474 16:32:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:51.474 16:32:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:51.474 16:32:17 nvmf_tcp.nvmf_perf_adq -- 
nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:51.474 16:32:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:51.474 16:32:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:51.474 16:32:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:51.474 16:32:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:51.474 16:32:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:51.474 16:32:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:51.474 16:32:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:51.474 16:32:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:51.474 16:32:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:51.474 16:32:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:51.474 16:32:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:51.474 16:32:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:51.474 16:32:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:51.474 16:32:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:51.474 16:32:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:51.475 16:32:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:51.475 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:23:51.475 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.331 ms 00:23:51.475 00:23:51.475 --- 10.0.0.2 ping statistics --- 00:23:51.475 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:51.475 rtt min/avg/max/mdev = 0.331/0.331/0.331/0.000 ms 00:23:51.475 16:32:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:51.475 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:51.475 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.353 ms 00:23:51.475 00:23:51.475 --- 10.0.0.1 ping statistics --- 00:23:51.475 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:51.475 rtt min/avg/max/mdev = 0.353/0.353/0.353/0.000 ms 00:23:51.475 16:32:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:51.475 16:32:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:23:51.475 16:32:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:51.475 16:32:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:51.475 16:32:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:51.475 16:32:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:51.475 16:32:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:51.475 16:32:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:51.475 16:32:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:51.475 16:32:18 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@90 -- # adq_configure_driver 00:23:51.475 16:32:18 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:23:51.475 16:32:18 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 
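For reference, the poll-group check run earlier against `nvmf_get_stats` (which arrived at `count=4` and compared it with `[[ 4 -ne 4 ]]`) reduces to counting poll groups that currently carry one I/O qpair. The sketch below substitutes a trimmed sample of the stats JSON for a live RPC call, and a `grep -c` line count for the `jq` filter the test actually uses.

```shell
# Count poll groups with exactly one active I/O qpair, mirroring the
# perf_adq check on nvmf_get_stats output. The trimmed JSON below stands
# in for a live "rpc.py nvmf_get_stats" call.
nvmf_stats='{
  "poll_groups": [
    { "name": "nvmf_tgt_poll_group_000", "current_io_qpairs": 1 },
    { "name": "nvmf_tgt_poll_group_001", "current_io_qpairs": 1 },
    { "name": "nvmf_tgt_poll_group_002", "current_io_qpairs": 1 },
    { "name": "nvmf_tgt_poll_group_003", "current_io_qpairs": 1 }
  ]
}'
count=$(printf '%s\n' "$nvmf_stats" | grep -c '"current_io_qpairs": 1')
if [ "$count" -ne 4 ]; then
  echo "expected 4 busy poll groups, got $count" >&2
  exit 1
fi
echo "count=$count"
```

All four cores (mask 0xF) hosting a qpair is what confirms the perf traffic was spread across every reactor rather than landing on one poll group.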
00:23:51.475 16:32:18 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:23:51.475 net.core.busy_poll = 1 00:23:51.475 16:32:18 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:23:51.475 net.core.busy_read = 1 00:23:51.475 16:32:18 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:23:51.475 16:32:18 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:23:51.475 16:32:18 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:23:51.475 16:32:18 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:23:51.475 16:32:18 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:23:51.735 16:32:18 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@91 -- # nvmfappstart -m 0xF --wait-for-rpc 00:23:51.735 16:32:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:51.735 16:32:18 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@723 -- # xtrace_disable 00:23:51.735 16:32:18 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:51.735 16:32:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=3177704 00:23:51.735 16:32:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 3177704 00:23:51.735 16:32:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:23:51.735 16:32:18 
nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@830 -- # '[' -z 3177704 ']' 00:23:51.735 16:32:18 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:51.735 16:32:18 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@835 -- # local max_retries=100 00:23:51.735 16:32:18 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:51.735 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:51.735 16:32:18 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@839 -- # xtrace_disable 00:23:51.735 16:32:18 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:51.735 [2024-06-07 16:32:18.402418] Starting SPDK v24.09-pre git sha1 5a57befde / DPDK 24.03.0 initialization... 00:23:51.735 [2024-06-07 16:32:18.402469] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:51.735 EAL: No free 2048 kB hugepages reported on node 1 00:23:51.735 [2024-06-07 16:32:18.469119] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:51.735 [2024-06-07 16:32:18.534342] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:51.735 [2024-06-07 16:32:18.534377] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:51.735 [2024-06-07 16:32:18.534385] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:51.735 [2024-06-07 16:32:18.534391] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:51.736 [2024-06-07 16:32:18.534397] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:51.736 [2024-06-07 16:32:18.534443] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:23:51.736 [2024-06-07 16:32:18.534568] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 2 00:23:51.736 [2024-06-07 16:32:18.534780] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 3 00:23:51.736 [2024-06-07 16:32:18.534781] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:23:52.677 16:32:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:23:52.677 16:32:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@863 -- # return 0 00:23:52.677 16:32:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:52.677 16:32:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@729 -- # xtrace_disable 00:23:52.677 16:32:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:52.677 16:32:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:52.677 16:32:19 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@92 -- # adq_configure_nvmf_target 1 00:23:52.677 16:32:19 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:23:52.677 16:32:19 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:23:52.677 16:32:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:52.677 16:32:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:52.677 16:32:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:52.677 16:32:19 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:23:52.677 16:32:19 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:23:52.677 16:32:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 
00:23:52.677 16:32:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:52.677 16:32:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:52.677 16:32:19 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:23:52.677 16:32:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:52.677 16:32:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:52.678 16:32:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:52.678 16:32:19 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:23:52.678 16:32:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:52.678 16:32:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:52.678 [2024-06-07 16:32:19.332655] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:52.678 16:32:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:52.678 16:32:19 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:23:52.678 16:32:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:52.678 16:32:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:52.678 Malloc1 00:23:52.678 16:32:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:52.678 16:32:19 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:52.678 16:32:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:52.678 16:32:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:52.678 16:32:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:52.678 
16:32:19 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:23:52.678 16:32:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:52.678 16:32:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:52.678 16:32:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:52.678 16:32:19 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:52.678 16:32:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:52.678 16:32:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:52.678 [2024-06-07 16:32:19.389162] tcp.c: 982:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:52.678 16:32:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:52.678 16:32:19 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@96 -- # perfpid=3177811 00:23:52.678 16:32:19 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@97 -- # sleep 2 00:23:52.678 16:32:19 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:23:52.678 EAL: No free 2048 kB hugepages reported on node 1 00:23:54.594 16:32:21 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # rpc_cmd nvmf_get_stats 00:23:54.594 16:32:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:54.594 16:32:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:54.594 16:32:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:54.594 16:32:21 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmf_stats='{ 00:23:54.594 
"tick_rate": 2400000000, 00:23:54.594 "poll_groups": [ 00:23:54.594 { 00:23:54.594 "name": "nvmf_tgt_poll_group_000", 00:23:54.594 "admin_qpairs": 1, 00:23:54.594 "io_qpairs": 2, 00:23:54.594 "current_admin_qpairs": 1, 00:23:54.594 "current_io_qpairs": 2, 00:23:54.594 "pending_bdev_io": 0, 00:23:54.594 "completed_nvme_io": 40110, 00:23:54.594 "transports": [ 00:23:54.594 { 00:23:54.594 "trtype": "TCP" 00:23:54.594 } 00:23:54.594 ] 00:23:54.594 }, 00:23:54.594 { 00:23:54.594 "name": "nvmf_tgt_poll_group_001", 00:23:54.594 "admin_qpairs": 0, 00:23:54.594 "io_qpairs": 2, 00:23:54.594 "current_admin_qpairs": 0, 00:23:54.594 "current_io_qpairs": 2, 00:23:54.594 "pending_bdev_io": 0, 00:23:54.594 "completed_nvme_io": 34606, 00:23:54.594 "transports": [ 00:23:54.594 { 00:23:54.594 "trtype": "TCP" 00:23:54.594 } 00:23:54.594 ] 00:23:54.594 }, 00:23:54.594 { 00:23:54.594 "name": "nvmf_tgt_poll_group_002", 00:23:54.594 "admin_qpairs": 0, 00:23:54.594 "io_qpairs": 0, 00:23:54.594 "current_admin_qpairs": 0, 00:23:54.594 "current_io_qpairs": 0, 00:23:54.594 "pending_bdev_io": 0, 00:23:54.594 "completed_nvme_io": 0, 00:23:54.594 "transports": [ 00:23:54.594 { 00:23:54.594 "trtype": "TCP" 00:23:54.594 } 00:23:54.594 ] 00:23:54.594 }, 00:23:54.594 { 00:23:54.594 "name": "nvmf_tgt_poll_group_003", 00:23:54.594 "admin_qpairs": 0, 00:23:54.594 "io_qpairs": 0, 00:23:54.594 "current_admin_qpairs": 0, 00:23:54.594 "current_io_qpairs": 0, 00:23:54.594 "pending_bdev_io": 0, 00:23:54.594 "completed_nvme_io": 0, 00:23:54.594 "transports": [ 00:23:54.594 { 00:23:54.594 "trtype": "TCP" 00:23:54.594 } 00:23:54.594 ] 00:23:54.594 } 00:23:54.594 ] 00:23:54.594 }' 00:23:54.594 16:32:21 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:23:54.594 16:32:21 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # wc -l 00:23:54.855 16:32:21 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # count=2 00:23:54.855 16:32:21 
nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@101 -- # [[ 2 -lt 2 ]] 00:23:54.855 16:32:21 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@106 -- # wait 3177811 00:24:03.048 Initializing NVMe Controllers 00:24:03.048 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:03.048 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:24:03.048 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:24:03.048 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:24:03.048 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:24:03.048 Initialization complete. Launching workers. 00:24:03.048 ======================================================== 00:24:03.048 Latency(us) 00:24:03.048 Device Information : IOPS MiB/s Average min max 00:24:03.048 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10807.00 42.21 5938.10 1306.53 50339.81 00:24:03.048 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 11856.60 46.31 5417.80 1151.54 49682.61 00:24:03.048 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 7671.70 29.97 8344.40 1277.20 51692.43 00:24:03.048 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 9840.10 38.44 6504.47 1217.02 49987.79 00:24:03.048 ======================================================== 00:24:03.048 Total : 40175.40 156.94 6382.76 1151.54 51692.43 00:24:03.048 00:24:03.048 16:32:29 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmftestfini 00:24:03.048 16:32:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:03.048 16:32:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:24:03.048 16:32:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:03.048 16:32:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:24:03.048 
16:32:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:03.048 16:32:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:03.048 rmmod nvme_tcp 00:24:03.048 rmmod nvme_fabrics 00:24:03.048 rmmod nvme_keyring 00:24:03.048 16:32:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:03.048 16:32:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:24:03.048 16:32:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:24:03.048 16:32:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 3177704 ']' 00:24:03.048 16:32:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 3177704 00:24:03.048 16:32:29 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@949 -- # '[' -z 3177704 ']' 00:24:03.048 16:32:29 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # kill -0 3177704 00:24:03.048 16:32:29 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # uname 00:24:03.048 16:32:29 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:24:03.048 16:32:29 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3177704 00:24:03.048 16:32:29 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:24:03.048 16:32:29 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:24:03.048 16:32:29 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3177704' 00:24:03.048 killing process with pid 3177704 00:24:03.048 16:32:29 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@968 -- # kill 3177704 00:24:03.048 16:32:29 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@973 -- # wait 3177704 00:24:03.048 16:32:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:03.048 16:32:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:03.048 16:32:29 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:03.048 16:32:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:03.048 16:32:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:03.048 16:32:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:03.048 16:32:29 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:03.048 16:32:29 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:06.350 16:32:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:06.350 16:32:32 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:24:06.350 00:24:06.350 real 0m52.926s 00:24:06.350 user 2m47.076s 00:24:06.350 sys 0m11.524s 00:24:06.350 16:32:32 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1125 -- # xtrace_disable 00:24:06.350 16:32:32 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:06.350 ************************************ 00:24:06.350 END TEST nvmf_perf_adq 00:24:06.350 ************************************ 00:24:06.350 16:32:32 nvmf_tcp -- nvmf/nvmf.sh@84 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:24:06.350 16:32:32 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:24:06.350 16:32:32 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:24:06.350 16:32:32 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:06.350 ************************************ 00:24:06.350 START TEST nvmf_shutdown 00:24:06.350 ************************************ 00:24:06.350 16:32:33 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:24:06.350 * Looking for test storage... 
00:24:06.350 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:06.350 16:32:33 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:06.350 16:32:33 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:24:06.350 16:32:33 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:06.350 16:32:33 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:06.350 16:32:33 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:06.350 16:32:33 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:06.350 16:32:33 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:06.350 16:32:33 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:06.350 16:32:33 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:06.350 16:32:33 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:06.350 16:32:33 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:06.350 16:32:33 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:06.350 16:32:33 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:06.350 16:32:33 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:06.350 16:32:33 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:06.350 16:32:33 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:06.350 16:32:33 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:06.350 16:32:33 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:06.350 16:32:33 
nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:06.350 16:32:33 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:06.350 16:32:33 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:06.350 16:32:33 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:06.350 16:32:33 nvmf_tcp.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:06.350 16:32:33 nvmf_tcp.nvmf_shutdown -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:06.350 16:32:33 nvmf_tcp.nvmf_shutdown -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:06.350 16:32:33 nvmf_tcp.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:24:06.350 16:32:33 nvmf_tcp.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:06.350 16:32:33 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@47 -- # : 0 00:24:06.350 16:32:33 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:06.350 16:32:33 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:06.350 16:32:33 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:06.350 16:32:33 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:06.350 16:32:33 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:06.350 16:32:33 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:06.350 16:32:33 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:06.350 16:32:33 
nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:06.350 16:32:33 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:06.350 16:32:33 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:06.350 16:32:33 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:24:06.350 16:32:33 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:24:06.350 16:32:33 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1106 -- # xtrace_disable 00:24:06.350 16:32:33 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:24:06.350 ************************************ 00:24:06.350 START TEST nvmf_shutdown_tc1 00:24:06.350 ************************************ 00:24:06.350 16:32:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1124 -- # nvmf_shutdown_tc1 00:24:06.350 16:32:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@74 -- # starttarget 00:24:06.350 16:32:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@15 -- # nvmftestinit 00:24:06.350 16:32:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:06.350 16:32:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:06.350 16:32:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:06.350 16:32:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:06.350 16:32:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:06.350 16:32:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:06.350 16:32:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:06.350 16:32:33 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:06.350 16:32:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:06.350 16:32:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:06.350 16:32:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@285 -- # xtrace_disable 00:24:06.350 16:32:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:24:14.612 16:32:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:14.612 16:32:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # pci_devs=() 00:24:14.612 16:32:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:14.612 16:32:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:14.612 16:32:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:14.612 16:32:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:14.612 16:32:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:14.612 16:32:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # net_devs=() 00:24:14.612 16:32:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:14.612 16:32:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # e810=() 00:24:14.612 16:32:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # local -ga e810 00:24:14.612 16:32:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # x722=() 00:24:14.612 16:32:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # local -ga x722 00:24:14.612 16:32:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@298 -- # mlx=() 00:24:14.612 16:32:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # local -ga mlx 00:24:14.612 16:32:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:14.612 16:32:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:14.612 16:32:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:14.612 16:32:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:14.612 16:32:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:14.612 16:32:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:14.612 16:32:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:14.612 16:32:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:14.612 16:32:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:14.612 16:32:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:14.612 16:32:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:14.612 16:32:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:14.612 16:32:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:14.612 16:32:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:14.612 16:32:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 
-- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:14.612 16:32:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:14.612 16:32:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:14.612 16:32:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:14.612 16:32:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:24:14.612 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:24:14.612 16:32:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:14.612 16:32:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:14.612 16:32:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:14.612 16:32:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:14.612 16:32:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:14.612 16:32:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:14.612 16:32:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:24:14.612 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:24:14.612 16:32:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:14.612 16:32:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:14.612 16:32:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:14.612 16:32:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:14.612 16:32:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma 
]] 00:24:14.612 16:32:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:14.612 16:32:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:14.612 16:32:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:14.612 16:32:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:14.612 16:32:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:14.612 16:32:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:14.612 16:32:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:14.612 16:32:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:14.612 16:32:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:14.612 16:32:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:14.612 16:32:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:24:14.612 Found net devices under 0000:4b:00.0: cvl_0_0 00:24:14.612 16:32:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:14.612 16:32:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:14.612 16:32:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:14.612 16:32:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:14.613 16:32:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:14.613 16:32:40 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:14.613 16:32:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:14.613 16:32:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:14.613 16:32:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:24:14.613 Found net devices under 0000:4b:00.1: cvl_0_1 00:24:14.613 16:32:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:14.613 16:32:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:14.613 16:32:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # is_hw=yes 00:24:14.613 16:32:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:14.613 16:32:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:14.613 16:32:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:14.613 16:32:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:14.613 16:32:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:14.613 16:32:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:14.613 16:32:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:14.613 16:32:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:14.613 16:32:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:14.613 16:32:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:14.613 
16:32:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:14.613 16:32:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:14.613 16:32:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:14.613 16:32:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:14.613 16:32:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:14.613 16:32:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:14.613 16:32:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:14.613 16:32:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:14.613 16:32:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:14.613 16:32:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:14.613 16:32:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:14.613 16:32:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:14.613 16:32:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:14.613 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:24:14.613 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.669 ms 00:24:14.613 00:24:14.613 --- 10.0.0.2 ping statistics --- 00:24:14.613 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:14.613 rtt min/avg/max/mdev = 0.669/0.669/0.669/0.000 ms 00:24:14.613 16:32:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:14.613 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:14.613 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.321 ms 00:24:14.613 00:24:14.613 --- 10.0.0.1 ping statistics --- 00:24:14.613 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:14.613 rtt min/avg/max/mdev = 0.321/0.321/0.321/0.000 ms 00:24:14.613 16:32:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:14.613 16:32:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # return 0 00:24:14.613 16:32:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:14.613 16:32:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:14.613 16:32:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:14.613 16:32:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:14.613 16:32:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:14.613 16:32:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:14.613 16:32:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:14.613 16:32:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:24:14.613 16:32:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 
00:24:14.613 16:32:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@723 -- # xtrace_disable 00:24:14.613 16:32:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:24:14.613 16:32:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@481 -- # nvmfpid=3184267 00:24:14.613 16:32:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # waitforlisten 3184267 00:24:14.613 16:32:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:24:14.613 16:32:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@830 -- # '[' -z 3184267 ']' 00:24:14.613 16:32:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:14.613 16:32:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # local max_retries=100 00:24:14.613 16:32:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:14.613 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:14.613 16:32:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # xtrace_disable 00:24:14.613 16:32:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:24:14.613 [2024-06-07 16:32:40.473198] Starting SPDK v24.09-pre git sha1 5a57befde / DPDK 24.03.0 initialization... 
00:24:14.613 [2024-06-07 16:32:40.473261] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:14.613 EAL: No free 2048 kB hugepages reported on node 1 00:24:14.613 [2024-06-07 16:32:40.560990] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:14.613 [2024-06-07 16:32:40.655981] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:14.613 [2024-06-07 16:32:40.656038] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:14.613 [2024-06-07 16:32:40.656047] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:14.613 [2024-06-07 16:32:40.656054] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:14.613 [2024-06-07 16:32:40.656060] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:14.613 [2024-06-07 16:32:40.656190] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 2 00:24:14.613 [2024-06-07 16:32:40.656357] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 3 00:24:14.613 [2024-06-07 16:32:40.656524] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 4 00:24:14.613 [2024-06-07 16:32:40.656525] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:24:14.613 16:32:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:24:14.613 16:32:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@863 -- # return 0 00:24:14.613 16:32:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:14.613 16:32:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@729 -- # xtrace_disable 00:24:14.613 16:32:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:24:14.613 16:32:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:14.613 16:32:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:14.613 16:32:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:14.613 16:32:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:24:14.613 [2024-06-07 16:32:41.299940] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:14.613 16:32:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:14.613 16:32:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:24:14.613 16:32:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:24:14.613 
16:32:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@723 -- # xtrace_disable 00:24:14.613 16:32:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:24:14.613 16:32:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:24:14.613 16:32:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:14.613 16:32:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:24:14.613 16:32:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:14.613 16:32:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:24:14.613 16:32:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:14.613 16:32:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:24:14.613 16:32:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:14.613 16:32:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:24:14.613 16:32:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:14.613 16:32:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:24:14.613 16:32:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:14.613 16:32:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:24:14.613 16:32:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:14.613 16:32:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:24:14.613 16:32:41 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:14.613 16:32:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:24:14.613 16:32:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:14.613 16:32:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:24:14.614 16:32:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:14.614 16:32:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:24:14.614 16:32:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@35 -- # rpc_cmd 00:24:14.614 16:32:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:14.614 16:32:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:24:14.614 Malloc1 00:24:14.614 [2024-06-07 16:32:41.403412] tcp.c: 982:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:14.614 Malloc2 00:24:14.614 Malloc3 00:24:14.874 Malloc4 00:24:14.874 Malloc5 00:24:14.874 Malloc6 00:24:14.874 Malloc7 00:24:14.874 Malloc8 00:24:14.874 Malloc9 00:24:15.136 Malloc10 00:24:15.136 16:32:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:15.136 16:32:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:24:15.136 16:32:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@729 -- # xtrace_disable 00:24:15.136 16:32:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:24:15.136 16:32:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # perfpid=3184642 00:24:15.136 16:32:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # waitforlisten 3184642 
/var/tmp/bdevperf.sock 00:24:15.136 16:32:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@830 -- # '[' -z 3184642 ']' 00:24:15.136 16:32:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:15.136 16:32:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # local max_retries=100 00:24:15.136 16:32:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:15.136 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:15.136 16:32:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:24:15.136 16:32:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # xtrace_disable 00:24:15.136 16:32:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:24:15.136 16:32:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:24:15.136 16:32:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:24:15.136 16:32:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:24:15.136 16:32:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:15.136 16:32:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:15.136 { 00:24:15.136 "params": { 00:24:15.136 "name": "Nvme$subsystem", 00:24:15.136 "trtype": "$TEST_TRANSPORT", 00:24:15.136 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:15.136 "adrfam": "ipv4", 00:24:15.136 "trsvcid": "$NVMF_PORT", 00:24:15.136 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:24:15.136 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:15.136 "hdgst": ${hdgst:-false}, 00:24:15.136 "ddgst": ${ddgst:-false} 00:24:15.136 }, 00:24:15.136 "method": "bdev_nvme_attach_controller" 00:24:15.136 } 00:24:15.136 EOF 00:24:15.136 )") 00:24:15.136 16:32:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:24:15.136 16:32:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:15.136 16:32:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:15.136 { 00:24:15.136 "params": { 00:24:15.136 "name": "Nvme$subsystem", 00:24:15.136 "trtype": "$TEST_TRANSPORT", 00:24:15.136 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:15.136 "adrfam": "ipv4", 00:24:15.136 "trsvcid": "$NVMF_PORT", 00:24:15.136 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:15.136 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:15.136 "hdgst": ${hdgst:-false}, 00:24:15.136 "ddgst": ${ddgst:-false} 00:24:15.136 }, 00:24:15.136 "method": "bdev_nvme_attach_controller" 00:24:15.136 } 00:24:15.136 EOF 00:24:15.136 )") 00:24:15.136 16:32:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:24:15.136 16:32:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:15.136 16:32:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:15.136 { 00:24:15.136 "params": { 00:24:15.136 "name": "Nvme$subsystem", 00:24:15.136 "trtype": "$TEST_TRANSPORT", 00:24:15.136 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:15.136 "adrfam": "ipv4", 00:24:15.136 "trsvcid": "$NVMF_PORT", 00:24:15.136 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:15.136 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:15.136 "hdgst": ${hdgst:-false}, 00:24:15.136 "ddgst": ${ddgst:-false} 00:24:15.136 }, 00:24:15.136 "method": 
"bdev_nvme_attach_controller" 00:24:15.136 } 00:24:15.136 EOF 00:24:15.136 )") 00:24:15.136 16:32:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:24:15.136 16:32:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:15.136 16:32:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:15.136 { 00:24:15.136 "params": { 00:24:15.136 "name": "Nvme$subsystem", 00:24:15.136 "trtype": "$TEST_TRANSPORT", 00:24:15.136 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:15.136 "adrfam": "ipv4", 00:24:15.136 "trsvcid": "$NVMF_PORT", 00:24:15.136 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:15.136 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:15.136 "hdgst": ${hdgst:-false}, 00:24:15.136 "ddgst": ${ddgst:-false} 00:24:15.136 }, 00:24:15.136 "method": "bdev_nvme_attach_controller" 00:24:15.136 } 00:24:15.136 EOF 00:24:15.136 )") 00:24:15.136 16:32:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:24:15.136 16:32:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:15.136 16:32:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:15.136 { 00:24:15.136 "params": { 00:24:15.136 "name": "Nvme$subsystem", 00:24:15.136 "trtype": "$TEST_TRANSPORT", 00:24:15.136 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:15.136 "adrfam": "ipv4", 00:24:15.136 "trsvcid": "$NVMF_PORT", 00:24:15.136 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:15.136 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:15.136 "hdgst": ${hdgst:-false}, 00:24:15.136 "ddgst": ${ddgst:-false} 00:24:15.136 }, 00:24:15.136 "method": "bdev_nvme_attach_controller" 00:24:15.136 } 00:24:15.136 EOF 00:24:15.136 )") 00:24:15.136 16:32:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:24:15.136 16:32:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 
-- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:15.136 16:32:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:15.136 { 00:24:15.136 "params": { 00:24:15.136 "name": "Nvme$subsystem", 00:24:15.136 "trtype": "$TEST_TRANSPORT", 00:24:15.136 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:15.136 "adrfam": "ipv4", 00:24:15.136 "trsvcid": "$NVMF_PORT", 00:24:15.136 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:15.136 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:15.136 "hdgst": ${hdgst:-false}, 00:24:15.136 "ddgst": ${ddgst:-false} 00:24:15.136 }, 00:24:15.136 "method": "bdev_nvme_attach_controller" 00:24:15.136 } 00:24:15.136 EOF 00:24:15.136 )") 00:24:15.136 16:32:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:24:15.136 16:32:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:15.136 16:32:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:15.136 { 00:24:15.136 "params": { 00:24:15.136 "name": "Nvme$subsystem", 00:24:15.136 "trtype": "$TEST_TRANSPORT", 00:24:15.136 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:15.136 "adrfam": "ipv4", 00:24:15.136 "trsvcid": "$NVMF_PORT", 00:24:15.136 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:15.136 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:15.136 "hdgst": ${hdgst:-false}, 00:24:15.136 "ddgst": ${ddgst:-false} 00:24:15.136 }, 00:24:15.136 "method": "bdev_nvme_attach_controller" 00:24:15.136 } 00:24:15.136 EOF 00:24:15.136 )") 00:24:15.136 16:32:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:24:15.136 [2024-06-07 16:32:41.862264] Starting SPDK v24.09-pre git sha1 5a57befde / DPDK 24.03.0 initialization... 
00:24:15.136 [2024-06-07 16:32:41.862373] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:24:15.136 16:32:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:15.136 16:32:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:15.136 { 00:24:15.136 "params": { 00:24:15.136 "name": "Nvme$subsystem", 00:24:15.136 "trtype": "$TEST_TRANSPORT", 00:24:15.136 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:15.136 "adrfam": "ipv4", 00:24:15.136 "trsvcid": "$NVMF_PORT", 00:24:15.136 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:15.136 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:15.136 "hdgst": ${hdgst:-false}, 00:24:15.136 "ddgst": ${ddgst:-false} 00:24:15.136 }, 00:24:15.136 "method": "bdev_nvme_attach_controller" 00:24:15.136 } 00:24:15.136 EOF 00:24:15.136 )") 00:24:15.136 16:32:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:24:15.136 16:32:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:15.136 16:32:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:15.136 { 00:24:15.136 "params": { 00:24:15.136 "name": "Nvme$subsystem", 00:24:15.136 "trtype": "$TEST_TRANSPORT", 00:24:15.136 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:15.136 "adrfam": "ipv4", 00:24:15.136 "trsvcid": "$NVMF_PORT", 00:24:15.137 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:15.137 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:15.137 "hdgst": ${hdgst:-false}, 00:24:15.137 "ddgst": ${ddgst:-false} 00:24:15.137 }, 00:24:15.137 "method": "bdev_nvme_attach_controller" 00:24:15.137 } 00:24:15.137 EOF 00:24:15.137 )") 00:24:15.137 16:32:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@554 -- # cat 00:24:15.137 16:32:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:15.137 16:32:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:15.137 { 00:24:15.137 "params": { 00:24:15.137 "name": "Nvme$subsystem", 00:24:15.137 "trtype": "$TEST_TRANSPORT", 00:24:15.137 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:15.137 "adrfam": "ipv4", 00:24:15.137 "trsvcid": "$NVMF_PORT", 00:24:15.137 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:15.137 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:15.137 "hdgst": ${hdgst:-false}, 00:24:15.137 "ddgst": ${ddgst:-false} 00:24:15.137 }, 00:24:15.137 "method": "bdev_nvme_attach_controller" 00:24:15.137 } 00:24:15.137 EOF 00:24:15.137 )") 00:24:15.137 16:32:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:24:15.137 16:32:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 
00:24:15.137 16:32:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:24:15.137 16:32:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:24:15.137 "params": { 00:24:15.137 "name": "Nvme1", 00:24:15.137 "trtype": "tcp", 00:24:15.137 "traddr": "10.0.0.2", 00:24:15.137 "adrfam": "ipv4", 00:24:15.137 "trsvcid": "4420", 00:24:15.137 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:15.137 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:15.137 "hdgst": false, 00:24:15.137 "ddgst": false 00:24:15.137 }, 00:24:15.137 "method": "bdev_nvme_attach_controller" 00:24:15.137 },{ 00:24:15.137 "params": { 00:24:15.137 "name": "Nvme2", 00:24:15.137 "trtype": "tcp", 00:24:15.137 "traddr": "10.0.0.2", 00:24:15.137 "adrfam": "ipv4", 00:24:15.137 "trsvcid": "4420", 00:24:15.137 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:24:15.137 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:24:15.137 "hdgst": false, 00:24:15.137 "ddgst": false 00:24:15.137 }, 00:24:15.137 "method": "bdev_nvme_attach_controller" 00:24:15.137 },{ 00:24:15.137 "params": { 00:24:15.137 "name": "Nvme3", 00:24:15.137 "trtype": "tcp", 00:24:15.137 "traddr": "10.0.0.2", 00:24:15.137 "adrfam": "ipv4", 00:24:15.137 "trsvcid": "4420", 00:24:15.137 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:24:15.137 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:24:15.137 "hdgst": false, 00:24:15.137 "ddgst": false 00:24:15.137 }, 00:24:15.137 "method": "bdev_nvme_attach_controller" 00:24:15.137 },{ 00:24:15.137 "params": { 00:24:15.137 "name": "Nvme4", 00:24:15.137 "trtype": "tcp", 00:24:15.137 "traddr": "10.0.0.2", 00:24:15.137 "adrfam": "ipv4", 00:24:15.137 "trsvcid": "4420", 00:24:15.137 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:24:15.137 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:24:15.137 "hdgst": false, 00:24:15.137 "ddgst": false 00:24:15.137 }, 00:24:15.137 "method": "bdev_nvme_attach_controller" 00:24:15.137 },{ 00:24:15.137 "params": { 00:24:15.137 "name": "Nvme5", 00:24:15.137 
"trtype": "tcp", 00:24:15.137 "traddr": "10.0.0.2", 00:24:15.137 "adrfam": "ipv4", 00:24:15.137 "trsvcid": "4420", 00:24:15.137 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:24:15.137 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:24:15.137 "hdgst": false, 00:24:15.137 "ddgst": false 00:24:15.137 }, 00:24:15.137 "method": "bdev_nvme_attach_controller" 00:24:15.137 },{ 00:24:15.137 "params": { 00:24:15.137 "name": "Nvme6", 00:24:15.137 "trtype": "tcp", 00:24:15.137 "traddr": "10.0.0.2", 00:24:15.137 "adrfam": "ipv4", 00:24:15.137 "trsvcid": "4420", 00:24:15.137 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:24:15.137 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:24:15.137 "hdgst": false, 00:24:15.137 "ddgst": false 00:24:15.137 }, 00:24:15.137 "method": "bdev_nvme_attach_controller" 00:24:15.137 },{ 00:24:15.137 "params": { 00:24:15.137 "name": "Nvme7", 00:24:15.137 "trtype": "tcp", 00:24:15.137 "traddr": "10.0.0.2", 00:24:15.137 "adrfam": "ipv4", 00:24:15.137 "trsvcid": "4420", 00:24:15.137 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:24:15.137 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:24:15.137 "hdgst": false, 00:24:15.137 "ddgst": false 00:24:15.137 }, 00:24:15.137 "method": "bdev_nvme_attach_controller" 00:24:15.137 },{ 00:24:15.137 "params": { 00:24:15.137 "name": "Nvme8", 00:24:15.137 "trtype": "tcp", 00:24:15.137 "traddr": "10.0.0.2", 00:24:15.137 "adrfam": "ipv4", 00:24:15.137 "trsvcid": "4420", 00:24:15.137 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:24:15.137 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:24:15.137 "hdgst": false, 00:24:15.137 "ddgst": false 00:24:15.137 }, 00:24:15.137 "method": "bdev_nvme_attach_controller" 00:24:15.137 },{ 00:24:15.137 "params": { 00:24:15.137 "name": "Nvme9", 00:24:15.137 "trtype": "tcp", 00:24:15.137 "traddr": "10.0.0.2", 00:24:15.137 "adrfam": "ipv4", 00:24:15.137 "trsvcid": "4420", 00:24:15.137 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:24:15.137 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:24:15.137 "hdgst": false, 00:24:15.137 "ddgst": 
false 00:24:15.137 }, 00:24:15.137 "method": "bdev_nvme_attach_controller" 00:24:15.137 },{ 00:24:15.137 "params": { 00:24:15.137 "name": "Nvme10", 00:24:15.137 "trtype": "tcp", 00:24:15.137 "traddr": "10.0.0.2", 00:24:15.137 "adrfam": "ipv4", 00:24:15.137 "trsvcid": "4420", 00:24:15.137 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:24:15.137 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:24:15.137 "hdgst": false, 00:24:15.137 "ddgst": false 00:24:15.137 }, 00:24:15.137 "method": "bdev_nvme_attach_controller" 00:24:15.137 }' 00:24:15.137 EAL: No free 2048 kB hugepages reported on node 1 00:24:15.137 [2024-06-07 16:32:41.926971] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:15.398 [2024-06-07 16:32:41.991804] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:24:16.812 16:32:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:24:16.812 16:32:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@863 -- # return 0 00:24:16.812 16:32:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:24:16.812 16:32:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:16.812 16:32:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:24:16.812 16:32:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:16.812 16:32:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@83 -- # kill -9 3184642 00:24:16.812 16:32:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:24:16.812 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 3184642 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:24:16.812 
16:32:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@87 -- # sleep 1 00:24:17.753 16:32:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # kill -0 3184267 00:24:17.753 16:32:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:24:17.753 16:32:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:24:17.753 16:32:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:24:17.753 16:32:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:24:17.753 16:32:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:17.753 16:32:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:17.753 { 00:24:17.753 "params": { 00:24:17.753 "name": "Nvme$subsystem", 00:24:17.753 "trtype": "$TEST_TRANSPORT", 00:24:17.753 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:17.753 "adrfam": "ipv4", 00:24:17.753 "trsvcid": "$NVMF_PORT", 00:24:17.753 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:17.753 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:17.753 "hdgst": ${hdgst:-false}, 00:24:17.753 "ddgst": ${ddgst:-false} 00:24:17.753 }, 00:24:17.753 "method": "bdev_nvme_attach_controller" 00:24:17.753 } 00:24:17.753 EOF 00:24:17.753 )") 00:24:17.753 16:32:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:24:17.753 16:32:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:17.753 16:32:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:17.753 { 00:24:17.753 "params": { 00:24:17.753 "name": "Nvme$subsystem", 00:24:17.753 "trtype": "$TEST_TRANSPORT", 
00:24:17.753 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:17.753 "adrfam": "ipv4", 00:24:17.753 "trsvcid": "$NVMF_PORT", 00:24:17.753 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:17.753 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:17.753 "hdgst": ${hdgst:-false}, 00:24:17.753 "ddgst": ${ddgst:-false} 00:24:17.753 }, 00:24:17.753 "method": "bdev_nvme_attach_controller" 00:24:17.753 } 00:24:17.753 EOF 00:24:17.753 )") 00:24:17.753 16:32:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:24:17.753 16:32:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:17.753 16:32:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:17.753 { 00:24:17.753 "params": { 00:24:17.753 "name": "Nvme$subsystem", 00:24:17.753 "trtype": "$TEST_TRANSPORT", 00:24:17.753 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:17.753 "adrfam": "ipv4", 00:24:17.753 "trsvcid": "$NVMF_PORT", 00:24:17.753 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:17.753 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:17.753 "hdgst": ${hdgst:-false}, 00:24:17.753 "ddgst": ${ddgst:-false} 00:24:17.753 }, 00:24:17.753 "method": "bdev_nvme_attach_controller" 00:24:17.753 } 00:24:17.753 EOF 00:24:17.753 )") 00:24:17.753 16:32:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:24:17.753 16:32:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:17.753 16:32:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:17.754 { 00:24:17.754 "params": { 00:24:17.754 "name": "Nvme$subsystem", 00:24:17.754 "trtype": "$TEST_TRANSPORT", 00:24:17.754 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:17.754 "adrfam": "ipv4", 00:24:17.754 "trsvcid": "$NVMF_PORT", 00:24:17.754 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:17.754 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:24:17.754 "hdgst": ${hdgst:-false}, 00:24:17.754 "ddgst": ${ddgst:-false} 00:24:17.754 }, 00:24:17.754 "method": "bdev_nvme_attach_controller" 00:24:17.754 } 00:24:17.754 EOF 00:24:17.754 )") 00:24:17.754 16:32:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:24:17.754 16:32:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:17.754 16:32:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:17.754 { 00:24:17.754 "params": { 00:24:17.754 "name": "Nvme$subsystem", 00:24:17.754 "trtype": "$TEST_TRANSPORT", 00:24:17.754 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:17.754 "adrfam": "ipv4", 00:24:17.754 "trsvcid": "$NVMF_PORT", 00:24:17.754 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:17.754 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:17.754 "hdgst": ${hdgst:-false}, 00:24:17.754 "ddgst": ${ddgst:-false} 00:24:17.754 }, 00:24:17.754 "method": "bdev_nvme_attach_controller" 00:24:17.754 } 00:24:17.754 EOF 00:24:17.754 )") 00:24:17.754 16:32:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:24:17.754 16:32:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:17.754 16:32:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:17.754 { 00:24:17.754 "params": { 00:24:17.754 "name": "Nvme$subsystem", 00:24:17.754 "trtype": "$TEST_TRANSPORT", 00:24:17.754 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:17.754 "adrfam": "ipv4", 00:24:17.754 "trsvcid": "$NVMF_PORT", 00:24:17.754 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:17.754 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:17.754 "hdgst": ${hdgst:-false}, 00:24:17.754 "ddgst": ${ddgst:-false} 00:24:17.754 }, 00:24:17.754 "method": "bdev_nvme_attach_controller" 00:24:17.754 } 00:24:17.754 EOF 00:24:17.754 )") 
00:24:17.754 16:32:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:24:17.754 16:32:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:17.754 16:32:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:17.754 { 00:24:17.754 "params": { 00:24:17.754 "name": "Nvme$subsystem", 00:24:17.754 "trtype": "$TEST_TRANSPORT", 00:24:17.754 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:17.754 "adrfam": "ipv4", 00:24:17.754 "trsvcid": "$NVMF_PORT", 00:24:17.754 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:17.754 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:17.754 "hdgst": ${hdgst:-false}, 00:24:17.754 "ddgst": ${ddgst:-false} 00:24:17.754 }, 00:24:17.754 "method": "bdev_nvme_attach_controller" 00:24:17.754 } 00:24:17.754 EOF 00:24:17.754 )") 00:24:17.754 16:32:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:24:17.754 16:32:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:17.754 16:32:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:17.754 { 00:24:17.754 "params": { 00:24:17.754 "name": "Nvme$subsystem", 00:24:17.754 "trtype": "$TEST_TRANSPORT", 00:24:17.754 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:17.754 "adrfam": "ipv4", 00:24:17.754 "trsvcid": "$NVMF_PORT", 00:24:17.754 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:17.754 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:17.754 "hdgst": ${hdgst:-false}, 00:24:17.754 "ddgst": ${ddgst:-false} 00:24:17.754 }, 00:24:17.754 "method": "bdev_nvme_attach_controller" 00:24:17.754 } 00:24:17.754 EOF 00:24:17.754 )") 00:24:17.754 [2024-06-07 16:32:44.411359] Starting SPDK v24.09-pre git sha1 5a57befde / DPDK 24.03.0 initialization... 
00:24:17.754 [2024-06-07 16:32:44.411433] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3185038 ] 00:24:17.754 16:32:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:24:17.754 16:32:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:17.754 16:32:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:17.754 { 00:24:17.754 "params": { 00:24:17.754 "name": "Nvme$subsystem", 00:24:17.754 "trtype": "$TEST_TRANSPORT", 00:24:17.754 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:17.754 "adrfam": "ipv4", 00:24:17.754 "trsvcid": "$NVMF_PORT", 00:24:17.754 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:17.754 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:17.754 "hdgst": ${hdgst:-false}, 00:24:17.754 "ddgst": ${ddgst:-false} 00:24:17.754 }, 00:24:17.754 "method": "bdev_nvme_attach_controller" 00:24:17.754 } 00:24:17.754 EOF 00:24:17.754 )") 00:24:17.754 16:32:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:24:17.754 16:32:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:17.754 16:32:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:17.754 { 00:24:17.754 "params": { 00:24:17.754 "name": "Nvme$subsystem", 00:24:17.754 "trtype": "$TEST_TRANSPORT", 00:24:17.754 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:17.754 "adrfam": "ipv4", 00:24:17.754 "trsvcid": "$NVMF_PORT", 00:24:17.754 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:17.754 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:17.754 "hdgst": ${hdgst:-false}, 00:24:17.754 "ddgst": ${ddgst:-false} 00:24:17.754 }, 00:24:17.754 "method": "bdev_nvme_attach_controller" 
00:24:17.754 } 00:24:17.754 EOF 00:24:17.754 )") 00:24:17.754 16:32:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:24:17.754 16:32:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 00:24:17.754 16:32:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:24:17.754 16:32:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:24:17.754 "params": { 00:24:17.754 "name": "Nvme1", 00:24:17.754 "trtype": "tcp", 00:24:17.754 "traddr": "10.0.0.2", 00:24:17.754 "adrfam": "ipv4", 00:24:17.754 "trsvcid": "4420", 00:24:17.754 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:17.754 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:17.754 "hdgst": false, 00:24:17.754 "ddgst": false 00:24:17.754 }, 00:24:17.754 "method": "bdev_nvme_attach_controller" 00:24:17.754 },{ 00:24:17.754 "params": { 00:24:17.754 "name": "Nvme2", 00:24:17.754 "trtype": "tcp", 00:24:17.754 "traddr": "10.0.0.2", 00:24:17.754 "adrfam": "ipv4", 00:24:17.754 "trsvcid": "4420", 00:24:17.754 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:24:17.754 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:24:17.754 "hdgst": false, 00:24:17.754 "ddgst": false 00:24:17.754 }, 00:24:17.754 "method": "bdev_nvme_attach_controller" 00:24:17.754 },{ 00:24:17.754 "params": { 00:24:17.754 "name": "Nvme3", 00:24:17.754 "trtype": "tcp", 00:24:17.754 "traddr": "10.0.0.2", 00:24:17.754 "adrfam": "ipv4", 00:24:17.754 "trsvcid": "4420", 00:24:17.754 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:24:17.755 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:24:17.755 "hdgst": false, 00:24:17.755 "ddgst": false 00:24:17.755 }, 00:24:17.755 "method": "bdev_nvme_attach_controller" 00:24:17.755 },{ 00:24:17.755 "params": { 00:24:17.755 "name": "Nvme4", 00:24:17.755 "trtype": "tcp", 00:24:17.755 "traddr": "10.0.0.2", 00:24:17.755 "adrfam": "ipv4", 00:24:17.755 "trsvcid": "4420", 00:24:17.755 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:24:17.755 "hostnqn": 
"nqn.2016-06.io.spdk:host4", 00:24:17.755 "hdgst": false, 00:24:17.755 "ddgst": false 00:24:17.755 }, 00:24:17.755 "method": "bdev_nvme_attach_controller" 00:24:17.755 },{ 00:24:17.755 "params": { 00:24:17.755 "name": "Nvme5", 00:24:17.755 "trtype": "tcp", 00:24:17.755 "traddr": "10.0.0.2", 00:24:17.755 "adrfam": "ipv4", 00:24:17.755 "trsvcid": "4420", 00:24:17.755 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:24:17.755 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:24:17.755 "hdgst": false, 00:24:17.755 "ddgst": false 00:24:17.755 }, 00:24:17.755 "method": "bdev_nvme_attach_controller" 00:24:17.755 },{ 00:24:17.755 "params": { 00:24:17.755 "name": "Nvme6", 00:24:17.755 "trtype": "tcp", 00:24:17.755 "traddr": "10.0.0.2", 00:24:17.755 "adrfam": "ipv4", 00:24:17.755 "trsvcid": "4420", 00:24:17.755 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:24:17.755 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:24:17.755 "hdgst": false, 00:24:17.755 "ddgst": false 00:24:17.755 }, 00:24:17.755 "method": "bdev_nvme_attach_controller" 00:24:17.755 },{ 00:24:17.755 "params": { 00:24:17.755 "name": "Nvme7", 00:24:17.755 "trtype": "tcp", 00:24:17.755 "traddr": "10.0.0.2", 00:24:17.755 "adrfam": "ipv4", 00:24:17.755 "trsvcid": "4420", 00:24:17.755 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:24:17.755 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:24:17.755 "hdgst": false, 00:24:17.755 "ddgst": false 00:24:17.755 }, 00:24:17.755 "method": "bdev_nvme_attach_controller" 00:24:17.755 },{ 00:24:17.755 "params": { 00:24:17.755 "name": "Nvme8", 00:24:17.755 "trtype": "tcp", 00:24:17.755 "traddr": "10.0.0.2", 00:24:17.755 "adrfam": "ipv4", 00:24:17.755 "trsvcid": "4420", 00:24:17.755 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:24:17.755 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:24:17.755 "hdgst": false, 00:24:17.755 "ddgst": false 00:24:17.755 }, 00:24:17.755 "method": "bdev_nvme_attach_controller" 00:24:17.755 },{ 00:24:17.755 "params": { 00:24:17.755 "name": "Nvme9", 00:24:17.755 "trtype": "tcp", 00:24:17.755 
"traddr": "10.0.0.2", 00:24:17.755 "adrfam": "ipv4", 00:24:17.755 "trsvcid": "4420", 00:24:17.755 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:24:17.755 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:24:17.755 "hdgst": false, 00:24:17.755 "ddgst": false 00:24:17.755 }, 00:24:17.755 "method": "bdev_nvme_attach_controller" 00:24:17.755 },{ 00:24:17.755 "params": { 00:24:17.755 "name": "Nvme10", 00:24:17.755 "trtype": "tcp", 00:24:17.755 "traddr": "10.0.0.2", 00:24:17.755 "adrfam": "ipv4", 00:24:17.755 "trsvcid": "4420", 00:24:17.755 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:24:17.755 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:24:17.755 "hdgst": false, 00:24:17.755 "ddgst": false 00:24:17.755 }, 00:24:17.755 "method": "bdev_nvme_attach_controller" 00:24:17.755 }' 00:24:17.755 EAL: No free 2048 kB hugepages reported on node 1 00:24:17.755 [2024-06-07 16:32:44.474887] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:17.755 [2024-06-07 16:32:44.542134] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:24:19.138 Running I/O for 1 seconds... 
00:24:20.523
00:24:20.523 Latency(us)
00:24:20.523 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:20.523 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:20.523 Verification LBA range: start 0x0 length 0x400
00:24:20.523 Nvme1n1 : 1.15 222.57 13.91 0.00 0.00 284662.83 21080.75 270882.13
00:24:20.523 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:20.523 Verification LBA range: start 0x0 length 0x400
00:24:20.523 Nvme2n1 : 1.14 223.61 13.98 0.00 0.00 278455.89 20425.39 242920.11
00:24:20.524 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:20.524 Verification LBA range: start 0x0 length 0x400
00:24:20.524 Nvme3n1 : 1.14 225.49 14.09 0.00 0.00 271143.47 21845.33 244667.73
00:24:20.524 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:20.524 Verification LBA range: start 0x0 length 0x400
00:24:20.524 Nvme4n1 : 1.19 269.75 16.86 0.00 0.00 223256.06 16930.13 249910.61
00:24:20.524 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:20.524 Verification LBA range: start 0x0 length 0x400
00:24:20.524 Nvme5n1 : 1.16 221.56 13.85 0.00 0.00 266512.21 22391.47 249910.61
00:24:20.524 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:20.524 Verification LBA range: start 0x0 length 0x400
00:24:20.524 Nvme6n1 : 1.13 226.41 14.15 0.00 0.00 255677.44 38884.69 246415.36
00:24:20.524 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:20.524 Verification LBA range: start 0x0 length 0x400
00:24:20.524 Nvme7n1 : 1.19 268.04 16.75 0.00 0.00 213110.61 19988.48 230686.72
00:24:20.524 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:20.524 Verification LBA range: start 0x0 length 0x400
00:24:20.524 Nvme8n1 : 1.19 272.89 17.06 0.00 0.00 203903.18 5297.49 200977.07
00:24:20.524 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:20.524 Verification LBA range: start 0x0 length 0x400
00:24:20.524 Nvme9n1 : 1.16 221.03 13.81 0.00 0.00 247837.87 19660.80 253405.87
00:24:20.524 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:20.524 Verification LBA range: start 0x0 length 0x400
00:24:20.524 Nvme10n1 : 1.22 263.34 16.46 0.00 0.00 205823.57 13434.88 274377.39
00:24:20.524 ===================================================================================================================
00:24:20.524 Total : 2414.69 150.92 0.00 0.00 241937.47 5297.49 274377.39
00:24:20.524 16:32:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@94 -- # stoptarget
00:24:20.524 16:32:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state
00:24:20.524 16:32:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:24:20.524 16:32:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:24:20.524 16:32:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@45 -- # nvmftestfini
00:24:20.524 16:32:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@488 -- # nvmfcleanup
00:24:20.524 16:32:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # sync
00:24:20.524 16:32:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:24:20.524 16:32:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@120 -- # set +e
00:24:20.524 16:32:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # for i in {1..20}
00:24:20.524 16:32:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:24:20.524 rmmod nvme_tcp
00:24:20.524 rmmod nvme_fabrics
00:24:20.524 rmmod
nvme_keyring 00:24:20.524 16:32:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:20.524 16:32:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set -e 00:24:20.524 16:32:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # return 0 00:24:20.524 16:32:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@489 -- # '[' -n 3184267 ']' 00:24:20.524 16:32:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@490 -- # killprocess 3184267 00:24:20.524 16:32:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@949 -- # '[' -z 3184267 ']' 00:24:20.524 16:32:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@953 -- # kill -0 3184267 00:24:20.524 16:32:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # uname 00:24:20.524 16:32:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:24:20.524 16:32:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3184267 00:24:20.524 16:32:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:24:20.524 16:32:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:24:20.524 16:32:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3184267' 00:24:20.524 killing process with pid 3184267 00:24:20.524 16:32:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@968 -- # kill 3184267 00:24:20.524 16:32:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@973 -- # wait 3184267 00:24:20.785 16:32:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:20.785 16:32:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:20.785 16:32:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:20.785 16:32:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:20.785 16:32:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:20.785 16:32:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:20.785 16:32:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:20.785 16:32:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:23.335 16:32:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:23.335 00:24:23.335 real 0m16.415s 00:24:23.335 user 0m33.362s 00:24:23.335 sys 0m6.589s 00:24:23.335 16:32:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1125 -- # xtrace_disable 00:24:23.335 16:32:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:24:23.335 ************************************ 00:24:23.335 END TEST nvmf_shutdown_tc1 00:24:23.335 ************************************ 00:24:23.335 16:32:49 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:24:23.335 16:32:49 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:24:23.335 16:32:49 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1106 -- # xtrace_disable 00:24:23.335 16:32:49 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:24:23.335 ************************************ 00:24:23.335 START TEST nvmf_shutdown_tc2 00:24:23.335 ************************************ 00:24:23.335 16:32:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1124 -- # 
nvmf_shutdown_tc2 00:24:23.335 16:32:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@99 -- # starttarget 00:24:23.335 16:32:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@15 -- # nvmftestinit 00:24:23.335 16:32:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:23.335 16:32:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:23.335 16:32:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:23.335 16:32:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:23.335 16:32:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:23.335 16:32:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:23.335 16:32:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:23.335 16:32:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:23.335 16:32:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:23.335 16:32:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:23.335 16:32:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@285 -- # xtrace_disable 00:24:23.335 16:32:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:23.335 16:32:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:23.335 16:32:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # pci_devs=() 00:24:23.335 16:32:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:23.335 16:32:49 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:23.335 16:32:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:23.335 16:32:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:23.335 16:32:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:23.335 16:32:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # net_devs=() 00:24:23.335 16:32:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:23.335 16:32:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # e810=() 00:24:23.335 16:32:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # local -ga e810 00:24:23.335 16:32:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # x722=() 00:24:23.335 16:32:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # local -ga x722 00:24:23.335 16:32:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # mlx=() 00:24:23.335 16:32:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # local -ga mlx 00:24:23.335 16:32:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:23.335 16:32:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:23.335 16:32:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:23.335 16:32:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:23.335 16:32:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:23.335 16:32:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@310 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:23.335 16:32:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:23.335 16:32:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:23.335 16:32:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:23.335 16:32:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:23.335 16:32:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:23.335 16:32:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:23.335 16:32:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:23.335 16:32:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:23.335 16:32:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:23.335 16:32:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:23.335 16:32:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:23.335 16:32:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:23.335 16:32:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:24:23.335 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:24:23.336 16:32:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:23.336 16:32:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:23.336 16:32:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:24:23.336 16:32:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:23.336 16:32:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:23.336 16:32:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:23.336 16:32:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:24:23.336 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:24:23.336 16:32:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:23.336 16:32:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:23.336 16:32:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:23.336 16:32:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:23.336 16:32:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:23.336 16:32:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:23.336 16:32:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:23.336 16:32:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:23.336 16:32:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:23.336 16:32:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:23.336 16:32:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:23.336 16:32:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:23.336 16:32:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@390 -- # [[ up == up ]] 00:24:23.336 16:32:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:23.336 16:32:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:23.336 16:32:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:24:23.336 Found net devices under 0000:4b:00.0: cvl_0_0 00:24:23.336 16:32:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:23.336 16:32:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:23.336 16:32:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:23.336 16:32:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:23.336 16:32:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:23.336 16:32:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:23.336 16:32:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:23.336 16:32:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:23.336 16:32:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:24:23.336 Found net devices under 0000:4b:00.1: cvl_0_1 00:24:23.336 16:32:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:23.336 16:32:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:23.336 16:32:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # is_hw=yes 00:24:23.336 16:32:49 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:23.336 16:32:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:23.336 16:32:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:23.336 16:32:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:23.336 16:32:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:23.336 16:32:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:23.336 16:32:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:23.336 16:32:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:23.336 16:32:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:23.336 16:32:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:23.336 16:32:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:23.336 16:32:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:23.336 16:32:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:23.336 16:32:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:23.336 16:32:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:23.336 16:32:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:23.336 16:32:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev 
cvl_0_1 00:24:23.336 16:32:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:23.336 16:32:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:23.336 16:32:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:23.336 16:32:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:23.336 16:32:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:23.336 16:32:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:23.336 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:23.336 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.692 ms 00:24:23.336 00:24:23.336 --- 10.0.0.2 ping statistics --- 00:24:23.336 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:23.336 rtt min/avg/max/mdev = 0.692/0.692/0.692/0.000 ms 00:24:23.336 16:32:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:23.336 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:23.336 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.247 ms 00:24:23.336 00:24:23.336 --- 10.0.0.1 ping statistics --- 00:24:23.336 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:23.336 rtt min/avg/max/mdev = 0.247/0.247/0.247/0.000 ms 00:24:23.336 16:32:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:23.336 16:32:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # return 0 00:24:23.336 16:32:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:23.336 16:32:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:23.336 16:32:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:23.336 16:32:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:23.336 16:32:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:23.336 16:32:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:23.336 16:32:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:23.336 16:32:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:24:23.336 16:32:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:23.336 16:32:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@723 -- # xtrace_disable 00:24:23.336 16:32:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:23.336 16:32:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@481 -- # nvmfpid=3186299 00:24:23.336 16:32:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # waitforlisten 3186299 00:24:23.336 16:32:50 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@830 -- # '[' -z 3186299 ']' 00:24:23.336 16:32:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:24:23.336 16:32:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:23.336 16:32:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # local max_retries=100 00:24:23.336 16:32:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:23.336 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:23.336 16:32:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # xtrace_disable 00:24:23.336 16:32:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:23.336 [2024-06-07 16:32:50.109473] Starting SPDK v24.09-pre git sha1 5a57befde / DPDK 24.03.0 initialization... 00:24:23.336 [2024-06-07 16:32:50.109543] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:23.336 EAL: No free 2048 kB hugepages reported on node 1 00:24:23.597 [2024-06-07 16:32:50.195141] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:23.597 [2024-06-07 16:32:50.254777] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:23.597 [2024-06-07 16:32:50.254809] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:24:23.597 [2024-06-07 16:32:50.254815] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:23.597 [2024-06-07 16:32:50.254820] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:23.597 [2024-06-07 16:32:50.254824] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:23.597 [2024-06-07 16:32:50.254937] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 2 00:24:23.597 [2024-06-07 16:32:50.255096] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 3 00:24:23.597 [2024-06-07 16:32:50.255240] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:24:23.597 [2024-06-07 16:32:50.255242] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 4 00:24:24.169 16:32:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:24:24.169 16:32:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@863 -- # return 0 00:24:24.169 16:32:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:24.169 16:32:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@729 -- # xtrace_disable 00:24:24.169 16:32:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:24.169 16:32:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:24.169 16:32:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:24.169 16:32:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:24.169 16:32:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:24.169 [2024-06-07 16:32:50.925969] tcp.c: 672:nvmf_tcp_create: 
*NOTICE*: *** TCP Transport Init *** 00:24:24.169 16:32:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:24.169 16:32:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:24:24.169 16:32:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:24:24.169 16:32:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@723 -- # xtrace_disable 00:24:24.169 16:32:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:24.169 16:32:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:24:24.169 16:32:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:24.169 16:32:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:24:24.169 16:32:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:24.169 16:32:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:24:24.169 16:32:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:24.169 16:32:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:24:24.169 16:32:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:24.169 16:32:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:24:24.169 16:32:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:24.169 16:32:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:24:24.169 16:32:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for 
i in "${num_subsystems[@]}" 00:24:24.169 16:32:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:24:24.169 16:32:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:24.169 16:32:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:24:24.169 16:32:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:24.169 16:32:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:24:24.169 16:32:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:24.169 16:32:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:24:24.169 16:32:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:24.169 16:32:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:24:24.169 16:32:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@35 -- # rpc_cmd 00:24:24.169 16:32:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:24.169 16:32:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:24.169 Malloc1 00:24:24.429 [2024-06-07 16:32:51.024647] tcp.c: 982:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:24.429 Malloc2 00:24:24.429 Malloc3 00:24:24.429 Malloc4 00:24:24.429 Malloc5 00:24:24.429 Malloc6 00:24:24.429 Malloc7 00:24:24.429 Malloc8 00:24:24.691 Malloc9 00:24:24.691 Malloc10 00:24:24.691 16:32:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:24.691 16:32:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:24:24.691 16:32:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
common/autotest_common.sh@729 -- # xtrace_disable 00:24:24.691 16:32:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:24.691 16:32:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # perfpid=3186524 00:24:24.691 16:32:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # waitforlisten 3186524 /var/tmp/bdevperf.sock 00:24:24.691 16:32:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@830 -- # '[' -z 3186524 ']' 00:24:24.691 16:32:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:24.691 16:32:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # local max_retries=100 00:24:24.691 16:32:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:24.691 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:24:24.691 16:32:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:24:24.691 16:32:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # xtrace_disable 00:24:24.691 16:32:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:24:24.691 16:32:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:24.691 16:32:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # config=() 00:24:24.691 16:32:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # local subsystem config 00:24:24.691 16:32:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:24.691 16:32:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:24.691 { 00:24:24.691 "params": { 00:24:24.691 "name": "Nvme$subsystem", 00:24:24.691 "trtype": "$TEST_TRANSPORT", 00:24:24.691 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:24.691 "adrfam": "ipv4", 00:24:24.691 "trsvcid": "$NVMF_PORT", 00:24:24.691 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:24.691 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:24.691 "hdgst": ${hdgst:-false}, 00:24:24.691 "ddgst": ${ddgst:-false} 00:24:24.691 }, 00:24:24.691 "method": "bdev_nvme_attach_controller" 00:24:24.691 } 00:24:24.691 EOF 00:24:24.691 )") 00:24:24.691 16:32:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:24:24.691 16:32:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:24.691 16:32:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:24.691 { 00:24:24.691 "params": { 00:24:24.691 "name": 
"Nvme$subsystem", 00:24:24.691 "trtype": "$TEST_TRANSPORT", 00:24:24.691 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:24.691 "adrfam": "ipv4", 00:24:24.691 "trsvcid": "$NVMF_PORT", 00:24:24.691 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:24.691 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:24.691 "hdgst": ${hdgst:-false}, 00:24:24.691 "ddgst": ${ddgst:-false} 00:24:24.691 }, 00:24:24.691 "method": "bdev_nvme_attach_controller" 00:24:24.691 } 00:24:24.691 EOF 00:24:24.691 )") 00:24:24.691 16:32:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:24:24.691 16:32:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:24.691 16:32:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:24.691 { 00:24:24.691 "params": { 00:24:24.691 "name": "Nvme$subsystem", 00:24:24.691 "trtype": "$TEST_TRANSPORT", 00:24:24.691 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:24.691 "adrfam": "ipv4", 00:24:24.691 "trsvcid": "$NVMF_PORT", 00:24:24.691 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:24.691 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:24.691 "hdgst": ${hdgst:-false}, 00:24:24.691 "ddgst": ${ddgst:-false} 00:24:24.691 }, 00:24:24.691 "method": "bdev_nvme_attach_controller" 00:24:24.691 } 00:24:24.691 EOF 00:24:24.691 )") 00:24:24.691 16:32:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:24:24.691 16:32:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:24.691 16:32:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:24.691 { 00:24:24.691 "params": { 00:24:24.691 "name": "Nvme$subsystem", 00:24:24.691 "trtype": "$TEST_TRANSPORT", 00:24:24.691 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:24.691 "adrfam": "ipv4", 00:24:24.691 "trsvcid": "$NVMF_PORT", 00:24:24.691 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:24:24.691 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:24.691 "hdgst": ${hdgst:-false}, 00:24:24.691 "ddgst": ${ddgst:-false} 00:24:24.691 }, 00:24:24.691 "method": "bdev_nvme_attach_controller" 00:24:24.691 } 00:24:24.691 EOF 00:24:24.691 )") 00:24:24.691 16:32:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:24:24.691 16:32:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:24.691 16:32:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:24.691 { 00:24:24.691 "params": { 00:24:24.691 "name": "Nvme$subsystem", 00:24:24.691 "trtype": "$TEST_TRANSPORT", 00:24:24.691 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:24.691 "adrfam": "ipv4", 00:24:24.691 "trsvcid": "$NVMF_PORT", 00:24:24.691 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:24.691 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:24.691 "hdgst": ${hdgst:-false}, 00:24:24.691 "ddgst": ${ddgst:-false} 00:24:24.691 }, 00:24:24.691 "method": "bdev_nvme_attach_controller" 00:24:24.691 } 00:24:24.691 EOF 00:24:24.691 )") 00:24:24.691 16:32:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:24:24.691 16:32:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:24.691 16:32:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:24.691 { 00:24:24.691 "params": { 00:24:24.691 "name": "Nvme$subsystem", 00:24:24.691 "trtype": "$TEST_TRANSPORT", 00:24:24.691 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:24.691 "adrfam": "ipv4", 00:24:24.691 "trsvcid": "$NVMF_PORT", 00:24:24.691 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:24.691 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:24.691 "hdgst": ${hdgst:-false}, 00:24:24.691 "ddgst": ${ddgst:-false} 00:24:24.691 }, 00:24:24.692 "method": 
"bdev_nvme_attach_controller" 00:24:24.692 } 00:24:24.692 EOF 00:24:24.692 )") 00:24:24.692 16:32:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:24:24.692 16:32:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:24.692 [2024-06-07 16:32:51.466621] Starting SPDK v24.09-pre git sha1 5a57befde / DPDK 24.03.0 initialization... 00:24:24.692 [2024-06-07 16:32:51.466673] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3186524 ] 00:24:24.692 16:32:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:24.692 { 00:24:24.692 "params": { 00:24:24.692 "name": "Nvme$subsystem", 00:24:24.692 "trtype": "$TEST_TRANSPORT", 00:24:24.692 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:24.692 "adrfam": "ipv4", 00:24:24.692 "trsvcid": "$NVMF_PORT", 00:24:24.692 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:24.692 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:24.692 "hdgst": ${hdgst:-false}, 00:24:24.692 "ddgst": ${ddgst:-false} 00:24:24.692 }, 00:24:24.692 "method": "bdev_nvme_attach_controller" 00:24:24.692 } 00:24:24.692 EOF 00:24:24.692 )") 00:24:24.692 16:32:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:24:24.692 16:32:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:24.692 16:32:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:24.692 { 00:24:24.692 "params": { 00:24:24.692 "name": "Nvme$subsystem", 00:24:24.692 "trtype": "$TEST_TRANSPORT", 00:24:24.692 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:24.692 "adrfam": "ipv4", 00:24:24.692 "trsvcid": "$NVMF_PORT", 00:24:24.692 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:24.692 
"hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:24.692 "hdgst": ${hdgst:-false}, 00:24:24.692 "ddgst": ${ddgst:-false} 00:24:24.692 }, 00:24:24.692 "method": "bdev_nvme_attach_controller" 00:24:24.692 } 00:24:24.692 EOF 00:24:24.692 )") 00:24:24.692 16:32:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:24:24.692 16:32:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:24.692 16:32:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:24.692 { 00:24:24.692 "params": { 00:24:24.692 "name": "Nvme$subsystem", 00:24:24.692 "trtype": "$TEST_TRANSPORT", 00:24:24.692 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:24.692 "adrfam": "ipv4", 00:24:24.692 "trsvcid": "$NVMF_PORT", 00:24:24.692 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:24.692 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:24.692 "hdgst": ${hdgst:-false}, 00:24:24.692 "ddgst": ${ddgst:-false} 00:24:24.692 }, 00:24:24.692 "method": "bdev_nvme_attach_controller" 00:24:24.692 } 00:24:24.692 EOF 00:24:24.692 )") 00:24:24.692 16:32:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:24:24.692 16:32:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:24.692 16:32:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:24.692 { 00:24:24.692 "params": { 00:24:24.692 "name": "Nvme$subsystem", 00:24:24.692 "trtype": "$TEST_TRANSPORT", 00:24:24.692 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:24.692 "adrfam": "ipv4", 00:24:24.692 "trsvcid": "$NVMF_PORT", 00:24:24.692 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:24.692 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:24.692 "hdgst": ${hdgst:-false}, 00:24:24.692 "ddgst": ${ddgst:-false} 00:24:24.692 }, 00:24:24.692 "method": "bdev_nvme_attach_controller" 00:24:24.692 } 00:24:24.692 EOF 
00:24:24.692 )") 00:24:24.692 16:32:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:24:24.692 EAL: No free 2048 kB hugepages reported on node 1 00:24:24.692 16:32:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@556 -- # jq . 00:24:24.692 16:32:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@557 -- # IFS=, 00:24:24.692 16:32:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:24:24.692 "params": { 00:24:24.692 "name": "Nvme1", 00:24:24.692 "trtype": "tcp", 00:24:24.692 "traddr": "10.0.0.2", 00:24:24.692 "adrfam": "ipv4", 00:24:24.692 "trsvcid": "4420", 00:24:24.692 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:24.692 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:24.692 "hdgst": false, 00:24:24.692 "ddgst": false 00:24:24.692 }, 00:24:24.692 "method": "bdev_nvme_attach_controller" 00:24:24.692 },{ 00:24:24.692 "params": { 00:24:24.692 "name": "Nvme2", 00:24:24.692 "trtype": "tcp", 00:24:24.692 "traddr": "10.0.0.2", 00:24:24.692 "adrfam": "ipv4", 00:24:24.692 "trsvcid": "4420", 00:24:24.692 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:24:24.692 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:24:24.692 "hdgst": false, 00:24:24.692 "ddgst": false 00:24:24.692 }, 00:24:24.692 "method": "bdev_nvme_attach_controller" 00:24:24.692 },{ 00:24:24.692 "params": { 00:24:24.692 "name": "Nvme3", 00:24:24.692 "trtype": "tcp", 00:24:24.692 "traddr": "10.0.0.2", 00:24:24.692 "adrfam": "ipv4", 00:24:24.692 "trsvcid": "4420", 00:24:24.692 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:24:24.692 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:24:24.692 "hdgst": false, 00:24:24.692 "ddgst": false 00:24:24.692 }, 00:24:24.692 "method": "bdev_nvme_attach_controller" 00:24:24.692 },{ 00:24:24.692 "params": { 00:24:24.692 "name": "Nvme4", 00:24:24.692 "trtype": "tcp", 00:24:24.692 "traddr": "10.0.0.2", 00:24:24.692 "adrfam": "ipv4", 00:24:24.692 "trsvcid": "4420", 00:24:24.692 "subnqn": 
"nqn.2016-06.io.spdk:cnode4", 00:24:24.692 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:24:24.692 "hdgst": false, 00:24:24.692 "ddgst": false 00:24:24.692 }, 00:24:24.692 "method": "bdev_nvme_attach_controller" 00:24:24.692 },{ 00:24:24.692 "params": { 00:24:24.692 "name": "Nvme5", 00:24:24.692 "trtype": "tcp", 00:24:24.692 "traddr": "10.0.0.2", 00:24:24.692 "adrfam": "ipv4", 00:24:24.692 "trsvcid": "4420", 00:24:24.692 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:24:24.692 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:24:24.692 "hdgst": false, 00:24:24.692 "ddgst": false 00:24:24.692 }, 00:24:24.692 "method": "bdev_nvme_attach_controller" 00:24:24.692 },{ 00:24:24.692 "params": { 00:24:24.692 "name": "Nvme6", 00:24:24.692 "trtype": "tcp", 00:24:24.692 "traddr": "10.0.0.2", 00:24:24.692 "adrfam": "ipv4", 00:24:24.692 "trsvcid": "4420", 00:24:24.692 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:24:24.692 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:24:24.692 "hdgst": false, 00:24:24.692 "ddgst": false 00:24:24.692 }, 00:24:24.692 "method": "bdev_nvme_attach_controller" 00:24:24.692 },{ 00:24:24.692 "params": { 00:24:24.692 "name": "Nvme7", 00:24:24.692 "trtype": "tcp", 00:24:24.692 "traddr": "10.0.0.2", 00:24:24.692 "adrfam": "ipv4", 00:24:24.692 "trsvcid": "4420", 00:24:24.692 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:24:24.692 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:24:24.692 "hdgst": false, 00:24:24.692 "ddgst": false 00:24:24.692 }, 00:24:24.692 "method": "bdev_nvme_attach_controller" 00:24:24.692 },{ 00:24:24.692 "params": { 00:24:24.692 "name": "Nvme8", 00:24:24.692 "trtype": "tcp", 00:24:24.692 "traddr": "10.0.0.2", 00:24:24.692 "adrfam": "ipv4", 00:24:24.692 "trsvcid": "4420", 00:24:24.692 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:24:24.692 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:24:24.692 "hdgst": false, 00:24:24.692 "ddgst": false 00:24:24.692 }, 00:24:24.692 "method": "bdev_nvme_attach_controller" 00:24:24.692 },{ 00:24:24.692 "params": { 00:24:24.692 "name": 
"Nvme9", 00:24:24.692 "trtype": "tcp", 00:24:24.692 "traddr": "10.0.0.2", 00:24:24.692 "adrfam": "ipv4", 00:24:24.692 "trsvcid": "4420", 00:24:24.692 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:24:24.692 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:24:24.692 "hdgst": false, 00:24:24.692 "ddgst": false 00:24:24.692 }, 00:24:24.692 "method": "bdev_nvme_attach_controller" 00:24:24.692 },{ 00:24:24.692 "params": { 00:24:24.692 "name": "Nvme10", 00:24:24.692 "trtype": "tcp", 00:24:24.692 "traddr": "10.0.0.2", 00:24:24.692 "adrfam": "ipv4", 00:24:24.692 "trsvcid": "4420", 00:24:24.692 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:24:24.692 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:24:24.692 "hdgst": false, 00:24:24.692 "ddgst": false 00:24:24.692 }, 00:24:24.692 "method": "bdev_nvme_attach_controller" 00:24:24.692 }' 00:24:24.692 [2024-06-07 16:32:51.527117] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:24.956 [2024-06-07 16:32:51.591964] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:24:26.341 Running I/O for 10 seconds... 
00:24:26.341 16:32:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:24:26.341 16:32:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@863 -- # return 0 00:24:26.341 16:32:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:24:26.341 16:32:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:26.341 16:32:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:26.341 16:32:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:26.341 16:32:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@107 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:24:26.341 16:32:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:24:26.341 16:32:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:24:26.341 16:32:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@57 -- # local ret=1 00:24:26.341 16:32:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local i 00:24:26.341 16:32:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:24:26.341 16:32:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:24:26.341 16:32:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:24:26.341 16:32:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:24:26.341 16:32:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:26.341 16:32:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set 
+x 00:24:26.341 16:32:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:26.601 16:32:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=3 00:24:26.601 16:32:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:24:26.601 16:32:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:24:26.862 16:32:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:24:26.862 16:32:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:24:26.862 16:32:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:24:26.862 16:32:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:24:26.862 16:32:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:26.862 16:32:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:26.862 16:32:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:26.862 16:32:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=67 00:24:26.862 16:32:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:24:26.862 16:32:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:24:27.123 16:32:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:24:27.123 16:32:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:24:27.123 16:32:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:24:27.123 16:32:53 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:24:27.123 16:32:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:27.123 16:32:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:27.123 16:32:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:27.123 16:32:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=131 00:24:27.123 16:32:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:24:27.123 16:32:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # ret=0 00:24:27.123 16:32:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # break 00:24:27.123 16:32:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@69 -- # return 0 00:24:27.123 16:32:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@110 -- # killprocess 3186524 00:24:27.123 16:32:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@949 -- # '[' -z 3186524 ']' 00:24:27.123 16:32:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # kill -0 3186524 00:24:27.123 16:32:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # uname 00:24:27.123 16:32:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:24:27.123 16:32:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3186524 00:24:27.123 16:32:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:24:27.123 16:32:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:24:27.123 16:32:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
common/autotest_common.sh@967 -- # echo 'killing process with pid 3186524' 00:24:27.123 killing process with pid 3186524 00:24:27.123 16:32:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@968 -- # kill 3186524 00:24:27.123 16:32:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # wait 3186524 00:24:27.123 Received shutdown signal, test time was about 0.963281 seconds 00:24:27.123 00:24:27.123 Latency(us) 00:24:27.123 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:27.123 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:27.124 Verification LBA range: start 0x0 length 0x400 00:24:27.124 Nvme1n1 : 0.93 217.21 13.58 0.00 0.00 288161.74 6171.31 227191.47 00:24:27.124 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:27.124 Verification LBA range: start 0x0 length 0x400 00:24:27.124 Nvme2n1 : 0.94 215.89 13.49 0.00 0.00 283790.95 6553.60 230686.72 00:24:27.124 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:27.124 Verification LBA range: start 0x0 length 0x400 00:24:27.124 Nvme3n1 : 0.95 270.40 16.90 0.00 0.00 223955.41 19551.57 249910.61 00:24:27.124 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:27.124 Verification LBA range: start 0x0 length 0x400 00:24:27.124 Nvme4n1 : 0.95 269.52 16.84 0.00 0.00 219862.61 11960.32 253405.87 00:24:27.124 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:27.124 Verification LBA range: start 0x0 length 0x400 00:24:27.124 Nvme5n1 : 0.93 206.23 12.89 0.00 0.00 280548.12 19005.44 251658.24 00:24:27.124 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:27.124 Verification LBA range: start 0x0 length 0x400 00:24:27.124 Nvme6n1 : 0.95 268.10 16.76 0.00 0.00 211626.24 22391.47 248162.99 00:24:27.124 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 
00:24:27.124 Verification LBA range: start 0x0 length 0x400 00:24:27.124 Nvme7n1 : 0.96 267.50 16.72 0.00 0.00 207312.85 20097.71 246415.36 00:24:27.124 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:27.124 Verification LBA range: start 0x0 length 0x400 00:24:27.124 Nvme8n1 : 0.96 266.00 16.63 0.00 0.00 203848.11 19660.80 249910.61 00:24:27.124 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:27.124 Verification LBA range: start 0x0 length 0x400 00:24:27.124 Nvme9n1 : 0.94 203.33 12.71 0.00 0.00 259297.56 20862.29 277872.64 00:24:27.124 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:27.124 Verification LBA range: start 0x0 length 0x400 00:24:27.124 Nvme10n1 : 0.94 204.40 12.78 0.00 0.00 251262.01 21626.88 251658.24 00:24:27.124 =================================================================================================================== 00:24:27.124 Total : 2388.59 149.29 0.00 0.00 239171.26 6171.31 277872.64 00:24:27.385 16:32:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@113 -- # sleep 1 00:24:28.327 16:32:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # kill -0 3186299 00:24:28.327 16:32:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@116 -- # stoptarget 00:24:28.327 16:32:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:24:28.327 16:32:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:24:28.327 16:32:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:24:28.327 16:32:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@45 -- # nvmftestfini 00:24:28.327 16:32:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@488 -- # nvmfcleanup 00:24:28.327 16:32:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # sync 00:24:28.327 16:32:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:28.327 16:32:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@120 -- # set +e 00:24:28.327 16:32:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:28.327 16:32:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:28.327 rmmod nvme_tcp 00:24:28.327 rmmod nvme_fabrics 00:24:28.327 rmmod nvme_keyring 00:24:28.327 16:32:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:28.327 16:32:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set -e 00:24:28.327 16:32:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # return 0 00:24:28.327 16:32:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@489 -- # '[' -n 3186299 ']' 00:24:28.327 16:32:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@490 -- # killprocess 3186299 00:24:28.327 16:32:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@949 -- # '[' -z 3186299 ']' 00:24:28.327 16:32:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # kill -0 3186299 00:24:28.327 16:32:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # uname 00:24:28.327 16:32:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:24:28.327 16:32:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3186299 00:24:28.588 16:32:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:24:28.588 16:32:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:24:28.588 16:32:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3186299' 00:24:28.588 killing process with pid 3186299 00:24:28.588 16:32:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@968 -- # kill 3186299 00:24:28.588 16:32:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # wait 3186299 00:24:28.849 16:32:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:28.849 16:32:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:28.849 16:32:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:28.849 16:32:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:28.849 16:32:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:28.849 16:32:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:28.849 16:32:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:28.849 16:32:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:30.762 16:32:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:30.762 00:24:30.762 real 0m7.859s 00:24:30.762 user 0m23.579s 00:24:30.762 sys 0m1.220s 00:24:30.762 16:32:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1125 -- # xtrace_disable 00:24:30.762 16:32:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:30.762 ************************************ 00:24:30.762 END TEST nvmf_shutdown_tc2 00:24:30.762 ************************************ 
00:24:30.762 16:32:57 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@149 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:24:30.762 16:32:57 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:24:30.762 16:32:57 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1106 -- # xtrace_disable 00:24:30.762 16:32:57 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:24:30.762 ************************************ 00:24:30.762 START TEST nvmf_shutdown_tc3 00:24:30.762 ************************************ 00:24:30.762 16:32:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1124 -- # nvmf_shutdown_tc3 00:24:30.762 16:32:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@121 -- # starttarget 00:24:30.762 16:32:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@15 -- # nvmftestinit 00:24:30.762 16:32:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:30.762 16:32:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:30.762 16:32:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:30.762 16:32:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:30.762 16:32:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:30.762 16:32:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:30.762 16:32:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:30.762 16:32:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:31.024 16:32:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:31.024 16:32:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:31.024 16:32:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@285 -- # xtrace_disable 00:24:31.024 16:32:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:31.024 16:32:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:31.024 16:32:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # pci_devs=() 00:24:31.024 16:32:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:31.024 16:32:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:31.024 16:32:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:31.024 16:32:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:31.024 16:32:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:31.024 16:32:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # net_devs=() 00:24:31.024 16:32:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:31.024 16:32:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # e810=() 00:24:31.024 16:32:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # local -ga e810 00:24:31.024 16:32:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # x722=() 00:24:31.024 16:32:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # local -ga x722 00:24:31.024 16:32:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # mlx=() 00:24:31.024 16:32:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # local -ga mlx 00:24:31.024 16:32:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 
00:24:31.024 16:32:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:31.024 16:32:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:31.024 16:32:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:31.024 16:32:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:31.024 16:32:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:31.024 16:32:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:31.024 16:32:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:31.024 16:32:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:31.024 16:32:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:31.024 16:32:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:31.024 16:32:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:31.024 16:32:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:31.024 16:32:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:31.024 16:32:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:31.024 16:32:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:31.024 16:32:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 
00:24:31.024 16:32:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:31.024 16:32:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:24:31.024 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:24:31.024 16:32:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:31.024 16:32:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:31.024 16:32:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:31.024 16:32:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:31.024 16:32:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:31.024 16:32:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:31.024 16:32:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:24:31.024 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:24:31.024 16:32:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:31.024 16:32:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:31.024 16:32:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:31.024 16:32:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:31.024 16:32:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:31.024 16:32:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:31.024 16:32:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:31.024 16:32:57 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:31.024 16:32:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:31.024 16:32:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:31.024 16:32:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:31.024 16:32:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:31.024 16:32:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:31.024 16:32:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:31.024 16:32:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:31.024 16:32:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:24:31.024 Found net devices under 0000:4b:00.0: cvl_0_0 00:24:31.024 16:32:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:31.024 16:32:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:31.024 16:32:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:31.024 16:32:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:31.024 16:32:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:31.024 16:32:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:31.024 16:32:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:31.024 16:32:57 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:31.024 16:32:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:24:31.024 Found net devices under 0000:4b:00.1: cvl_0_1 00:24:31.024 16:32:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:31.024 16:32:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:31.024 16:32:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # is_hw=yes 00:24:31.024 16:32:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:31.024 16:32:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:31.024 16:32:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:31.024 16:32:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:31.024 16:32:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:31.024 16:32:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:31.024 16:32:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:31.024 16:32:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:31.024 16:32:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:31.024 16:32:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:31.024 16:32:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:31.024 16:32:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@243 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:31.024 16:32:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:31.024 16:32:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:31.024 16:32:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:31.024 16:32:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:31.024 16:32:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:31.024 16:32:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:31.024 16:32:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:31.024 16:32:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:31.286 16:32:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:31.286 16:32:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:31.286 16:32:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:31.286 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:31.286 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.654 ms 00:24:31.286 00:24:31.286 --- 10.0.0.2 ping statistics --- 00:24:31.286 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:31.286 rtt min/avg/max/mdev = 0.654/0.654/0.654/0.000 ms 00:24:31.286 16:32:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:31.286 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:31.286 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.461 ms 00:24:31.286 00:24:31.286 --- 10.0.0.1 ping statistics --- 00:24:31.286 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:31.286 rtt min/avg/max/mdev = 0.461/0.461/0.461/0.000 ms 00:24:31.286 16:32:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:31.286 16:32:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # return 0 00:24:31.286 16:32:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:31.286 16:32:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:31.286 16:32:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:31.286 16:32:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:31.286 16:32:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:31.286 16:32:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:31.286 16:32:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:31.286 16:32:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:24:31.286 16:32:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:31.286 16:32:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@723 -- # xtrace_disable 00:24:31.286 16:32:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:31.286 16:32:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@481 -- # nvmfpid=3187975 00:24:31.286 16:32:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # waitforlisten 3187975 00:24:31.286 16:32:57 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@830 -- # '[' -z 3187975 ']' 00:24:31.286 16:32:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:24:31.286 16:32:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:31.286 16:32:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # local max_retries=100 00:24:31.286 16:32:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:31.286 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:31.286 16:32:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # xtrace_disable 00:24:31.286 16:32:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:31.286 [2024-06-07 16:32:58.032734] Starting SPDK v24.09-pre git sha1 5a57befde / DPDK 24.03.0 initialization... 00:24:31.286 [2024-06-07 16:32:58.032795] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:31.286 EAL: No free 2048 kB hugepages reported on node 1 00:24:31.286 [2024-06-07 16:32:58.119941] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:31.547 [2024-06-07 16:32:58.181372] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:31.547 [2024-06-07 16:32:58.181410] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:24:31.547 [2024-06-07 16:32:58.181415] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:31.547 [2024-06-07 16:32:58.181420] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:31.547 [2024-06-07 16:32:58.181424] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:31.547 [2024-06-07 16:32:58.181533] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 2 00:24:31.548 [2024-06-07 16:32:58.181660] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 3 00:24:31.548 [2024-06-07 16:32:58.181784] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:24:31.548 [2024-06-07 16:32:58.181786] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 4 00:24:32.120 16:32:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:24:32.120 16:32:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@863 -- # return 0 00:24:32.120 16:32:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:32.120 16:32:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@729 -- # xtrace_disable 00:24:32.120 16:32:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:32.120 16:32:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:32.120 16:32:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:32.120 16:32:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:32.120 16:32:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:32.120 [2024-06-07 16:32:58.851576] tcp.c: 672:nvmf_tcp_create: 
*NOTICE*: *** TCP Transport Init *** 00:24:32.120 16:32:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:32.120 16:32:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:24:32.120 16:32:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:24:32.120 16:32:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@723 -- # xtrace_disable 00:24:32.120 16:32:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:32.120 16:32:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:24:32.120 16:32:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:32.120 16:32:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:24:32.120 16:32:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:32.120 16:32:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:24:32.120 16:32:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:32.120 16:32:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:24:32.120 16:32:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:32.120 16:32:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:24:32.120 16:32:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:32.120 16:32:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:24:32.120 16:32:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for 
i in "${num_subsystems[@]}" 00:24:32.120 16:32:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:24:32.120 16:32:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:32.120 16:32:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:24:32.120 16:32:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:32.120 16:32:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:24:32.120 16:32:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:32.120 16:32:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:24:32.120 16:32:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:32.120 16:32:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:24:32.120 16:32:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@35 -- # rpc_cmd 00:24:32.120 16:32:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:32.120 16:32:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:32.120 Malloc1 00:24:32.120 [2024-06-07 16:32:58.950338] tcp.c: 982:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:32.120 Malloc2 00:24:32.381 Malloc3 00:24:32.381 Malloc4 00:24:32.381 Malloc5 00:24:32.381 Malloc6 00:24:32.381 Malloc7 00:24:32.381 Malloc8 00:24:32.643 Malloc9 00:24:32.643 Malloc10 00:24:32.643 16:32:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:32.643 16:32:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:24:32.643 16:32:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- 
common/autotest_common.sh@729 -- # xtrace_disable 00:24:32.643 16:32:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:32.643 16:32:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # perfpid=3188353 00:24:32.643 16:32:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # waitforlisten 3188353 /var/tmp/bdevperf.sock 00:24:32.643 16:32:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@830 -- # '[' -z 3188353 ']' 00:24:32.643 16:32:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:32.643 16:32:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # local max_retries=100 00:24:32.643 16:32:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:32.643 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:24:32.643 16:32:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:24:32.643 16:32:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # xtrace_disable 00:24:32.643 16:32:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:24:32.643 16:32:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:32.643 16:32:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # config=() 00:24:32.643 16:32:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # local subsystem config 00:24:32.643 16:32:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:32.643 16:32:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:32.643 { 00:24:32.643 "params": { 00:24:32.643 "name": "Nvme$subsystem", 00:24:32.643 "trtype": "$TEST_TRANSPORT", 00:24:32.643 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:32.643 "adrfam": "ipv4", 00:24:32.643 "trsvcid": "$NVMF_PORT", 00:24:32.643 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:32.643 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:32.643 "hdgst": ${hdgst:-false}, 00:24:32.643 "ddgst": ${ddgst:-false} 00:24:32.643 }, 00:24:32.643 "method": "bdev_nvme_attach_controller" 00:24:32.643 } 00:24:32.643 EOF 00:24:32.643 )") 00:24:32.643 16:32:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:24:32.643 16:32:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:32.643 16:32:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:32.643 { 00:24:32.643 "params": { 00:24:32.643 "name": 
"Nvme$subsystem", 00:24:32.643 "trtype": "$TEST_TRANSPORT", 00:24:32.643 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:32.643 "adrfam": "ipv4", 00:24:32.643 "trsvcid": "$NVMF_PORT", 00:24:32.643 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:32.643 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:32.643 "hdgst": ${hdgst:-false}, 00:24:32.643 "ddgst": ${ddgst:-false} 00:24:32.643 }, 00:24:32.643 "method": "bdev_nvme_attach_controller" 00:24:32.643 } 00:24:32.643 EOF 00:24:32.643 )") 00:24:32.643 16:32:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:24:32.643 16:32:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:32.643 16:32:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:32.643 { 00:24:32.643 "params": { 00:24:32.643 "name": "Nvme$subsystem", 00:24:32.643 "trtype": "$TEST_TRANSPORT", 00:24:32.643 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:32.643 "adrfam": "ipv4", 00:24:32.643 "trsvcid": "$NVMF_PORT", 00:24:32.643 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:32.643 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:32.643 "hdgst": ${hdgst:-false}, 00:24:32.643 "ddgst": ${ddgst:-false} 00:24:32.643 }, 00:24:32.643 "method": "bdev_nvme_attach_controller" 00:24:32.643 } 00:24:32.643 EOF 00:24:32.643 )") 00:24:32.643 16:32:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:24:32.643 16:32:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:32.643 16:32:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:32.643 { 00:24:32.643 "params": { 00:24:32.643 "name": "Nvme$subsystem", 00:24:32.643 "trtype": "$TEST_TRANSPORT", 00:24:32.643 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:32.643 "adrfam": "ipv4", 00:24:32.643 "trsvcid": "$NVMF_PORT", 00:24:32.643 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:24:32.643 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:32.643 "hdgst": ${hdgst:-false}, 00:24:32.643 "ddgst": ${ddgst:-false} 00:24:32.643 }, 00:24:32.643 "method": "bdev_nvme_attach_controller" 00:24:32.643 } 00:24:32.643 EOF 00:24:32.643 )") 00:24:32.644 16:32:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:24:32.644 16:32:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:32.644 16:32:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:32.644 { 00:24:32.644 "params": { 00:24:32.644 "name": "Nvme$subsystem", 00:24:32.644 "trtype": "$TEST_TRANSPORT", 00:24:32.644 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:32.644 "adrfam": "ipv4", 00:24:32.644 "trsvcid": "$NVMF_PORT", 00:24:32.644 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:32.644 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:32.644 "hdgst": ${hdgst:-false}, 00:24:32.644 "ddgst": ${ddgst:-false} 00:24:32.644 }, 00:24:32.644 "method": "bdev_nvme_attach_controller" 00:24:32.644 } 00:24:32.644 EOF 00:24:32.644 )") 00:24:32.644 16:32:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:24:32.644 16:32:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:32.644 16:32:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:32.644 { 00:24:32.644 "params": { 00:24:32.644 "name": "Nvme$subsystem", 00:24:32.644 "trtype": "$TEST_TRANSPORT", 00:24:32.644 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:32.644 "adrfam": "ipv4", 00:24:32.644 "trsvcid": "$NVMF_PORT", 00:24:32.644 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:32.644 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:32.644 "hdgst": ${hdgst:-false}, 00:24:32.644 "ddgst": ${ddgst:-false} 00:24:32.644 }, 00:24:32.644 "method": 
"bdev_nvme_attach_controller" 00:24:32.644 } 00:24:32.644 EOF 00:24:32.644 )") 00:24:32.644 16:32:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:24:32.644 [2024-06-07 16:32:59.393003] Starting SPDK v24.09-pre git sha1 5a57befde / DPDK 24.03.0 initialization... 00:24:32.644 [2024-06-07 16:32:59.393056] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3188353 ] 00:24:32.644 16:32:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:32.644 16:32:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:32.644 { 00:24:32.644 "params": { 00:24:32.644 "name": "Nvme$subsystem", 00:24:32.644 "trtype": "$TEST_TRANSPORT", 00:24:32.644 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:32.644 "adrfam": "ipv4", 00:24:32.644 "trsvcid": "$NVMF_PORT", 00:24:32.644 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:32.644 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:32.644 "hdgst": ${hdgst:-false}, 00:24:32.644 "ddgst": ${ddgst:-false} 00:24:32.644 }, 00:24:32.644 "method": "bdev_nvme_attach_controller" 00:24:32.644 } 00:24:32.644 EOF 00:24:32.644 )") 00:24:32.644 16:32:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:24:32.644 16:32:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:32.644 16:32:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:32.644 { 00:24:32.644 "params": { 00:24:32.644 "name": "Nvme$subsystem", 00:24:32.644 "trtype": "$TEST_TRANSPORT", 00:24:32.644 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:32.644 "adrfam": "ipv4", 00:24:32.644 "trsvcid": "$NVMF_PORT", 00:24:32.644 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:32.644 
"hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:32.644 "hdgst": ${hdgst:-false}, 00:24:32.644 "ddgst": ${ddgst:-false} 00:24:32.644 }, 00:24:32.644 "method": "bdev_nvme_attach_controller" 00:24:32.644 } 00:24:32.644 EOF 00:24:32.644 )") 00:24:32.644 16:32:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:24:32.644 16:32:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:32.644 16:32:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:32.644 { 00:24:32.644 "params": { 00:24:32.644 "name": "Nvme$subsystem", 00:24:32.644 "trtype": "$TEST_TRANSPORT", 00:24:32.644 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:32.644 "adrfam": "ipv4", 00:24:32.644 "trsvcid": "$NVMF_PORT", 00:24:32.644 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:32.644 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:32.644 "hdgst": ${hdgst:-false}, 00:24:32.644 "ddgst": ${ddgst:-false} 00:24:32.644 }, 00:24:32.644 "method": "bdev_nvme_attach_controller" 00:24:32.644 } 00:24:32.644 EOF 00:24:32.644 )") 00:24:32.644 16:32:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:24:32.644 16:32:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:32.644 16:32:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:32.644 { 00:24:32.644 "params": { 00:24:32.644 "name": "Nvme$subsystem", 00:24:32.644 "trtype": "$TEST_TRANSPORT", 00:24:32.644 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:32.644 "adrfam": "ipv4", 00:24:32.644 "trsvcid": "$NVMF_PORT", 00:24:32.644 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:32.644 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:32.644 "hdgst": ${hdgst:-false}, 00:24:32.644 "ddgst": ${ddgst:-false} 00:24:32.644 }, 00:24:32.644 "method": "bdev_nvme_attach_controller" 00:24:32.644 } 00:24:32.644 EOF 
00:24:32.644 )") 00:24:32.644 EAL: No free 2048 kB hugepages reported on node 1 00:24:32.644 16:32:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:24:32.644 16:32:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@556 -- # jq . 00:24:32.644 16:32:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@557 -- # IFS=, 00:24:32.644 16:32:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:24:32.644 "params": { 00:24:32.644 "name": "Nvme1", 00:24:32.644 "trtype": "tcp", 00:24:32.644 "traddr": "10.0.0.2", 00:24:32.644 "adrfam": "ipv4", 00:24:32.644 "trsvcid": "4420", 00:24:32.644 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:32.644 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:32.644 "hdgst": false, 00:24:32.644 "ddgst": false 00:24:32.644 }, 00:24:32.644 "method": "bdev_nvme_attach_controller" 00:24:32.644 },{ 00:24:32.644 "params": { 00:24:32.644 "name": "Nvme2", 00:24:32.644 "trtype": "tcp", 00:24:32.644 "traddr": "10.0.0.2", 00:24:32.644 "adrfam": "ipv4", 00:24:32.644 "trsvcid": "4420", 00:24:32.644 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:24:32.644 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:24:32.644 "hdgst": false, 00:24:32.644 "ddgst": false 00:24:32.644 }, 00:24:32.644 "method": "bdev_nvme_attach_controller" 00:24:32.644 },{ 00:24:32.644 "params": { 00:24:32.644 "name": "Nvme3", 00:24:32.644 "trtype": "tcp", 00:24:32.644 "traddr": "10.0.0.2", 00:24:32.644 "adrfam": "ipv4", 00:24:32.644 "trsvcid": "4420", 00:24:32.644 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:24:32.644 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:24:32.644 "hdgst": false, 00:24:32.644 "ddgst": false 00:24:32.644 }, 00:24:32.645 "method": "bdev_nvme_attach_controller" 00:24:32.645 },{ 00:24:32.645 "params": { 00:24:32.645 "name": "Nvme4", 00:24:32.645 "trtype": "tcp", 00:24:32.645 "traddr": "10.0.0.2", 00:24:32.645 "adrfam": "ipv4", 00:24:32.645 "trsvcid": "4420", 00:24:32.645 "subnqn": 
"nqn.2016-06.io.spdk:cnode4", 00:24:32.645 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:24:32.645 "hdgst": false, 00:24:32.645 "ddgst": false 00:24:32.645 }, 00:24:32.645 "method": "bdev_nvme_attach_controller" 00:24:32.645 },{ 00:24:32.645 "params": { 00:24:32.645 "name": "Nvme5", 00:24:32.645 "trtype": "tcp", 00:24:32.645 "traddr": "10.0.0.2", 00:24:32.645 "adrfam": "ipv4", 00:24:32.645 "trsvcid": "4420", 00:24:32.645 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:24:32.645 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:24:32.645 "hdgst": false, 00:24:32.645 "ddgst": false 00:24:32.645 }, 00:24:32.645 "method": "bdev_nvme_attach_controller" 00:24:32.645 },{ 00:24:32.645 "params": { 00:24:32.645 "name": "Nvme6", 00:24:32.645 "trtype": "tcp", 00:24:32.645 "traddr": "10.0.0.2", 00:24:32.645 "adrfam": "ipv4", 00:24:32.645 "trsvcid": "4420", 00:24:32.645 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:24:32.645 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:24:32.645 "hdgst": false, 00:24:32.645 "ddgst": false 00:24:32.645 }, 00:24:32.645 "method": "bdev_nvme_attach_controller" 00:24:32.645 },{ 00:24:32.645 "params": { 00:24:32.645 "name": "Nvme7", 00:24:32.645 "trtype": "tcp", 00:24:32.645 "traddr": "10.0.0.2", 00:24:32.645 "adrfam": "ipv4", 00:24:32.645 "trsvcid": "4420", 00:24:32.645 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:24:32.645 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:24:32.645 "hdgst": false, 00:24:32.645 "ddgst": false 00:24:32.645 }, 00:24:32.645 "method": "bdev_nvme_attach_controller" 00:24:32.645 },{ 00:24:32.645 "params": { 00:24:32.645 "name": "Nvme8", 00:24:32.645 "trtype": "tcp", 00:24:32.645 "traddr": "10.0.0.2", 00:24:32.645 "adrfam": "ipv4", 00:24:32.645 "trsvcid": "4420", 00:24:32.645 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:24:32.645 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:24:32.645 "hdgst": false, 00:24:32.645 "ddgst": false 00:24:32.645 }, 00:24:32.645 "method": "bdev_nvme_attach_controller" 00:24:32.645 },{ 00:24:32.645 "params": { 00:24:32.645 "name": 
"Nvme9", 00:24:32.645 "trtype": "tcp", 00:24:32.645 "traddr": "10.0.0.2", 00:24:32.645 "adrfam": "ipv4", 00:24:32.645 "trsvcid": "4420", 00:24:32.645 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:24:32.645 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:24:32.645 "hdgst": false, 00:24:32.645 "ddgst": false 00:24:32.645 }, 00:24:32.645 "method": "bdev_nvme_attach_controller" 00:24:32.645 },{ 00:24:32.645 "params": { 00:24:32.645 "name": "Nvme10", 00:24:32.645 "trtype": "tcp", 00:24:32.645 "traddr": "10.0.0.2", 00:24:32.645 "adrfam": "ipv4", 00:24:32.645 "trsvcid": "4420", 00:24:32.645 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:24:32.645 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:24:32.645 "hdgst": false, 00:24:32.645 "ddgst": false 00:24:32.645 }, 00:24:32.645 "method": "bdev_nvme_attach_controller" 00:24:32.645 }' 00:24:32.645 [2024-06-07 16:32:59.452725] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:32.906 [2024-06-07 16:32:59.517796] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:24:34.290 Running I/O for 10 seconds... 
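The xtrace output above shows `gen_nvmf_target_json` building one JSON fragment per subsystem with a here-doc, appending each to a `config` array, and then joining the fragments with `IFS=,` before handing the result to bdevperf via `--json /dev/fd/63`. A minimal stand-alone sketch of that pattern follows; the transport, address, and port values are placeholders taken from the log, the `subsystems` wrapper is illustrative, and `jq` (which the real script uses at `nvmf/common.sh@556`) is only mentioned in comments so the sketch has no external dependencies.

```shell
#!/usr/bin/env bash
# Sketch of the config-array pattern from gen_nvmf_target_json:
# one JSON object per subsystem, produced by a here-doc with the
# subsystem number expanded into every NQN and controller name.
TEST_TRANSPORT=tcp
NVMF_FIRST_TARGET_IP=10.0.0.2   # placeholder, matches the log
NVMF_PORT=4420                  # placeholder, matches the log

config=()
for subsystem in 1 2 3; do
    config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
    )")
done

# Join the fragments with commas, as the IFS=, + printf '%s\n' step in
# the log does; the real script pipes the result through `jq .` to
# validate it. Wrapping in an array here is illustrative only.
final=$(IFS=,; printf '{"subsystems":[%s]}' "${config[*]}")
echo "$final"
```

With three iterations the array holds three objects and the joined string contains `"name": "Nvme1"` through `"Nvme3"`, which is the same shape the bdevperf process at pid 3188353 received for cnode1 through cnode10.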
00:24:34.290 16:33:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:24:34.290 16:33:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@863 -- # return 0 00:24:34.290 16:33:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:24:34.290 16:33:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:34.290 16:33:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:34.551 16:33:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:34.551 16:33:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@130 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:34.551 16:33:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@132 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:24:34.551 16:33:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:24:34.551 16:33:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:24:34.551 16:33:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@57 -- # local ret=1 00:24:34.551 16:33:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local i 00:24:34.551 16:33:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:24:34.551 16:33:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:24:34.551 16:33:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:24:34.551 16:33:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:24:34.551 
16:33:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:34.551 16:33:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:34.551 16:33:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:34.551 16:33:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=3 00:24:34.551 16:33:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:24:34.551 16:33:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:24:34.812 16:33:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:24:34.812 16:33:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:24:34.812 16:33:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:24:34.812 16:33:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:24:34.812 16:33:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:34.812 16:33:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:34.812 16:33:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:34.812 16:33:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=67 00:24:34.812 16:33:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:24:34.812 16:33:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:24:35.085 16:33:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:24:35.085 16:33:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i 
!= 0 )) 00:24:35.085 16:33:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:24:35.085 16:33:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:24:35.085 16:33:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:35.085 16:33:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:35.085 16:33:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:35.085 16:33:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=195 00:24:35.085 16:33:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 195 -ge 100 ']' 00:24:35.085 16:33:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # ret=0 00:24:35.085 16:33:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # break 00:24:35.085 16:33:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@69 -- # return 0 00:24:35.085 16:33:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@135 -- # killprocess 3187975 00:24:35.085 16:33:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@949 -- # '[' -z 3187975 ']' 00:24:35.085 16:33:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@953 -- # kill -0 3187975 00:24:35.085 16:33:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # uname 00:24:35.085 16:33:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:24:35.085 16:33:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3187975 00:24:35.085 16:33:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:24:35.085 
16:33:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:24:35.085 16:33:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3187975' 00:24:35.085 killing process with pid 3187975 00:24:35.085 16:33:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@968 -- # kill 3187975 00:24:35.085 16:33:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@973 -- # wait 3187975 00:24:35.085 [2024-06-07 16:33:01.878813] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x932020 is same with the state(5) to be set 00:24:35.085 [2024-06-07 16:33:01.878861] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x932020 is same with the state(5) to be set 00:24:35.085 [2024-06-07 16:33:01.878868] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x932020 is same with the state(5) to be set 00:24:35.085 [2024-06-07 16:33:01.878872] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x932020 is same with the state(5) to be set 00:24:35.085 [2024-06-07 16:33:01.878877] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x932020 is same with the state(5) to be set 00:24:35.085 [2024-06-07 16:33:01.878882] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x932020 is same with the state(5) to be set 00:24:35.085 [2024-06-07 16:33:01.878887] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x932020 is same with the state(5) to be set 00:24:35.085 [2024-06-07 16:33:01.878891] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x932020 is same with the state(5) to be set 00:24:35.085 [2024-06-07 16:33:01.878896] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x932020 is same with the state(5) to be set 
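Before the target was killed, the log shows `waitforio` polling `bdev_get_iostat` for Nvme1n1 (`read_io_count` going 3, then 67, then 195) until the count crossed 100. The control flow can be sketched as below; the RPC call is replaced with a hypothetical stub that mimics the log's progression, since a real `/var/tmp/bdevperf.sock` is not available stand-alone.

```shell
#!/usr/bin/env bash
# Sketch of the waitforio loop from target/shutdown.sh: poll up to 10
# times, 0.25 s apart, until the bdev reports >= 100 completed reads.
# read_io_count stands in for:
#   rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 \
#     | jq -r '.bdevs[0].num_read_ops'
read_io_count() {
    # Fake counter: grows by 67 per attempt, loosely mimicking the
    # 3 -> 67 -> 195 progression visible in the log above.
    echo $(( $1 * 67 ))
}

waitforio() {
    local ret=1 i count
    for ((i = 10; i != 0; i--)); do
        count=$(read_io_count $((11 - i)))
        if [ "$count" -ge 100 ]; then
            ret=0
            break
        fi
        sleep 0.25
    done
    return $ret
}

waitforio && echo "I/O threshold reached"
```

Only once this returns 0 does the test proceed to `killprocess 3187975`, which is what triggers the flood of `nvmf_tcp_qpair_set_recv_state` notices as the target's qpairs tear down.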
00:24:35.086 [2024-06-07 16:33:01.880404] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x934a20 is same with the state(5) to be set 00:24:35.086 [2024-06-07 16:33:01.880582] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x934a20
is same with the state(5) to be set 00:24:35.086 [2024-06-07 16:33:01.880586] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x934a20 is same with the state(5) to be set 00:24:35.086 [2024-06-07 16:33:01.880590] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x934a20 is same with the state(5) to be set 00:24:35.086 [2024-06-07 16:33:01.880595] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x934a20 is same with the state(5) to be set 00:24:35.086 [2024-06-07 16:33:01.880599] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x934a20 is same with the state(5) to be set 00:24:35.086 [2024-06-07 16:33:01.880604] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x934a20 is same with the state(5) to be set 00:24:35.086 [2024-06-07 16:33:01.880608] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x934a20 is same with the state(5) to be set 00:24:35.086 [2024-06-07 16:33:01.880613] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x934a20 is same with the state(5) to be set 00:24:35.086 [2024-06-07 16:33:01.880617] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x934a20 is same with the state(5) to be set 00:24:35.086 [2024-06-07 16:33:01.880623] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x934a20 is same with the state(5) to be set 00:24:35.086 [2024-06-07 16:33:01.880627] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x934a20 is same with the state(5) to be set 00:24:35.086 [2024-06-07 16:33:01.880632] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x934a20 is same with the state(5) to be set 00:24:35.086 [2024-06-07 16:33:01.880637] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x934a20 is same with the state(5) to be set 
00:24:35.086 [2024-06-07 16:33:01.880642] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x934a20 is same with the state(5) to be set 00:24:35.086 [2024-06-07 16:33:01.880646] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x934a20 is same with the state(5) to be set 00:24:35.086 [2024-06-07 16:33:01.880651] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x934a20 is same with the state(5) to be set 00:24:35.086 [2024-06-07 16:33:01.880655] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x934a20 is same with the state(5) to be set 00:24:35.086 [2024-06-07 16:33:01.880660] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x934a20 is same with the state(5) to be set 00:24:35.086 [2024-06-07 16:33:01.880664] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x934a20 is same with the state(5) to be set 00:24:35.086 [2024-06-07 16:33:01.880669] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x934a20 is same with the state(5) to be set 00:24:35.086 [2024-06-07 16:33:01.880674] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x934a20 is same with the state(5) to be set 00:24:35.086 [2024-06-07 16:33:01.880678] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x934a20 is same with the state(5) to be set 00:24:35.086 [2024-06-07 16:33:01.880682] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x934a20 is same with the state(5) to be set 00:24:35.086 [2024-06-07 16:33:01.880687] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x934a20 is same with the state(5) to be set 00:24:35.086 [2024-06-07 16:33:01.880692] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x934a20 is same with the state(5) to be set 00:24:35.086 [2024-06-07 16:33:01.880696] 
tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x934a20 is same with the state(5) to be set 00:24:35.086 [2024-06-07 16:33:01.880701] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x934a20 is same with the state(5) to be set 00:24:35.087 [2024-06-07 16:33:01.880705] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x934a20 is same with the state(5) to be set 00:24:35.087 [2024-06-07 16:33:01.880710] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x934a20 is same with the state(5) to be set 00:24:35.087 [2024-06-07 16:33:01.880714] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x934a20 is same with the state(5) to be set 00:24:35.087 [2024-06-07 16:33:01.880718] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x934a20 is same with the state(5) to be set 00:24:35.087 [2024-06-07 16:33:01.880723] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x934a20 is same with the state(5) to be set 00:24:35.087 [2024-06-07 16:33:01.882587] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x932960 is same with the state(5) to be set 00:24:35.087 [2024-06-07 16:33:01.882609] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x932960 is same with the state(5) to be set 00:24:35.087 [2024-06-07 16:33:01.882614] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x932960 is same with the state(5) to be set 00:24:35.087 [2024-06-07 16:33:01.882619] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x932960 is same with the state(5) to be set 00:24:35.087 [2024-06-07 16:33:01.882627] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x932960 is same with the state(5) to be set 00:24:35.087 [2024-06-07 16:33:01.882632] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x932960 is same with the state(5) to be set 00:24:35.087 [2024-06-07 16:33:01.882636] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x932960 is same with the state(5) to be set 00:24:35.087 [2024-06-07 16:33:01.882642] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x932960 is same with the state(5) to be set 00:24:35.087 [2024-06-07 16:33:01.882646] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x932960 is same with the state(5) to be set 00:24:35.087 [2024-06-07 16:33:01.882651] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x932960 is same with the state(5) to be set 00:24:35.087 [2024-06-07 16:33:01.882656] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x932960 is same with the state(5) to be set 00:24:35.087 [2024-06-07 16:33:01.882660] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x932960 is same with the state(5) to be set 00:24:35.087 [2024-06-07 16:33:01.882665] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x932960 is same with the state(5) to be set 00:24:35.087 [2024-06-07 16:33:01.882669] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x932960 is same with the state(5) to be set 00:24:35.087 [2024-06-07 16:33:01.882673] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x932960 is same with the state(5) to be set 00:24:35.087 [2024-06-07 16:33:01.882677] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x932960 is same with the state(5) to be set 00:24:35.087 [2024-06-07 16:33:01.882682] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x932960 is same with the state(5) to be set 00:24:35.087 [2024-06-07 16:33:01.882687] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x932960 
is same with the state(5) to be set 00:24:35.087 [2024-06-07 16:33:01.882691] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x932960 is same with the state(5) to be set 00:24:35.087 [2024-06-07 16:33:01.882695] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x932960 is same with the state(5) to be set 00:24:35.087 [2024-06-07 16:33:01.882700] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x932960 is same with the state(5) to be set 00:24:35.087 [2024-06-07 16:33:01.882704] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x932960 is same with the state(5) to be set 00:24:35.087 [2024-06-07 16:33:01.882708] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x932960 is same with the state(5) to be set 00:24:35.087 [2024-06-07 16:33:01.882713] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x932960 is same with the state(5) to be set 00:24:35.087 [2024-06-07 16:33:01.882717] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x932960 is same with the state(5) to be set 00:24:35.087 [2024-06-07 16:33:01.882722] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x932960 is same with the state(5) to be set 00:24:35.087 [2024-06-07 16:33:01.882726] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x932960 is same with the state(5) to be set 00:24:35.087 [2024-06-07 16:33:01.882731] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x932960 is same with the state(5) to be set 00:24:35.087 [2024-06-07 16:33:01.882735] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x932960 is same with the state(5) to be set 00:24:35.087 [2024-06-07 16:33:01.882739] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x932960 is same with the state(5) to be set 
00:24:35.087 [2024-06-07 16:33:01.882744] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x932960 is same with the state(5) to be set 00:24:35.087 [2024-06-07 16:33:01.882748] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x932960 is same with the state(5) to be set 00:24:35.087 [2024-06-07 16:33:01.882797] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x932960 is same with the state(5) to be set 00:24:35.087 [2024-06-07 16:33:01.882802] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x932960 is same with the state(5) to be set 00:24:35.087 [2024-06-07 16:33:01.882807] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x932960 is same with the state(5) to be set 00:24:35.087 [2024-06-07 16:33:01.882811] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x932960 is same with the state(5) to be set 00:24:35.087 [2024-06-07 16:33:01.882815] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x932960 is same with the state(5) to be set 00:24:35.087 [2024-06-07 16:33:01.882820] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x932960 is same with the state(5) to be set 00:24:35.087 [2024-06-07 16:33:01.882824] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x932960 is same with the state(5) to be set 00:24:35.087 [2024-06-07 16:33:01.882829] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x932960 is same with the state(5) to be set 00:24:35.087 [2024-06-07 16:33:01.882833] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x932960 is same with the state(5) to be set 00:24:35.087 [2024-06-07 16:33:01.882837] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x932960 is same with the state(5) to be set 00:24:35.087 [2024-06-07 16:33:01.882842] 
tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x932960 is same with the state(5) to be set 00:24:35.087 [2024-06-07 16:33:01.882846] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x932960 is same with the state(5) to be set 00:24:35.087 [2024-06-07 16:33:01.882851] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x932960 is same with the state(5) to be set 00:24:35.087 [2024-06-07 16:33:01.882856] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x932960 is same with the state(5) to be set 00:24:35.087 [2024-06-07 16:33:01.882860] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x932960 is same with the state(5) to be set 00:24:35.087 [2024-06-07 16:33:01.882864] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x932960 is same with the state(5) to be set 00:24:35.087 [2024-06-07 16:33:01.882869] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x932960 is same with the state(5) to be set 00:24:35.087 [2024-06-07 16:33:01.882874] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x932960 is same with the state(5) to be set 00:24:35.087 [2024-06-07 16:33:01.882878] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x932960 is same with the state(5) to be set 00:24:35.087 [2024-06-07 16:33:01.882882] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x932960 is same with the state(5) to be set 00:24:35.087 [2024-06-07 16:33:01.882887] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x932960 is same with the state(5) to be set 00:24:35.087 [2024-06-07 16:33:01.882892] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x932960 is same with the state(5) to be set 00:24:35.087 [2024-06-07 16:33:01.882896] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x932960 is same with the state(5) to be set 00:24:35.087 [2024-06-07 16:33:01.882901] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x932960 is same with the state(5) to be set 00:24:35.087 [2024-06-07 16:33:01.882906] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x932960 is same with the state(5) to be set 00:24:35.087 [2024-06-07 16:33:01.882910] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x932960 is same with the state(5) to be set 00:24:35.087 [2024-06-07 16:33:01.882915] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x932960 is same with the state(5) to be set 00:24:35.087 [2024-06-07 16:33:01.882920] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x932960 is same with the state(5) to be set 00:24:35.087 [2024-06-07 16:33:01.882924] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x932960 is same with the state(5) to be set 00:24:35.087 [2024-06-07 16:33:01.882929] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x932960 is same with the state(5) to be set 00:24:35.087 [2024-06-07 16:33:01.882933] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x932960 is same with the state(5) to be set 00:24:35.087 [2024-06-07 16:33:01.882938] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x932960 is same with the state(5) to be set 00:24:35.087 [2024-06-07 16:33:01.883773] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x932e20 is same with the state(5) to be set 00:24:35.087 [2024-06-07 16:33:01.883795] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x932e20 is same with the state(5) to be set 00:24:35.087 [2024-06-07 16:33:01.883801] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x932e20 
is same with the state(5) to be set 00:24:35.087 [2024-06-07 16:33:01.883805] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x932e20 is same with the state(5) to be set 00:24:35.087 [2024-06-07 16:33:01.883810] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x932e20 is same with the state(5) to be set 00:24:35.087 [2024-06-07 16:33:01.883814] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x932e20 is same with the state(5) to be set 00:24:35.087 [2024-06-07 16:33:01.883819] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x932e20 is same with the state(5) to be set 00:24:35.087 [2024-06-07 16:33:01.883823] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x932e20 is same with the state(5) to be set 00:24:35.087 [2024-06-07 16:33:01.883828] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x932e20 is same with the state(5) to be set 00:24:35.087 [2024-06-07 16:33:01.883833] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x932e20 is same with the state(5) to be set 00:24:35.087 [2024-06-07 16:33:01.883837] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x932e20 is same with the state(5) to be set 00:24:35.087 [2024-06-07 16:33:01.883842] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x932e20 is same with the state(5) to be set 00:24:35.087 [2024-06-07 16:33:01.883846] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x932e20 is same with the state(5) to be set 00:24:35.087 [2024-06-07 16:33:01.883851] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x932e20 is same with the state(5) to be set 00:24:35.087 [2024-06-07 16:33:01.883855] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x932e20 is same with the state(5) to be set 
00:24:35.087 [2024-06-07 16:33:01.883860] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x932e20 is same with the state(5) to be set 00:24:35.088 [2024-06-07 16:33:01.883864] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x932e20 is same with the state(5) to be set 00:24:35.088 [2024-06-07 16:33:01.883869] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x932e20 is same with the state(5) to be set 00:24:35.088 [2024-06-07 16:33:01.883873] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x932e20 is same with the state(5) to be set 00:24:35.088 [2024-06-07 16:33:01.883878] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x932e20 is same with the state(5) to be set 00:24:35.088 [2024-06-07 16:33:01.883882] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x932e20 is same with the state(5) to be set 00:24:35.088 [2024-06-07 16:33:01.883887] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x932e20 is same with the state(5) to be set 00:24:35.088 [2024-06-07 16:33:01.883983] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x932e20 is same with the state(5) to be set 00:24:35.088 [2024-06-07 16:33:01.883988] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x932e20 is same with the state(5) to be set 00:24:35.088 [2024-06-07 16:33:01.883992] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x932e20 is same with the state(5) to be set 00:24:35.088 [2024-06-07 16:33:01.883997] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x932e20 is same with the state(5) to be set 00:24:35.088 [2024-06-07 16:33:01.884002] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x932e20 is same with the state(5) to be set 00:24:35.088 [2024-06-07 16:33:01.884006] 
tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x932e20 is same with the state(5) to be set 00:24:35.088 [2024-06-07 16:33:01.884010] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x932e20 is same with the state(5) to be set 00:24:35.088 [2024-06-07 16:33:01.884015] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x932e20 is same with the state(5) to be set 00:24:35.088 [2024-06-07 16:33:01.884019] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x932e20 is same with the state(5) to be set 00:24:35.088 [2024-06-07 16:33:01.884024] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x932e20 is same with the state(5) to be set 00:24:35.088 [2024-06-07 16:33:01.884028] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x932e20 is same with the state(5) to be set 00:24:35.088 [2024-06-07 16:33:01.884032] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x932e20 is same with the state(5) to be set 00:24:35.088 [2024-06-07 16:33:01.884037] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x932e20 is same with the state(5) to be set 00:24:35.088 [2024-06-07 16:33:01.884041] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x932e20 is same with the state(5) to be set 00:24:35.088 [2024-06-07 16:33:01.884046] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x932e20 is same with the state(5) to be set 00:24:35.088 [2024-06-07 16:33:01.884050] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x932e20 is same with the state(5) to be set 00:24:35.088 [2024-06-07 16:33:01.884055] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x932e20 is same with the state(5) to be set 00:24:35.088 [2024-06-07 16:33:01.884060] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x932e20 is same with the state(5) to be set 00:24:35.088 [2024-06-07 16:33:01.884064] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x932e20 is same with the state(5) to be set 00:24:35.088 [2024-06-07 16:33:01.884068] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x932e20 is same with the state(5) to be set 00:24:35.088 [2024-06-07 16:33:01.884073] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x932e20 is same with the state(5) to be set 00:24:35.088 [2024-06-07 16:33:01.884077] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x932e20 is same with the state(5) to be set 00:24:35.088 [2024-06-07 16:33:01.884082] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x932e20 is same with the state(5) to be set 00:24:35.088 [2024-06-07 16:33:01.884086] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x932e20 is same with the state(5) to be set 00:24:35.088 [2024-06-07 16:33:01.884090] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x932e20 is same with the state(5) to be set 00:24:35.088 [2024-06-07 16:33:01.884095] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x932e20 is same with the state(5) to be set 00:24:35.088 [2024-06-07 16:33:01.884100] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x932e20 is same with the state(5) to be set 00:24:35.088 [2024-06-07 16:33:01.884104] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x932e20 is same with the state(5) to be set 00:24:35.088 [2024-06-07 16:33:01.884110] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x932e20 is same with the state(5) to be set 00:24:35.088 [2024-06-07 16:33:01.884114] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x932e20 
is same with the state(5) to be set 00:24:35.088 [2024-06-07 16:33:01.884119] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x932e20 is same with the state(5) to be set 00:24:35.088 [2024-06-07 16:33:01.884123] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x932e20 is same with the state(5) to be set 00:24:35.088 [2024-06-07 16:33:01.884127] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x932e20 is same with the state(5) to be set 00:24:35.088 [2024-06-07 16:33:01.884132] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x932e20 is same with the state(5) to be set 00:24:35.088 [2024-06-07 16:33:01.884136] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x932e20 is same with the state(5) to be set 00:24:35.088 [2024-06-07 16:33:01.884141] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x932e20 is same with the state(5) to be set 00:24:35.088 [2024-06-07 16:33:01.884146] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x932e20 is same with the state(5) to be set 00:24:35.088 [2024-06-07 16:33:01.884150] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x932e20 is same with the state(5) to be set 00:24:35.088 [2024-06-07 16:33:01.884154] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x932e20 is same with the state(5) to be set 00:24:35.088 [2024-06-07 16:33:01.884158] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x932e20 is same with the state(5) to be set 00:24:35.088 [2024-06-07 16:33:01.884163] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x932e20 is same with the state(5) to be set 00:24:35.088 [2024-06-07 16:33:01.884167] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x932e20 is same with the state(5) to be set 
00:24:35.088 [2024-06-07 16:33:01.884755] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9332c0 is same with the state(5) to be set
00:24:35.088 [2024-06-07 16:33:01.884770 .. 16:33:01.885044] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: (previous message repeated for tqpair=0x9332c0)
00:24:35.089 [2024-06-07 16:33:01.885745] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x933780 is same with the state(5) to be set
00:24:35.089 [2024-06-07 16:33:01.885760 .. 16:33:01.886043] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: (previous message repeated for tqpair=0x933780)
00:24:35.090 [2024-06-07 16:33:01.887267] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9340c0 is same with the state(5) to be set
00:24:35.090 [2024-06-07 16:33:01.887278 .. 16:33:01.887577] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: (previous message repeated for tqpair=0x9340c0)
00:24:35.091 [2024-06-07 16:33:01.888021] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x934560 is same with the state(5) to be set
00:24:35.091 [2024-06-07 16:33:01.888034 .. 16:33:01.888076] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: (previous message repeated for tqpair=0x934560)
00:24:35.091 [2024-06-07 16:33:01.892816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:35.091 [2024-06-07 16:33:01.892852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:35.091 [2024-06-07 16:33:01.892869 .. 16:33:01.893278] nvme_qpair.c: (previous WRITE/ABORTED - SQ DELETION pair repeated for cid:16 through cid:40, lba:26624 through lba:29696 in steps of 128, len:128 each)
00:24:35.091 [2024-06-07 16:33:01.893287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1
cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.091 [2024-06-07 16:33:01.893294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.091 [2024-06-07 16:33:01.893305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.091 [2024-06-07 16:33:01.893313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.091 [2024-06-07 16:33:01.893322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.091 [2024-06-07 16:33:01.893329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.091 [2024-06-07 16:33:01.893338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.091 [2024-06-07 16:33:01.893344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.091 [2024-06-07 16:33:01.893354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.091 [2024-06-07 16:33:01.893360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.091 [2024-06-07 16:33:01.893370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.091 [2024-06-07 16:33:01.893377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:24:35.091 [2024-06-07 16:33:01.893386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.092 [2024-06-07 16:33:01.893393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.092 [2024-06-07 16:33:01.893408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.092 [2024-06-07 16:33:01.893416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.092 [2024-06-07 16:33:01.893425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.092 [2024-06-07 16:33:01.893432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.092 [2024-06-07 16:33:01.893441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.092 [2024-06-07 16:33:01.893448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.092 [2024-06-07 16:33:01.893458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.092 [2024-06-07 16:33:01.893465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.092 [2024-06-07 16:33:01.893474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.092 [2024-06-07 
16:33:01.893481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.092 [2024-06-07 16:33:01.893490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.092 [2024-06-07 16:33:01.893497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.092 [2024-06-07 16:33:01.893506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.092 [2024-06-07 16:33:01.893516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.092 [2024-06-07 16:33:01.893526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.092 [2024-06-07 16:33:01.893533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.092 [2024-06-07 16:33:01.893542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.092 [2024-06-07 16:33:01.893549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.092 [2024-06-07 16:33:01.893558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.092 [2024-06-07 16:33:01.893566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.092 [2024-06-07 16:33:01.893576] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.092 [2024-06-07 16:33:01.893583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.092 [2024-06-07 16:33:01.893592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.092 [2024-06-07 16:33:01.893599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.092 [2024-06-07 16:33:01.893609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.092 [2024-06-07 16:33:01.893616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.092 [2024-06-07 16:33:01.893625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.092 [2024-06-07 16:33:01.893632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.092 [2024-06-07 16:33:01.893641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.092 [2024-06-07 16:33:01.893649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.092 [2024-06-07 16:33:01.893658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.092 [2024-06-07 16:33:01.893665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.092 [2024-06-07 16:33:01.893674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.092 [2024-06-07 16:33:01.893681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.092 [2024-06-07 16:33:01.893692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.092 [2024-06-07 16:33:01.893699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.092 [2024-06-07 16:33:01.893709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.092 [2024-06-07 16:33:01.893716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.092 [2024-06-07 16:33:01.893727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.092 [2024-06-07 16:33:01.893734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.092 [2024-06-07 16:33:01.893743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.092 [2024-06-07 16:33:01.893751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.092 [2024-06-07 16:33:01.893760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:24:35.092 [2024-06-07 16:33:01.893767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.092 [2024-06-07 16:33:01.893776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.092 [2024-06-07 16:33:01.893783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.092 [2024-06-07 16:33:01.893793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.092 [2024-06-07 16:33:01.893799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.092 [2024-06-07 16:33:01.893809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.092 [2024-06-07 16:33:01.893816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.092 [2024-06-07 16:33:01.893825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.092 [2024-06-07 16:33:01.893833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.092 [2024-06-07 16:33:01.893842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.092 [2024-06-07 16:33:01.893849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.092 [2024-06-07 16:33:01.893858] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.092 [2024-06-07 16:33:01.893865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.092 [2024-06-07 16:33:01.893874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.092 [2024-06-07 16:33:01.893881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.092 [2024-06-07 16:33:01.893890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.092 [2024-06-07 16:33:01.893897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.092 [2024-06-07 16:33:01.893906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.092 [2024-06-07 16:33:01.893913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.092 [2024-06-07 16:33:01.893941] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:35.092 [2024-06-07 16:33:01.893984] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1d2ad60 was disconnected and freed. reset controller. 
00:24:35.092 [2024-06-07 16:33:01.894139] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:35.092 [2024-06-07 16:33:01.894152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.092 [2024-06-07 16:33:01.894161] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:35.092 [2024-06-07 16:33:01.894168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.092 [2024-06-07 16:33:01.894176] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:35.092 [2024-06-07 16:33:01.894183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.092 [2024-06-07 16:33:01.894191] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:35.092 [2024-06-07 16:33:01.894204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.092 [2024-06-07 16:33:01.894211] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cbca60 is same with the state(5) to be set 00:24:35.092 [2024-06-07 16:33:01.894245] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:35.092 [2024-06-07 16:33:01.894254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.092 [2024-06-07 16:33:01.894261] nvme_qpair.c: 223:nvme_admin_qpair_print_command: 
*NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:35.092 [2024-06-07 16:33:01.894268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.092 [2024-06-07 16:33:01.894276] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:35.092 [2024-06-07 16:33:01.894283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.093 [2024-06-07 16:33:01.894291] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:35.093 [2024-06-07 16:33:01.894297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.093 [2024-06-07 16:33:01.894304] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb0020 is same with the state(5) to be set 00:24:35.093 [2024-06-07 16:33:01.894333] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:35.093 [2024-06-07 16:33:01.894341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.093 [2024-06-07 16:33:01.894349] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:35.093 [2024-06-07 16:33:01.894356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.093 [2024-06-07 16:33:01.894364] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 
00:24:35.093 [2024-06-07 16:33:01.894371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.093 [2024-06-07 16:33:01.894382] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:35.093 [2024-06-07 16:33:01.894389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.093 [2024-06-07 16:33:01.894396] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c97100 is same with the state(5) to be set 00:24:35.093 [2024-06-07 16:33:01.894427] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:35.093 [2024-06-07 16:33:01.894436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.093 [2024-06-07 16:33:01.894444] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:35.093 [2024-06-07 16:33:01.894451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.093 [2024-06-07 16:33:01.894459] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:35.093 [2024-06-07 16:33:01.894466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.093 [2024-06-07 16:33:01.894474] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:35.093 [2024-06-07 16:33:01.894481] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.093 [2024-06-07 16:33:01.894487] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af2650 is same with the state(5) to be set 00:24:35.093 [2024-06-07 16:33:01.894508] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:35.093 [2024-06-07 16:33:01.894516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.093 [2024-06-07 16:33:01.894524] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:35.093 [2024-06-07 16:33:01.894531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.093 [2024-06-07 16:33:01.894539] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:35.093 [2024-06-07 16:33:01.894546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.093 [2024-06-07 16:33:01.894554] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:35.093 [2024-06-07 16:33:01.894561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.093 [2024-06-07 16:33:01.894568] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b3c310 is same with the state(5) to be set 00:24:35.093 [2024-06-07 16:33:01.894591] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:35.093 
[2024-06-07 16:33:01.894600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.093 [2024-06-07 16:33:01.894609] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:35.093 [2024-06-07 16:33:01.894615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.093 [2024-06-07 16:33:01.894623] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:35.093 [2024-06-07 16:33:01.894636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.093 [2024-06-07 16:33:01.894644] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:35.093 [2024-06-07 16:33:01.894651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.093 [2024-06-07 16:33:01.894658] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b41280 is same with the state(5) to be set 00:24:35.093 [2024-06-07 16:33:01.894676] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:35.093 [2024-06-07 16:33:01.894684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.093 [2024-06-07 16:33:01.894691] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:35.093 [2024-06-07 16:33:01.894698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.093 [2024-06-07 16:33:01.894706] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:35.093 [2024-06-07 16:33:01.894713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.093 [2024-06-07 16:33:01.894721] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:35.093 [2024-06-07 16:33:01.894728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.093 [2024-06-07 16:33:01.894734] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1d900 is same with the state(5) to be set 00:24:35.093 [2024-06-07 16:33:01.894757] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:35.093 [2024-06-07 16:33:01.894765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.093 [2024-06-07 16:33:01.894773] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:35.093 [2024-06-07 16:33:01.894780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.093 [2024-06-07 16:33:01.894787] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:35.093 [2024-06-07 16:33:01.894794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.093 [2024-06-07 
16:33:01.894802] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:35.093 [2024-06-07 16:33:01.894809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.093 [2024-06-07 16:33:01.894816] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b00c50 is same with the state(5) to be set 00:24:35.093 [2024-06-07 16:33:01.894837] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:35.093 [2024-06-07 16:33:01.894845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.093 [2024-06-07 16:33:01.894853] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:35.093 [2024-06-07 16:33:01.894862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.093 [2024-06-07 16:33:01.894870] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:35.093 [2024-06-07 16:33:01.894877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.093 [2024-06-07 16:33:01.894884] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:35.093 [2024-06-07 16:33:01.894891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.093 [2024-06-07 16:33:01.894898] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x1b3beb0 is same with the state(5) to be set 00:24:35.093 [2024-06-07 16:33:01.894987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.093 [2024-06-07 16:33:01.894998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.093 [2024-06-07 16:33:01.895010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.093 [2024-06-07 16:33:01.895017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.093 [2024-06-07 16:33:01.895026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.093 [2024-06-07 16:33:01.895033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.093 [2024-06-07 16:33:01.895043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.093 [2024-06-07 16:33:01.895050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.093 [2024-06-07 16:33:01.895059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.093 [2024-06-07 16:33:01.895066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.093 [2024-06-07 16:33:01.895076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:35.093 [2024-06-07 16:33:01.895083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.093 [2024-06-07 16:33:01.895092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.093 [2024-06-07 16:33:01.895099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.093 [2024-06-07 16:33:01.895108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.094 [2024-06-07 16:33:01.895116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.094 [2024-06-07 16:33:01.895125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.094 [2024-06-07 16:33:01.895131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.094 [2024-06-07 16:33:01.895140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.094 [2024-06-07 16:33:01.895147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.094 [2024-06-07 16:33:01.895159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.094 [2024-06-07 16:33:01.895166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.094 [2024-06-07 16:33:01.895175] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.094 [2024-06-07 16:33:01.895182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.094 [2024-06-07 16:33:01.895192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.094 [2024-06-07 16:33:01.895199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.094 [2024-06-07 16:33:01.895208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.094 [2024-06-07 16:33:01.895215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.094 [2024-06-07 16:33:01.895224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.094 [2024-06-07 16:33:01.895230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.094 [2024-06-07 16:33:01.895239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.094 [2024-06-07 16:33:01.895246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.094 [2024-06-07 16:33:01.895255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.094 [2024-06-07 16:33:01.895262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.094 [2024-06-07 16:33:01.895271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.094 [2024-06-07 16:33:01.895278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.094 [2024-06-07 16:33:01.895288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.094 [2024-06-07 16:33:01.895295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.094 [2024-06-07 16:33:01.895304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.094 [2024-06-07 16:33:01.895310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.094 [2024-06-07 16:33:01.895320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.094 [2024-06-07 16:33:01.895326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.094 [2024-06-07 16:33:01.895336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.094 [2024-06-07 16:33:01.895342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.094 [2024-06-07 16:33:01.895352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.094 [2024-06-07 16:33:01.895361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.094 [2024-06-07 16:33:01.895373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.094 [2024-06-07 16:33:01.895379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.094 [2024-06-07 16:33:01.895389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.094 [2024-06-07 16:33:01.895396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.094 [2024-06-07 16:33:01.895410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.094 [2024-06-07 16:33:01.895418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.094 [2024-06-07 16:33:01.895426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.094 [2024-06-07 16:33:01.895434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.094 [2024-06-07 16:33:01.895443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.094 [2024-06-07 16:33:01.895450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.094 
[2024-06-07 16:33:01.895459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.094 [2024-06-07 16:33:01.895466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.094 [2024-06-07 16:33:01.895475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.094 [2024-06-07 16:33:01.895481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.094 [2024-06-07 16:33:01.895490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.094 [2024-06-07 16:33:01.895497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.094 [2024-06-07 16:33:01.895506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.094 [2024-06-07 16:33:01.895513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.094 [2024-06-07 16:33:01.895522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.094 [2024-06-07 16:33:01.895529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.094 [2024-06-07 16:33:01.895538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.094 [2024-06-07 16:33:01.895545] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.094 [2024-06-07 16:33:01.896593] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x934560 is same with the state(5) to be set 00:24:35.095 [2024-06-07 16:33:01.905734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.095 [2024-06-07 16:33:01.905766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.095 [2024-06-07 16:33:01.905777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.095 [2024-06-07 16:33:01.905784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.095 [2024-06-07 16:33:01.905799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.095 [2024-06-07 16:33:01.905806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.095 [2024-06-07 16:33:01.905816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.095 [2024-06-07 16:33:01.905823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.095 [2024-06-07 16:33:01.905832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.095 [2024-06-07 16:33:01.905840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.095 [2024-06-07 16:33:01.905849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.095 [2024-06-07 16:33:01.905856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.095 [2024-06-07 16:33:01.905866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.095 [2024-06-07 16:33:01.905874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.095 [2024-06-07 16:33:01.905883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.095 [2024-06-07 16:33:01.905891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.095 [2024-06-07 16:33:01.905901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.095 [2024-06-07 16:33:01.905908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.095 [2024-06-07 16:33:01.905917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.095 
[2024-06-07 16:33:01.905924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.095 [2024-06-07 16:33:01.905934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.095 [2024-06-07 16:33:01.905941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.095 [2024-06-07 16:33:01.905951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.095 [2024-06-07 16:33:01.905958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.095 [2024-06-07 16:33:01.905968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.095 [2024-06-07 16:33:01.905975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.095 [2024-06-07 16:33:01.905985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.095 [2024-06-07 16:33:01.905991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.095 [2024-06-07 16:33:01.906001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.095 [2024-06-07 16:33:01.906009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.095 [2024-06-07 16:33:01.906018] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.095 [2024-06-07 16:33:01.906026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.095 [2024-06-07 16:33:01.906035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.095 [2024-06-07 16:33:01.906042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.095 [2024-06-07 16:33:01.906052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.095 [2024-06-07 16:33:01.906059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.095 [2024-06-07 16:33:01.906068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.095 [2024-06-07 16:33:01.906075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.095 [2024-06-07 16:33:01.906084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.095 [2024-06-07 16:33:01.906091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.095 [2024-06-07 16:33:01.906100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.095 [2024-06-07 16:33:01.906108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.095 [2024-06-07 16:33:01.906118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.096 [2024-06-07 16:33:01.906125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.096 [2024-06-07 16:33:01.906134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.096 [2024-06-07 16:33:01.906141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.096 [2024-06-07 16:33:01.906150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.096 [2024-06-07 16:33:01.906158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.096 [2024-06-07 16:33:01.906167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.096 [2024-06-07 16:33:01.906174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.096 [2024-06-07 16:33:01.906183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.096 [2024-06-07 16:33:01.906190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.096 [2024-06-07 16:33:01.906199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.096 [2024-06-07 16:33:01.906206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.096 [2024-06-07 16:33:01.906217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.096 [2024-06-07 16:33:01.906224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.096 [2024-06-07 16:33:01.906233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.096 [2024-06-07 16:33:01.906240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.096 [2024-06-07 16:33:01.906249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.096 [2024-06-07 16:33:01.906256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.096 [2024-06-07 16:33:01.906265] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d26f10 is same with the state(5) to be set 00:24:35.096 [2024-06-07 16:33:01.906313] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1d26f10 was disconnected and freed. reset controller. 
00:24:35.096 [2024-06-07 16:33:01.907856] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:24:35.096 [2024-06-07 16:33:01.907890] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b41280 (9): Bad file descriptor 00:24:35.096 [2024-06-07 16:33:01.907925] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cbca60 (9): Bad file descriptor 00:24:35.096 [2024-06-07 16:33:01.907964] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:35.096 [2024-06-07 16:33:01.907976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.096 [2024-06-07 16:33:01.907987] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:35.096 [2024-06-07 16:33:01.907995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.096 [2024-06-07 16:33:01.908004] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:35.096 [2024-06-07 16:33:01.908011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.096 [2024-06-07 16:33:01.908019] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:35.096 [2024-06-07 16:33:01.908025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.096 [2024-06-07 16:33:01.908032] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c89f70 is same with 
the state(5) to be set 00:24:35.096 [2024-06-07 16:33:01.908050] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cb0020 (9): Bad file descriptor 00:24:35.096 [2024-06-07 16:33:01.908066] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c97100 (9): Bad file descriptor 00:24:35.096 [2024-06-07 16:33:01.908079] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1af2650 (9): Bad file descriptor 00:24:35.096 [2024-06-07 16:33:01.908093] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b3c310 (9): Bad file descriptor 00:24:35.096 [2024-06-07 16:33:01.908110] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1d900 (9): Bad file descriptor 00:24:35.096 [2024-06-07 16:33:01.908125] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b00c50 (9): Bad file descriptor 00:24:35.096 [2024-06-07 16:33:01.908145] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b3beb0 (9): Bad file descriptor 00:24:35.096 [2024-06-07 16:33:01.909669] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:24:35.096 [2024-06-07 16:33:01.910794] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:35.096 [2024-06-07 16:33:01.910832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b41280 with addr=10.0.0.2, port=4420 00:24:35.096 [2024-06-07 16:33:01.910845] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b41280 is same with the state(5) to be set 00:24:35.096 [2024-06-07 16:33:01.911241] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:35.096 [2024-06-07 16:33:01.911253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cbca60 with 
addr=10.0.0.2, port=4420 00:24:35.096 [2024-06-07 16:33:01.911261] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cbca60 is same with the state(5) to be set 00:24:35.096 [2024-06-07 16:33:01.911626] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:24:35.096 [2024-06-07 16:33:01.911674] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:24:35.096 [2024-06-07 16:33:01.911714] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:24:35.096 [2024-06-07 16:33:01.911753] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:24:35.096 [2024-06-07 16:33:01.911801] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:24:35.096 [2024-06-07 16:33:01.911876] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:24:35.096 [2024-06-07 16:33:01.911892] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b41280 (9): Bad file descriptor 00:24:35.096 [2024-06-07 16:33:01.911905] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cbca60 (9): Bad file descriptor 00:24:35.096 [2024-06-07 16:33:01.911959] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:24:35.096 [2024-06-07 16:33:01.912074] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:24:35.096 [2024-06-07 16:33:01.912096] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:24:35.096 [2024-06-07 16:33:01.912104] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:24:35.096 [2024-06-07 16:33:01.912112] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 
00:24:35.096 [2024-06-07 16:33:01.912128] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:24:35.096 [2024-06-07 16:33:01.912135] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:24:35.096 [2024-06-07 16:33:01.912142] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:24:35.096 [2024-06-07 16:33:01.912202] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:35.096 [2024-06-07 16:33:01.912211] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:35.096 [2024-06-07 16:33:01.917882] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c89f70 (9): Bad file descriptor 00:24:35.096 [2024-06-07 16:33:01.918037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.096 [2024-06-07 16:33:01.918049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.096 [2024-06-07 16:33:01.918066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.096 [2024-06-07 16:33:01.918074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.096 [2024-06-07 16:33:01.918090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.096 [2024-06-07 16:33:01.918097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.096 [2024-06-07 16:33:01.918107] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.096 [2024-06-07 16:33:01.918114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.096 [2024-06-07 16:33:01.918124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.096 [2024-06-07 16:33:01.918131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.096 [2024-06-07 16:33:01.918140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.096 [2024-06-07 16:33:01.918147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.096 [2024-06-07 16:33:01.918156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.096 [2024-06-07 16:33:01.918163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.096 [2024-06-07 16:33:01.918173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.096 [2024-06-07 16:33:01.918180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.096 [2024-06-07 16:33:01.918189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.096 [2024-06-07 16:33:01.918196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.096 [2024-06-07 16:33:01.918205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.096 [2024-06-07 16:33:01.918212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.096 [2024-06-07 16:33:01.918222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.097 [2024-06-07 16:33:01.918229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.097 [2024-06-07 16:33:01.918238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.097 [2024-06-07 16:33:01.918245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.097 [2024-06-07 16:33:01.918254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.097 [2024-06-07 16:33:01.918261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.097 [2024-06-07 16:33:01.918270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.097 [2024-06-07 16:33:01.918278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.097 [2024-06-07 16:33:01.918287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:24:35.097 [2024-06-07 16:33:01.918295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.097 [2024-06-07 16:33:01.918304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.097 [2024-06-07 16:33:01.918312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.097 [2024-06-07 16:33:01.918321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.097 [2024-06-07 16:33:01.918328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.097 [2024-06-07 16:33:01.918337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.097 [2024-06-07 16:33:01.918344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.097 [2024-06-07 16:33:01.918353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.097 [2024-06-07 16:33:01.918360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.097 [2024-06-07 16:33:01.918369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.097 [2024-06-07 16:33:01.918376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.097 [2024-06-07 16:33:01.918385] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.097 [2024-06-07 16:33:01.918392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.097 [2024-06-07 16:33:01.918406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.097 [2024-06-07 16:33:01.918413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.097 [2024-06-07 16:33:01.918422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.097 [2024-06-07 16:33:01.918429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.097 [2024-06-07 16:33:01.918438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.097 [2024-06-07 16:33:01.918445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.097 [2024-06-07 16:33:01.918454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.097 [2024-06-07 16:33:01.918462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.097 [2024-06-07 16:33:01.918471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.097 [2024-06-07 16:33:01.918478] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.097 [2024-06-07 16:33:01.918487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.097 [2024-06-07 16:33:01.918494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.097 [2024-06-07 16:33:01.918505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.097 [2024-06-07 16:33:01.918512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.097 [2024-06-07 16:33:01.918521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.097 [2024-06-07 16:33:01.918528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.097 [2024-06-07 16:33:01.918537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.097 [2024-06-07 16:33:01.918544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.097 [2024-06-07 16:33:01.918553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.097 [2024-06-07 16:33:01.918560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.097 [2024-06-07 16:33:01.918569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.097 [2024-06-07 16:33:01.918576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.097 [2024-06-07 16:33:01.918585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.097 [2024-06-07 16:33:01.918592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.097 [2024-06-07 16:33:01.918602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.097 [2024-06-07 16:33:01.918608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.097 [2024-06-07 16:33:01.918617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.097 [2024-06-07 16:33:01.918624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.097 [2024-06-07 16:33:01.918634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.097 [2024-06-07 16:33:01.918641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.097 [2024-06-07 16:33:01.918649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.097 [2024-06-07 16:33:01.918656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.097 [2024-06-07 
16:33:01.918665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.097 [2024-06-07 16:33:01.918672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.097 [2024-06-07 16:33:01.918682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.097 [2024-06-07 16:33:01.918689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.097 [2024-06-07 16:33:01.918697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.097 [2024-06-07 16:33:01.918706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.097 [2024-06-07 16:33:01.918716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.097 [2024-06-07 16:33:01.918723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.097 [2024-06-07 16:33:01.918733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.097 [2024-06-07 16:33:01.918740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.097 [2024-06-07 16:33:01.918749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.097 [2024-06-07 16:33:01.918756] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.097 [2024-06-07 16:33:01.918765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.097 [2024-06-07 16:33:01.918772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.097 [2024-06-07 16:33:01.918782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.097 [2024-06-07 16:33:01.918789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.097 [2024-06-07 16:33:01.918797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.097 [2024-06-07 16:33:01.918805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.097 [2024-06-07 16:33:01.918814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.097 [2024-06-07 16:33:01.918821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.097 [2024-06-07 16:33:01.918830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.097 [2024-06-07 16:33:01.918836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.097 [2024-06-07 16:33:01.918846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 
nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.097 [2024-06-07 16:33:01.918853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.097 [2024-06-07 16:33:01.918862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.097 [2024-06-07 16:33:01.918869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.097 [2024-06-07 16:33:01.918878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.098 [2024-06-07 16:33:01.918885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.098 [2024-06-07 16:33:01.918894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.098 [2024-06-07 16:33:01.918902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.098 [2024-06-07 16:33:01.918911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.098 [2024-06-07 16:33:01.918920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.098 [2024-06-07 16:33:01.918929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.098 [2024-06-07 16:33:01.918936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:35.098 [2024-06-07 16:33:01.918946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.098 [2024-06-07 16:33:01.918953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.098 [2024-06-07 16:33:01.918962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.098 [2024-06-07 16:33:01.918969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.098 [2024-06-07 16:33:01.918978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.098 [2024-06-07 16:33:01.918985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.098 [2024-06-07 16:33:01.918994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.098 [2024-06-07 16:33:01.919001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.098 [2024-06-07 16:33:01.919010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.098 [2024-06-07 16:33:01.919017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.098 [2024-06-07 16:33:01.919026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.098 [2024-06-07 16:33:01.919033] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.098 [2024-06-07 16:33:01.919042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.098 [2024-06-07 16:33:01.919049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.098 [2024-06-07 16:33:01.919058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.098 [2024-06-07 16:33:01.919065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.098 [2024-06-07 16:33:01.919074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.098 [2024-06-07 16:33:01.919082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.098 [2024-06-07 16:33:01.919091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.098 [2024-06-07 16:33:01.919098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.098 [2024-06-07 16:33:01.919106] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d25c70 is same with the state(5) to be set 00:24:35.098 [2024-06-07 16:33:01.920390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.098 [2024-06-07 16:33:01.920413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.098 [2024-06-07 16:33:01.920426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.098 [2024-06-07 16:33:01.920435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.098 [2024-06-07 16:33:01.920446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.098 [2024-06-07 16:33:01.920454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.098 [2024-06-07 16:33:01.920465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.098 [2024-06-07 16:33:01.920474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.098 [2024-06-07 16:33:01.920485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.098 [2024-06-07 16:33:01.920494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.098 [2024-06-07 16:33:01.920505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.098 [2024-06-07 16:33:01.920513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.098 [2024-06-07 16:33:01.920524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:24:35.098 [2024-06-07 16:33:01.920532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.098 [2024-06-07 16:33:01.920541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.098 [2024-06-07 16:33:01.920548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.098 [2024-06-07 16:33:01.920557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.098 [2024-06-07 16:33:01.920564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.098 [2024-06-07 16:33:01.920573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.098 [2024-06-07 16:33:01.920581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.098 [2024-06-07 16:33:01.920590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.098 [2024-06-07 16:33:01.920598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.098 [2024-06-07 16:33:01.920607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.098 [2024-06-07 16:33:01.920614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.098 [2024-06-07 16:33:01.920623] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.098 [2024-06-07 16:33:01.920630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.098 [2024-06-07 16:33:01.920642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.098 [2024-06-07 16:33:01.920649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.098 [2024-06-07 16:33:01.920659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.098 [2024-06-07 16:33:01.920666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.098 [2024-06-07 16:33:01.920675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.098 [2024-06-07 16:33:01.920682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.098 [2024-06-07 16:33:01.920691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.098 [2024-06-07 16:33:01.920699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.098 [2024-06-07 16:33:01.920708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.098 [2024-06-07 16:33:01.920715] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.098 [2024-06-07 16:33:01.920724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.099 [2024-06-07 16:33:01.920731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.099 [2024-06-07 16:33:01.920740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.099 [2024-06-07 16:33:01.920748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.099 [2024-06-07 16:33:01.920757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.099 [2024-06-07 16:33:01.920764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.099 [2024-06-07 16:33:01.920773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.099 [2024-06-07 16:33:01.920780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.099 [2024-06-07 16:33:01.920790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.099 [2024-06-07 16:33:01.920797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.099 [2024-06-07 16:33:01.920806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.099 [2024-06-07 16:33:01.920813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.099 [2024-06-07 16:33:01.920823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.099 [2024-06-07 16:33:01.920830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.099 [2024-06-07 16:33:01.920839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.099 [2024-06-07 16:33:01.920851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.099 [2024-06-07 16:33:01.920861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.099 [2024-06-07 16:33:01.920867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.099 [2024-06-07 16:33:01.920877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.099 [2024-06-07 16:33:01.920884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.099 [2024-06-07 16:33:01.920893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.099 [2024-06-07 16:33:01.920900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.099 [2024-06-07 
16:33:01.920909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.099 [2024-06-07 16:33:01.920917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.099 [2024-06-07 16:33:01.920926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.099 [2024-06-07 16:33:01.920933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.099 [2024-06-07 16:33:01.920942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.099 [2024-06-07 16:33:01.920949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.099 [2024-06-07 16:33:01.920958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.099 [2024-06-07 16:33:01.920965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.099 [2024-06-07 16:33:01.920975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.099 [2024-06-07 16:33:01.920982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.099 [2024-06-07 16:33:01.920991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.099 [2024-06-07 16:33:01.920998] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.099 [2024-06-07 16:33:01.921007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.099 [2024-06-07 16:33:01.921014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.099 [2024-06-07 16:33:01.921023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.099 [2024-06-07 16:33:01.921031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.099 [2024-06-07 16:33:01.921040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.099 [2024-06-07 16:33:01.921047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.099 [2024-06-07 16:33:01.921058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.099 [2024-06-07 16:33:01.921066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.099 [2024-06-07 16:33:01.921075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.099 [2024-06-07 16:33:01.921082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.099 [2024-06-07 16:33:01.921091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 
nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.099 [2024-06-07 16:33:01.921098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.099 [2024-06-07 16:33:01.921107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.099 [2024-06-07 16:33:01.921115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.099 [2024-06-07 16:33:01.921124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.099 [2024-06-07 16:33:01.921131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.099 [2024-06-07 16:33:01.921140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.099 [2024-06-07 16:33:01.921147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.099 [2024-06-07 16:33:01.921156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.099 [2024-06-07 16:33:01.921164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.099 [2024-06-07 16:33:01.921173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.099 [2024-06-07 16:33:01.921181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:35.099 [2024-06-07 16:33:01.921190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.099 [2024-06-07 16:33:01.921199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.099 [2024-06-07 16:33:01.921209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.099 [2024-06-07 16:33:01.921218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.099 [2024-06-07 16:33:01.921228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.099 [2024-06-07 16:33:01.921235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.099 [2024-06-07 16:33:01.921246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.099 [2024-06-07 16:33:01.921253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.099 [2024-06-07 16:33:01.921262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.099 [2024-06-07 16:33:01.921271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.099 [2024-06-07 16:33:01.921280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.099 [2024-06-07 16:33:01.921287] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.099 [2024-06-07 16:33:01.921296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.099 [2024-06-07 16:33:01.921303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.099 [2024-06-07 16:33:01.921312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.099 [2024-06-07 16:33:01.921319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.099 [2024-06-07 16:33:01.921329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.099 [2024-06-07 16:33:01.921336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.099 [2024-06-07 16:33:01.921345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.099 [2024-06-07 16:33:01.921353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.099 [2024-06-07 16:33:01.921362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.099 [2024-06-07 16:33:01.921369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.099 [2024-06-07 16:33:01.921378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.099 [2024-06-07 16:33:01.921386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.100 [2024-06-07 16:33:01.921395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.100 [2024-06-07 16:33:01.921407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.100 [2024-06-07 16:33:01.921416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.100 [2024-06-07 16:33:01.921424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.100 [2024-06-07 16:33:01.921433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.100 [2024-06-07 16:33:01.921440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.100 [2024-06-07 16:33:01.921449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.100 [2024-06-07 16:33:01.921456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.100 [2024-06-07 16:33:01.921465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.100 [2024-06-07 16:33:01.921472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:24:35.100 [2024-06-07 16:33:01.921483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.100 [2024-06-07 16:33:01.921490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.100 [2024-06-07 16:33:01.921498] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d283a0 is same with the state(5) to be set 00:24:35.100 [2024-06-07 16:33:01.922777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.100 [2024-06-07 16:33:01.922790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.100 [2024-06-07 16:33:01.922800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.100 [2024-06-07 16:33:01.922807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.100 [2024-06-07 16:33:01.922818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.100 [2024-06-07 16:33:01.922825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.100 [2024-06-07 16:33:01.922834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.100 [2024-06-07 16:33:01.922841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.100 [2024-06-07 16:33:01.922850] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.100 [2024-06-07 16:33:01.922858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.100 [2024-06-07 16:33:01.922868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.100 [2024-06-07 16:33:01.922875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.100 [2024-06-07 16:33:01.922886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.100 [2024-06-07 16:33:01.922894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.100 [2024-06-07 16:33:01.922904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.100 [2024-06-07 16:33:01.922912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.100 [2024-06-07 16:33:01.922923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.100 [2024-06-07 16:33:01.922933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.100 [2024-06-07 16:33:01.922945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.100 [2024-06-07 16:33:01.922954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.100 [2024-06-07 16:33:01.922966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.100 [2024-06-07 16:33:01.922974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.100 [2024-06-07 16:33:01.922987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.100 [2024-06-07 16:33:01.922996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.100 [2024-06-07 16:33:01.923005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.100 [2024-06-07 16:33:01.923012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.100 [2024-06-07 16:33:01.923022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.100 [2024-06-07 16:33:01.923029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.100 [2024-06-07 16:33:01.923039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.100 [2024-06-07 16:33:01.923046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.100 [2024-06-07 16:33:01.923055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:24:35.100 [2024-06-07 16:33:01.923063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.100 [2024-06-07 16:33:01.923072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.100 [2024-06-07 16:33:01.923079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.100 [2024-06-07 16:33:01.923089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.100 [2024-06-07 16:33:01.923096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.100 [2024-06-07 16:33:01.923105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.100 [2024-06-07 16:33:01.923112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.100 [2024-06-07 16:33:01.923121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.100 [2024-06-07 16:33:01.923128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.100 [2024-06-07 16:33:01.923137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.100 [2024-06-07 16:33:01.923144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.100 [2024-06-07 16:33:01.923153] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.100 [2024-06-07 16:33:01.923160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.100 [2024-06-07 16:33:01.923169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.100 [2024-06-07 16:33:01.923176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.100 [2024-06-07 16:33:01.923186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.100 [2024-06-07 16:33:01.923194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.100 [2024-06-07 16:33:01.923203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.100 [2024-06-07 16:33:01.923210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.100 [2024-06-07 16:33:01.923220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.100 [2024-06-07 16:33:01.923227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.100 [2024-06-07 16:33:01.923236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.100 [2024-06-07 16:33:01.923243] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.100 [2024-06-07 16:33:01.923252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.100 [2024-06-07 16:33:01.923259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.100 [2024-06-07 16:33:01.923268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.100 [2024-06-07 16:33:01.923276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.100 [2024-06-07 16:33:01.923285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.100 [2024-06-07 16:33:01.923292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.100 [2024-06-07 16:33:01.923302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.100 [2024-06-07 16:33:01.923309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.100 [2024-06-07 16:33:01.923318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.100 [2024-06-07 16:33:01.923326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.100 [2024-06-07 16:33:01.923335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.100 [2024-06-07 16:33:01.923342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.100 [2024-06-07 16:33:01.923351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.101 [2024-06-07 16:33:01.923358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.101 [2024-06-07 16:33:01.923367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.101 [2024-06-07 16:33:01.923374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.101 [2024-06-07 16:33:01.923383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.101 [2024-06-07 16:33:01.923391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.101 [2024-06-07 16:33:01.923405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.101 [2024-06-07 16:33:01.923413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.101 [2024-06-07 16:33:01.923422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.101 [2024-06-07 16:33:01.923430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.101 [2024-06-07 
16:33:01.923439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.101 [2024-06-07 16:33:01.923447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.101 [2024-06-07 16:33:01.923456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.101 [2024-06-07 16:33:01.923463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.101 [2024-06-07 16:33:01.923473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.101 [2024-06-07 16:33:01.923480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.101 [2024-06-07 16:33:01.923489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.101 [2024-06-07 16:33:01.923496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.101 [2024-06-07 16:33:01.923505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.101 [2024-06-07 16:33:01.923512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.101 [2024-06-07 16:33:01.923521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.101 [2024-06-07 16:33:01.923529] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.101 [2024-06-07 16:33:01.923538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.101 [2024-06-07 16:33:01.923545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.101 [2024-06-07 16:33:01.923554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.101 [2024-06-07 16:33:01.923563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.101 [2024-06-07 16:33:01.923573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.101 [2024-06-07 16:33:01.923580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.101 [2024-06-07 16:33:01.923589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.101 [2024-06-07 16:33:01.923596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.101 [2024-06-07 16:33:01.923607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.101 [2024-06-07 16:33:01.923616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.101 [2024-06-07 16:33:01.923626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 
nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.101 [2024-06-07 16:33:01.923633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.101 [2024-06-07 16:33:01.923643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.101 [2024-06-07 16:33:01.923650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.101 [2024-06-07 16:33:01.923659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.101 [2024-06-07 16:33:01.923666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.101 [2024-06-07 16:33:01.923676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.101 [2024-06-07 16:33:01.923684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.101 [2024-06-07 16:33:01.923692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.101 [2024-06-07 16:33:01.923700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.101 [2024-06-07 16:33:01.923711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.101 [2024-06-07 16:33:01.923719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:35.101 [2024-06-07 16:33:01.923728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:35.101 [2024-06-07 16:33:01.923736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:35.101 [2024-06-07 16:33:01.923745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:35.101 [2024-06-07 16:33:01.923752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:35.101 [2024-06-07 16:33:01.923761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:35.101 [2024-06-07 16:33:01.923770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:35.101 [2024-06-07 16:33:01.923779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:35.101 [2024-06-07 16:33:01.923786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:35.101 [2024-06-07 16:33:01.923796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:35.101 [2024-06-07 16:33:01.923804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:35.101 [2024-06-07 16:33:01.923814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:35.101 [2024-06-07 16:33:01.923821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:35.101 [2024-06-07 16:33:01.923832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:35.101 [2024-06-07 16:33:01.923840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:35.101 [2024-06-07 16:33:01.923850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:35.101 [2024-06-07 16:33:01.923857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:35.101 [2024-06-07 16:33:01.923866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:35.101 [2024-06-07 16:33:01.923875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:35.101 [2024-06-07 16:33:01.923883] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aed620 is same with the state(5) to be set
00:24:35.101 [2024-06-07 16:33:01.925149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:35.101 [2024-06-07 16:33:01.925164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:35.101 [2024-06-07 16:33:01.925177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:35.101 [2024-06-07 16:33:01.925186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:35.101 [2024-06-07 16:33:01.925197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:35.101 [2024-06-07 16:33:01.925205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:35.101 [2024-06-07 16:33:01.925216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:35.101 [2024-06-07 16:33:01.925225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:35.101 [2024-06-07 16:33:01.925237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:35.101 [2024-06-07 16:33:01.925244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:35.101 [2024-06-07 16:33:01.925253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:35.101 [2024-06-07 16:33:01.925260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:35.101 [2024-06-07 16:33:01.925269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:35.101 [2024-06-07 16:33:01.925276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:35.101 [2024-06-07 16:33:01.925286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:35.101 [2024-06-07 16:33:01.925293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:35.101 [2024-06-07 16:33:01.925302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:35.101 [2024-06-07 16:33:01.925309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:35.102 [2024-06-07 16:33:01.925321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:35.102 [2024-06-07 16:33:01.925328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:35.102 [2024-06-07 16:33:01.925338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:35.102 [2024-06-07 16:33:01.925345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:35.102 [2024-06-07 16:33:01.925354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:35.102 [2024-06-07 16:33:01.925361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:35.102 [2024-06-07 16:33:01.925370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:35.102 [2024-06-07 16:33:01.925378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:35.102 [2024-06-07 16:33:01.925387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:35.102 [2024-06-07 16:33:01.925394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:35.102 [2024-06-07 16:33:01.925407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:35.102 [2024-06-07 16:33:01.925415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:35.102 [2024-06-07 16:33:01.925424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:35.102 [2024-06-07 16:33:01.925431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:35.102 [2024-06-07 16:33:01.925440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:35.102 [2024-06-07 16:33:01.925447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:35.102 [2024-06-07 16:33:01.925457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:35.102 [2024-06-07 16:33:01.925464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:35.102 [2024-06-07 16:33:01.925473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:35.102 [2024-06-07 16:33:01.925480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:35.102 [2024-06-07 16:33:01.925489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:35.102 [2024-06-07 16:33:01.925496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:35.102 [2024-06-07 16:33:01.925505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:35.102 [2024-06-07 16:33:01.925512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:35.102 [2024-06-07 16:33:01.925521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:35.102 [2024-06-07 16:33:01.925530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:35.102 [2024-06-07 16:33:01.925540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:35.102 [2024-06-07 16:33:01.925547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:35.102 [2024-06-07 16:33:01.925556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:35.102 [2024-06-07 16:33:01.925563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:35.102 [2024-06-07 16:33:01.925572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:35.102 [2024-06-07 16:33:01.925579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:35.102 [2024-06-07 16:33:01.925588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:35.102 [2024-06-07 16:33:01.925595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:35.102 [2024-06-07 16:33:01.925605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:35.102 [2024-06-07 16:33:01.925612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:35.102 [2024-06-07 16:33:01.925621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:35.102 [2024-06-07 16:33:01.925628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:35.102 [2024-06-07 16:33:01.925637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:35.102 [2024-06-07 16:33:01.925644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:35.102 [2024-06-07 16:33:01.925654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:35.102 [2024-06-07 16:33:01.925661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:35.102 [2024-06-07 16:33:01.925670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:35.102 [2024-06-07 16:33:01.925677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:35.102 [2024-06-07 16:33:01.925686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:35.102 [2024-06-07 16:33:01.925694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:35.102 [2024-06-07 16:33:01.925703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:35.102 [2024-06-07 16:33:01.925710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:35.102 [2024-06-07 16:33:01.925719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:35.102 [2024-06-07 16:33:01.925726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:35.102 [2024-06-07 16:33:01.925735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:35.102 [2024-06-07 16:33:01.925744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:35.102 [2024-06-07 16:33:01.925754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:35.102 [2024-06-07 16:33:01.925761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:35.102 [2024-06-07 16:33:01.925770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:35.102 [2024-06-07 16:33:01.925777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:35.102 [2024-06-07 16:33:01.925786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:35.102 [2024-06-07 16:33:01.925793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:35.102 [2024-06-07 16:33:01.925802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:35.102 [2024-06-07 16:33:01.925810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:35.102 [2024-06-07 16:33:01.925819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:35.102 [2024-06-07 16:33:01.925826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:35.102 [2024-06-07 16:33:01.925835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:35.102 [2024-06-07 16:33:01.925841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:35.102 [2024-06-07 16:33:01.925851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:35.102 [2024-06-07 16:33:01.925858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:35.102 [2024-06-07 16:33:01.925867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:35.102 [2024-06-07 16:33:01.925874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:35.102 [2024-06-07 16:33:01.925883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:35.102 [2024-06-07 16:33:01.925890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:35.102 [2024-06-07 16:33:01.925899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:35.102 [2024-06-07 16:33:01.925906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:35.102 [2024-06-07 16:33:01.925915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:35.102 [2024-06-07 16:33:01.925922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:35.102 [2024-06-07 16:33:01.925931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:35.102 [2024-06-07 16:33:01.925938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:35.102 [2024-06-07 16:33:01.925949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.102 [2024-06-07 16:33:01.925956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.102 [2024-06-07 16:33:01.925965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.102 [2024-06-07 16:33:01.925972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.103 [2024-06-07 16:33:01.925982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.103 [2024-06-07 16:33:01.925989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.103 [2024-06-07 16:33:01.925998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.103 [2024-06-07 16:33:01.926005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.103 [2024-06-07 16:33:01.926014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.103 [2024-06-07 16:33:01.926021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.103 [2024-06-07 16:33:01.926030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.103 [2024-06-07 16:33:01.926037] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.103 [2024-06-07 16:33:01.926046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.103 [2024-06-07 16:33:01.926053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.103 [2024-06-07 16:33:01.926062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.103 [2024-06-07 16:33:01.926069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.103 [2024-06-07 16:33:01.926078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.103 [2024-06-07 16:33:01.926085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.103 [2024-06-07 16:33:01.926094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.103 [2024-06-07 16:33:01.926102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.103 [2024-06-07 16:33:01.926110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.103 [2024-06-07 16:33:01.926118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.103 [2024-06-07 16:33:01.926127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.103 [2024-06-07 16:33:01.926134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.103 [2024-06-07 16:33:01.926143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.103 [2024-06-07 16:33:01.926151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.103 [2024-06-07 16:33:01.926161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.103 [2024-06-07 16:33:01.926168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.103 [2024-06-07 16:33:01.926177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.103 [2024-06-07 16:33:01.926184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.103 [2024-06-07 16:33:01.926193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.103 [2024-06-07 16:33:01.926200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.103 [2024-06-07 16:33:01.926209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.103 [2024-06-07 16:33:01.926216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:24:35.103 [2024-06-07 16:33:01.926225] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d28580 is same with the state(5) to be set 00:24:35.369 [2024-06-07 16:33:01.927482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.369 [2024-06-07 16:33:01.927495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.369 [2024-06-07 16:33:01.927506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.369 [2024-06-07 16:33:01.927514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.369 [2024-06-07 16:33:01.927524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.369 [2024-06-07 16:33:01.927531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.369 [2024-06-07 16:33:01.927540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.369 [2024-06-07 16:33:01.927547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.369 [2024-06-07 16:33:01.927557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.369 [2024-06-07 16:33:01.927564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.369 [2024-06-07 16:33:01.927573] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.369 [2024-06-07 16:33:01.927581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.369 [2024-06-07 16:33:01.927590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.369 [2024-06-07 16:33:01.927597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.369 [2024-06-07 16:33:01.927606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.369 [2024-06-07 16:33:01.927617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.369 [2024-06-07 16:33:01.927627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.369 [2024-06-07 16:33:01.927634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.369 [2024-06-07 16:33:01.927643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.369 [2024-06-07 16:33:01.927650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.369 [2024-06-07 16:33:01.927660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.369 [2024-06-07 16:33:01.927667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.369 [2024-06-07 16:33:01.927676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.369 [2024-06-07 16:33:01.927683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.369 [2024-06-07 16:33:01.927692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.369 [2024-06-07 16:33:01.927699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.369 [2024-06-07 16:33:01.927709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.369 [2024-06-07 16:33:01.927716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.369 [2024-06-07 16:33:01.927725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.369 [2024-06-07 16:33:01.927732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.369 [2024-06-07 16:33:01.927742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.369 [2024-06-07 16:33:01.927748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.369 [2024-06-07 16:33:01.927758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:24:35.369 [2024-06-07 16:33:01.927765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.369 [2024-06-07 16:33:01.927774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.369 [2024-06-07 16:33:01.927782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.369 [2024-06-07 16:33:01.927791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.369 [2024-06-07 16:33:01.927799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.369 [2024-06-07 16:33:01.927808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.369 [2024-06-07 16:33:01.927815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.369 [2024-06-07 16:33:01.927830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.369 [2024-06-07 16:33:01.927837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.369 [2024-06-07 16:33:01.927846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.369 [2024-06-07 16:33:01.927853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.369 [2024-06-07 16:33:01.927862] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.369 [2024-06-07 16:33:01.927869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.369 [2024-06-07 16:33:01.927878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.369 [2024-06-07 16:33:01.927885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.369 [2024-06-07 16:33:01.927894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.369 [2024-06-07 16:33:01.927901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.369 [2024-06-07 16:33:01.927911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.369 [2024-06-07 16:33:01.927918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.370 [2024-06-07 16:33:01.927927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.370 [2024-06-07 16:33:01.927934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.370 [2024-06-07 16:33:01.927943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.370 [2024-06-07 16:33:01.927950] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.370 [2024-06-07 16:33:01.927960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.370 [2024-06-07 16:33:01.927966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.370 [2024-06-07 16:33:01.927976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.370 [2024-06-07 16:33:01.927983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.370 [2024-06-07 16:33:01.927992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.370 [2024-06-07 16:33:01.927999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.370 [2024-06-07 16:33:01.928009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.370 [2024-06-07 16:33:01.928016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.370 [2024-06-07 16:33:01.928025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.370 [2024-06-07 16:33:01.928034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.370 [2024-06-07 16:33:01.928043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.370 [2024-06-07 16:33:01.928050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.370 [2024-06-07 16:33:01.928060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.370 [2024-06-07 16:33:01.928066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.370 [2024-06-07 16:33:01.928075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.370 [2024-06-07 16:33:01.928082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.370 [2024-06-07 16:33:01.928091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.370 [2024-06-07 16:33:01.928099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.370 [2024-06-07 16:33:01.928108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.370 [2024-06-07 16:33:01.928116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.370 [2024-06-07 16:33:01.928125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.370 [2024-06-07 16:33:01.928131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.370 [2024-06-07 
16:33:01.928141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.370 [2024-06-07 16:33:01.928148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.370 [2024-06-07 16:33:01.928157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.370 [2024-06-07 16:33:01.928164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.370 [2024-06-07 16:33:01.928174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.370 [2024-06-07 16:33:01.928181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.370 [2024-06-07 16:33:01.928190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.370 [2024-06-07 16:33:01.928197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.370 [2024-06-07 16:33:01.928206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.370 [2024-06-07 16:33:01.928213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.370 [2024-06-07 16:33:01.928223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.370 [2024-06-07 16:33:01.928230] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.370 [2024-06-07 16:33:01.928241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.370 [2024-06-07 16:33:01.928248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.370 [2024-06-07 16:33:01.928257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.370 [2024-06-07 16:33:01.928264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.370 [2024-06-07 16:33:01.928273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.370 [2024-06-07 16:33:01.928280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.370 [2024-06-07 16:33:01.928289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.370 [2024-06-07 16:33:01.928296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.370 [2024-06-07 16:33:01.928305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.370 [2024-06-07 16:33:01.928312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.370 [2024-06-07 16:33:01.928321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 
nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.370 [2024-06-07 16:33:01.928328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.370 [2024-06-07 16:33:01.928337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.370 [2024-06-07 16:33:01.928344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.370 [2024-06-07 16:33:01.928353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.370 [2024-06-07 16:33:01.928360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.370 [2024-06-07 16:33:01.928369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.370 [2024-06-07 16:33:01.928376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.370 [2024-06-07 16:33:01.928385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.370 [2024-06-07 16:33:01.928392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.370 [2024-06-07 16:33:01.928406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.370 [2024-06-07 16:33:01.928413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:35.370 [2024-06-07 16:33:01.928422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.370 [2024-06-07 16:33:01.928429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.370 [2024-06-07 16:33:01.928439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.370 [2024-06-07 16:33:01.928448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.370 [2024-06-07 16:33:01.928457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.370 [2024-06-07 16:33:01.928464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.370 [2024-06-07 16:33:01.928473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.370 [2024-06-07 16:33:01.928480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.370 [2024-06-07 16:33:01.928490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.370 [2024-06-07 16:33:01.928497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.370 [2024-06-07 16:33:01.928506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.370 [2024-06-07 16:33:01.928513] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.370 [2024-06-07 16:33:01.928522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.370 [2024-06-07 16:33:01.928529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.370 [2024-06-07 16:33:01.928538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.370 [2024-06-07 16:33:01.928546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.370 [2024-06-07 16:33:01.928554] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d29890 is same with the state(5) to be set 00:24:35.370 [2024-06-07 16:33:01.929821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.370 [2024-06-07 16:33:01.929835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.370 [2024-06-07 16:33:01.929847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.371 [2024-06-07 16:33:01.929855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.371 [2024-06-07 16:33:01.929865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.371 [2024-06-07 16:33:01.929871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.371 [2024-06-07 16:33:01.929881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.371 [2024-06-07 16:33:01.929888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.371 [2024-06-07 16:33:01.929898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.371 [2024-06-07 16:33:01.929905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.371 [2024-06-07 16:33:01.929915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.371 [2024-06-07 16:33:01.929926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.371 [2024-06-07 16:33:01.929935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.371 [2024-06-07 16:33:01.929942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.371 [2024-06-07 16:33:01.929951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.371 [2024-06-07 16:33:01.929958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.371 [2024-06-07 16:33:01.929968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:24:35.371 [2024-06-07 16:33:01.929975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.371 [2024-06-07 16:33:01.929984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.371 [2024-06-07 16:33:01.929991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.371 [2024-06-07 16:33:01.930000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.371 [2024-06-07 16:33:01.930007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.371 [2024-06-07 16:33:01.930016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.371 [2024-06-07 16:33:01.930024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.371 [2024-06-07 16:33:01.930033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.371 [2024-06-07 16:33:01.930040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.371 [2024-06-07 16:33:01.930049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.371 [2024-06-07 16:33:01.930056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.371 [2024-06-07 16:33:01.930066] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.371 [2024-06-07 16:33:01.930073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.371 [2024-06-07 16:33:01.930082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.371 [2024-06-07 16:33:01.930089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.371 [2024-06-07 16:33:01.930099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.371 [2024-06-07 16:33:01.930106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.371 [2024-06-07 16:33:01.930115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.371 [2024-06-07 16:33:01.930122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.371 [2024-06-07 16:33:01.930132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.371 [2024-06-07 16:33:01.930140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.371 [2024-06-07 16:33:01.930149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.371 [2024-06-07 16:33:01.930156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.371 [2024-06-07 16:33:01.930165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.371 [2024-06-07 16:33:01.930172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.371 [2024-06-07 16:33:01.930181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.371 [2024-06-07 16:33:01.930188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.371 [2024-06-07 16:33:01.930197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.371 [2024-06-07 16:33:01.930205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.371 [2024-06-07 16:33:01.930214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.371 [2024-06-07 16:33:01.930221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.371 [2024-06-07 16:33:01.930230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.371 [2024-06-07 16:33:01.930237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.371 [2024-06-07 16:33:01.930247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:24:35.371 [2024-06-07 16:33:01.930254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.371 [2024-06-07 16:33:01.930263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.371 [2024-06-07 16:33:01.930271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.371 [2024-06-07 16:33:01.930279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.371 [2024-06-07 16:33:01.930286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.371 [2024-06-07 16:33:01.930296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.371 [2024-06-07 16:33:01.930303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.371 [2024-06-07 16:33:01.930312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.371 [2024-06-07 16:33:01.930319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.371 [2024-06-07 16:33:01.930329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.371 [2024-06-07 16:33:01.930338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.371 [2024-06-07 16:33:01.930347] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.371 [2024-06-07 16:33:01.930354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.371 [2024-06-07 16:33:01.930364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.371 [2024-06-07 16:33:01.930371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.371 [2024-06-07 16:33:01.930380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.371 [2024-06-07 16:33:01.930387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.371 [2024-06-07 16:33:01.930397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.371 [2024-06-07 16:33:01.930411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.371 [2024-06-07 16:33:01.930420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.371 [2024-06-07 16:33:01.930427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.371 [2024-06-07 16:33:01.930436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.371 [2024-06-07 16:33:01.930443] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.371 [2024-06-07 16:33:01.930452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.371 [2024-06-07 16:33:01.930460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.371 [2024-06-07 16:33:01.930468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.371 [2024-06-07 16:33:01.930476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.371 [2024-06-07 16:33:01.930485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.371 [2024-06-07 16:33:01.930492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.371 [2024-06-07 16:33:01.930502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.371 [2024-06-07 16:33:01.930509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.372 [2024-06-07 16:33:01.930518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.372 [2024-06-07 16:33:01.930525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.372 [2024-06-07 16:33:01.930535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.372 [2024-06-07 16:33:01.930542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.372 [2024-06-07 16:33:01.930553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.372 [2024-06-07 16:33:01.930560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.372 [2024-06-07 16:33:01.930569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.372 [2024-06-07 16:33:01.930577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.372 [2024-06-07 16:33:01.930586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.372 [2024-06-07 16:33:01.930593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.372 [2024-06-07 16:33:01.930602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.372 [2024-06-07 16:33:01.930609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.372 [2024-06-07 16:33:01.930619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.372 [2024-06-07 16:33:01.930626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.372 [2024-06-07 
16:33:01.930635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.372 [2024-06-07 16:33:01.930642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.372 [2024-06-07 16:33:01.930652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.372 [2024-06-07 16:33:01.930659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.372 [2024-06-07 16:33:01.930668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.372 [2024-06-07 16:33:01.930675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.372 [2024-06-07 16:33:01.930684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.372 [2024-06-07 16:33:01.930691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.372 [2024-06-07 16:33:01.930700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.372 [2024-06-07 16:33:01.930707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.372 [2024-06-07 16:33:01.930717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.372 [2024-06-07 16:33:01.930724] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.372 [2024-06-07 16:33:01.930733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.372 [2024-06-07 16:33:01.930740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.372 [2024-06-07 16:33:01.930749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.372 [2024-06-07 16:33:01.930758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.372 [2024-06-07 16:33:01.930768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.372 [2024-06-07 16:33:01.930775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.372 [2024-06-07 16:33:01.930784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.372 [2024-06-07 16:33:01.930791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.372 [2024-06-07 16:33:01.930801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.372 [2024-06-07 16:33:01.930808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.372 [2024-06-07 16:33:01.930817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 
nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.372 [2024-06-07 16:33:01.930824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.372 [2024-06-07 16:33:01.930833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.372 [2024-06-07 16:33:01.930841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.372 [2024-06-07 16:33:01.930850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.372 [2024-06-07 16:33:01.930857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.372 [2024-06-07 16:33:01.930867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.372 [2024-06-07 16:33:01.930873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.372 [2024-06-07 16:33:01.930883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.372 [2024-06-07 16:33:01.930890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.372 [2024-06-07 16:33:01.930899] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c20a30 is same with the state(5) to be set 00:24:35.372 [2024-06-07 16:33:01.932169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:35.372 [2024-06-07 16:33:01.932183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.372 [2024-06-07 16:33:01.932195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.372 [2024-06-07 16:33:01.932204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.372 [2024-06-07 16:33:01.932215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.372 [2024-06-07 16:33:01.932224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.372 [2024-06-07 16:33:01.932235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.372 [2024-06-07 16:33:01.932247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.372 [2024-06-07 16:33:01.932257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.372 [2024-06-07 16:33:01.932264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.372 [2024-06-07 16:33:01.932273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.372 [2024-06-07 16:33:01.932281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.372 [2024-06-07 16:33:01.932290] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.372 [2024-06-07 16:33:01.932297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.372 [2024-06-07 16:33:01.932307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.372 [2024-06-07 16:33:01.932313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.372 [2024-06-07 16:33:01.932323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.372 [2024-06-07 16:33:01.932330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.372 [2024-06-07 16:33:01.932339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.372 [2024-06-07 16:33:01.932346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.372 [2024-06-07 16:33:01.932355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.372 [2024-06-07 16:33:01.932363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.372 [2024-06-07 16:33:01.932372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.372 [2024-06-07 16:33:01.932379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.372 [2024-06-07 16:33:01.932388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.372 [2024-06-07 16:33:01.932395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.372 [2024-06-07 16:33:01.932409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.372 [2024-06-07 16:33:01.932416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.372 [2024-06-07 16:33:01.932425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.372 [2024-06-07 16:33:01.932432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.372 [2024-06-07 16:33:01.932442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.372 [2024-06-07 16:33:01.932449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.372 [2024-06-07 16:33:01.932458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.373 [2024-06-07 16:33:01.932467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.373 [2024-06-07 16:33:01.932476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:24:35.373 [2024-06-07 16:33:01.932483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.373 [2024-06-07 16:33:01.932492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.373 [2024-06-07 16:33:01.932499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.373 [2024-06-07 16:33:01.932509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.373 [2024-06-07 16:33:01.932516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.373 [2024-06-07 16:33:01.932525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.373 [2024-06-07 16:33:01.932532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.373 [2024-06-07 16:33:01.932541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.373 [2024-06-07 16:33:01.932548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.373 [2024-06-07 16:33:01.932558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.373 [2024-06-07 16:33:01.932565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.373 [2024-06-07 16:33:01.932574] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.373 [2024-06-07 16:33:01.932581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.373 [2024-06-07 16:33:01.932590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.373 [2024-06-07 16:33:01.932597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.373 [2024-06-07 16:33:01.932606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.373 [2024-06-07 16:33:01.932613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.373 [2024-06-07 16:33:01.932622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.373 [2024-06-07 16:33:01.932630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.373 [2024-06-07 16:33:01.932639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.373 [2024-06-07 16:33:01.932646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.373 [2024-06-07 16:33:01.932655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.373 [2024-06-07 16:33:01.932662] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.373 [2024-06-07 16:33:01.932673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.373 [2024-06-07 16:33:01.932680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.373 [2024-06-07 16:33:01.932690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.373 [2024-06-07 16:33:01.932697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.373 [2024-06-07 16:33:01.932706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.373 [2024-06-07 16:33:01.932713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.373 [2024-06-07 16:33:01.932722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.373 [2024-06-07 16:33:01.932730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.373 [2024-06-07 16:33:01.932739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.373 [2024-06-07 16:33:01.932746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.373 [2024-06-07 16:33:01.932755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.373 [2024-06-07 16:33:01.932762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.373 [2024-06-07 16:33:01.932771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.373 [2024-06-07 16:33:01.932778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.373 [2024-06-07 16:33:01.932788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.373 [2024-06-07 16:33:01.932795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.373 [2024-06-07 16:33:01.932804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.373 [2024-06-07 16:33:01.932812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.373 [2024-06-07 16:33:01.932821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.373 [2024-06-07 16:33:01.932828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.373 [2024-06-07 16:33:01.932837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.373 [2024-06-07 16:33:01.932844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.373 [2024-06-07 
16:33:01.932854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.373 [2024-06-07 16:33:01.932860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.373 [2024-06-07 16:33:01.932870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.373 [2024-06-07 16:33:01.932879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.373 [2024-06-07 16:33:01.932888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.373 [2024-06-07 16:33:01.932895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.373 [2024-06-07 16:33:01.932904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.373 [2024-06-07 16:33:01.932911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.373 [2024-06-07 16:33:01.932920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.373 [2024-06-07 16:33:01.932927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.373 [2024-06-07 16:33:01.932937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.373 [2024-06-07 16:33:01.932943] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.373 [2024-06-07 16:33:01.932953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.373 [2024-06-07 16:33:01.932960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.373 [2024-06-07 16:33:01.932969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.374 [2024-06-07 16:33:01.932977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.374 [2024-06-07 16:33:01.932985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.374 [2024-06-07 16:33:01.932993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.374 [2024-06-07 16:33:01.933001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.374 [2024-06-07 16:33:01.933009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.374 [2024-06-07 16:33:01.933018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.374 [2024-06-07 16:33:01.933025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.374 [2024-06-07 16:33:01.933034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 
nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.374 [2024-06-07 16:33:01.933041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.374 [2024-06-07 16:33:01.933050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.374 [2024-06-07 16:33:01.933058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.374 [2024-06-07 16:33:01.933067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.374 [2024-06-07 16:33:01.933074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.374 [2024-06-07 16:33:01.933084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.374 [2024-06-07 16:33:01.933092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.374 [2024-06-07 16:33:01.933101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.374 [2024-06-07 16:33:01.933108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.374 [2024-06-07 16:33:01.933117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.374 [2024-06-07 16:33:01.933124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:35.374 [2024-06-07 16:33:01.933133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.374 [2024-06-07 16:33:01.933140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.374 [2024-06-07 16:33:01.933149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.374 [2024-06-07 16:33:01.933156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.374 [2024-06-07 16:33:01.933165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.374 [2024-06-07 16:33:01.933172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.374 [2024-06-07 16:33:01.933181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.374 [2024-06-07 16:33:01.933189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.374 [2024-06-07 16:33:01.933198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.374 [2024-06-07 16:33:01.933206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.374 [2024-06-07 16:33:01.933215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.374 [2024-06-07 16:33:01.933221] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.374 [2024-06-07 16:33:01.933231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.374 [2024-06-07 16:33:01.933238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.374 [2024-06-07 16:33:01.933246] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c23200 is same with the state(5) to be set 00:24:35.374 [2024-06-07 16:33:01.934774] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:35.374 [2024-06-07 16:33:01.934799] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:24:35.374 [2024-06-07 16:33:01.934809] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:24:35.374 [2024-06-07 16:33:01.934818] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:24:35.374 [2024-06-07 16:33:01.934887] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:35.374 [2024-06-07 16:33:01.934905] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:35.374 [2024-06-07 16:33:01.934916] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:24:35.374 [2024-06-07 16:33:01.935003] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:24:35.374 [2024-06-07 16:33:01.935014] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller 00:24:35.374 [2024-06-07 16:33:01.935023] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:24:35.374 [2024-06-07 16:33:01.935610] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:35.374 [2024-06-07 16:33:01.935652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af2650 with addr=10.0.0.2, port=4420 00:24:35.374 [2024-06-07 16:33:01.935663] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af2650 is same with the state(5) to be set 00:24:35.374 [2024-06-07 16:33:01.936052] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:35.374 [2024-06-07 16:33:01.936063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1d900 with addr=10.0.0.2, port=4420 00:24:35.374 [2024-06-07 16:33:01.936070] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1d900 is same with the state(5) to be set 00:24:35.374 [2024-06-07 16:33:01.936602] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:35.374 [2024-06-07 16:33:01.936640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b00c50 with addr=10.0.0.2, port=4420 00:24:35.374 [2024-06-07 16:33:01.936652] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b00c50 is same with the state(5) to be set 00:24:35.374 [2024-06-07 16:33:01.937066] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:35.374 [2024-06-07 16:33:01.937078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock 
connection error of tqpair=0x1b3beb0 with addr=10.0.0.2, port=4420 00:24:35.374 [2024-06-07 16:33:01.937085] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b3beb0 is same with the state(5) to be set 00:24:35.374 [2024-06-07 16:33:01.938962] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:24:35.374 [2024-06-07 16:33:01.938978] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:24:35.374 [2024-06-07 16:33:01.939379] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:35.374 [2024-06-07 16:33:01.939391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b3c310 with addr=10.0.0.2, port=4420 00:24:35.374 [2024-06-07 16:33:01.939398] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b3c310 is same with the state(5) to be set 00:24:35.374 [2024-06-07 16:33:01.939952] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:35.374 [2024-06-07 16:33:01.939989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c97100 with addr=10.0.0.2, port=4420 00:24:35.374 [2024-06-07 16:33:01.940001] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c97100 is same with the state(5) to be set 00:24:35.374 [2024-06-07 16:33:01.940398] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:35.374 [2024-06-07 16:33:01.940418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cb0020 with addr=10.0.0.2, port=4420 00:24:35.374 [2024-06-07 16:33:01.940426] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb0020 is same with the state(5) to be set 00:24:35.374 [2024-06-07 16:33:01.940440] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush 
tqpair=0x1af2650 (9): Bad file descriptor
00:24:35.374 [2024-06-07 16:33:01.940450] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1d900 (9): Bad file descriptor
00:24:35.374 [2024-06-07 16:33:01.940464] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b00c50 (9): Bad file descriptor
00:24:35.374 [2024-06-07 16:33:01.940473] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b3beb0 (9): Bad file descriptor
00:24:35.374 [2024-06-07 16:33:01.940577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:35.374 [2024-06-07 16:33:01.940589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:35.374 [2024-06-07 16:33:01.940604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:35.374 [2024-06-07 16:33:01.940612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:35.374 [2024-06-07 16:33:01.940621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:35.374 [2024-06-07 16:33:01.940628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:35.374 [2024-06-07 16:33:01.940638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:35.374 [2024-06-07 16:33:01.940645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:35.374 [2024-06-07 16:33:01.940654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:35.374 [2024-06-07 16:33:01.940661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:35.374 [2024-06-07 16:33:01.940671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:35.374 [2024-06-07 16:33:01.940678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:35.374 [2024-06-07 16:33:01.940687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:35.374 [2024-06-07 16:33:01.940694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:35.375 [2024-06-07 16:33:01.940703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:35.375 [2024-06-07 16:33:01.940710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:35.375 [2024-06-07 16:33:01.940719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:35.375 [2024-06-07 16:33:01.940726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:35.375 [2024-06-07 16:33:01.940736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:35.375 [2024-06-07 16:33:01.940743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:35.375 [2024-06-07 16:33:01.940753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:35.375 [2024-06-07 16:33:01.940760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:35.375 [2024-06-07 16:33:01.940769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:35.375 [2024-06-07 16:33:01.940779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:35.375 [2024-06-07 16:33:01.940789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:35.375 [2024-06-07 16:33:01.940796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:35.375 [2024-06-07 16:33:01.940805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:35.375 [2024-06-07 16:33:01.940812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:35.375 [2024-06-07 16:33:01.940821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:35.375 [2024-06-07 16:33:01.940828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:35.375 [2024-06-07 16:33:01.940838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:35.375 [2024-06-07 16:33:01.940845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:35.375 [2024-06-07 16:33:01.940854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:35.375 [2024-06-07 16:33:01.940861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:35.375 [2024-06-07 16:33:01.940870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:35.375 [2024-06-07 16:33:01.940878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:35.375 [2024-06-07 16:33:01.940887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:35.375 [2024-06-07 16:33:01.940894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:35.375 [2024-06-07 16:33:01.940903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:35.375 [2024-06-07 16:33:01.940910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:35.375 [2024-06-07 16:33:01.940919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:35.375 [2024-06-07 16:33:01.940927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:35.375 [2024-06-07 16:33:01.940936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:35.375 [2024-06-07 16:33:01.940943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:35.375 [2024-06-07 16:33:01.940952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:35.375 [2024-06-07 16:33:01.940959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:35.375 [2024-06-07 16:33:01.940968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:35.375 [2024-06-07 16:33:01.940975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:35.375 [2024-06-07 16:33:01.940987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:35.375 [2024-06-07 16:33:01.940994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:35.375 [2024-06-07 16:33:01.941004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:35.375 [2024-06-07 16:33:01.941011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:35.375 [2024-06-07 16:33:01.941020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:35.375 [2024-06-07 16:33:01.941027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:35.375 [2024-06-07 16:33:01.941036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:35.375 [2024-06-07 16:33:01.941043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:35.375 [2024-06-07 16:33:01.941053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:35.375 [2024-06-07 16:33:01.941060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:35.375 [2024-06-07 16:33:01.941069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:35.375 [2024-06-07 16:33:01.941076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:35.375 [2024-06-07 16:33:01.941085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:35.375 [2024-06-07 16:33:01.941093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:35.375 [2024-06-07 16:33:01.941102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:35.375 [2024-06-07 16:33:01.941109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:35.375 [2024-06-07 16:33:01.941118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:35.375 [2024-06-07 16:33:01.941125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:35.375 [2024-06-07 16:33:01.941134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:35.375 [2024-06-07 16:33:01.941142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:35.375 [2024-06-07 16:33:01.941151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:35.375 [2024-06-07 16:33:01.941158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:35.375 [2024-06-07 16:33:01.941167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:35.375 [2024-06-07 16:33:01.941174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:35.375 [2024-06-07 16:33:01.941184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:35.375 [2024-06-07 16:33:01.941193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:35.375 [2024-06-07 16:33:01.941202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:35.375 [2024-06-07 16:33:01.941209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:35.375 [2024-06-07 16:33:01.941218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:35.375 [2024-06-07 16:33:01.941225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:35.375 [2024-06-07 16:33:01.941234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:35.375 [2024-06-07 16:33:01.941242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:35.375 [2024-06-07 16:33:01.941251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:35.375 [2024-06-07 16:33:01.941258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:35.375 [2024-06-07 16:33:01.941267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:35.375 [2024-06-07 16:33:01.941274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:35.375 [2024-06-07 16:33:01.941284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:35.375 [2024-06-07 16:33:01.941290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:35.375 [2024-06-07 16:33:01.941300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:35.375 [2024-06-07 16:33:01.941307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:35.375 [2024-06-07 16:33:01.941316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:35.375 [2024-06-07 16:33:01.941324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:35.375 [2024-06-07 16:33:01.941333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:35.375 [2024-06-07 16:33:01.941340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:35.375 [2024-06-07 16:33:01.941350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:35.376 [2024-06-07 16:33:01.941357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:35.376 [2024-06-07 16:33:01.941367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:35.376 [2024-06-07 16:33:01.941373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:35.376 [2024-06-07 16:33:01.941383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:35.376 [2024-06-07 16:33:01.941390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:35.376 [2024-06-07 16:33:01.941407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:35.376 [2024-06-07 16:33:01.941415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:35.376 [2024-06-07 16:33:01.941424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:35.376 [2024-06-07 16:33:01.941431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:35.376 [2024-06-07 16:33:01.941440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:35.376 [2024-06-07 16:33:01.941448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:35.376 [2024-06-07 16:33:01.941457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:35.376 [2024-06-07 16:33:01.941464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:35.376 [2024-06-07 16:33:01.941473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:35.376 [2024-06-07 16:33:01.941480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:35.376 [2024-06-07 16:33:01.941490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:35.376 [2024-06-07 16:33:01.941498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:35.376 [2024-06-07 16:33:01.941507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:35.376 [2024-06-07 16:33:01.941514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:35.376 [2024-06-07 16:33:01.941524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:35.376 [2024-06-07 16:33:01.941531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:35.376 [2024-06-07 16:33:01.941540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:35.376 [2024-06-07 16:33:01.941547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:35.376 [2024-06-07 16:33:01.941556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:35.376 [2024-06-07 16:33:01.941564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:35.376 [2024-06-07 16:33:01.941573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:35.376 [2024-06-07 16:33:01.941580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:35.376 [2024-06-07 16:33:01.941589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:35.376 [2024-06-07 16:33:01.941596]
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:35.376 [2024-06-07 16:33:01.941606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:35.376 [2024-06-07 16:33:01.941615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:35.376 [2024-06-07 16:33:01.941624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:35.376 [2024-06-07 16:33:01.941631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:35.376 [2024-06-07 16:33:01.941640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:35.376 [2024-06-07 16:33:01.941647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:35.376 [2024-06-07 16:33:01.941656] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c21d30 is same with the state(5) to be set
00:24:35.376 task offset: 26496 on job bdev=Nvme7n1 fails
00:24:35.376
00:24:35.376 Latency(us)
00:24:35.376 Device Information          : runtime(s)    IOPS   MiB/s  Fail/s   TO/s    Average        min        max
00:24:35.376 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:35.376 Job: Nvme1n1 ended in about 0.95 seconds with error
00:24:35.376 Verification LBA range: start 0x0 length 0x400
00:24:35.376 Nvme1n1  : 0.95  201.45  12.59  67.15  0.00  235565.44  15291.73  249910.61
00:24:35.376 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:35.376 Job: Nvme2n1 ended in about 0.94 seconds with error
00:24:35.376 Verification LBA range: start 0x0 length 0x400
00:24:35.376 Nvme2n1  : 0.94  203.79  12.74  67.93  0.00  227978.45  16384.00  241172.48
00:24:35.376 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:35.376 Job: Nvme3n1 ended in about 0.96 seconds with error
00:24:35.376 Verification LBA range: start 0x0 length 0x400
00:24:35.376 Nvme3n1  : 0.96  201.99  12.62  66.98  0.00  225554.34  19660.80  249910.61
00:24:35.376 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:35.376 Job: Nvme4n1 ended in about 0.96 seconds with error
00:24:35.376 Verification LBA range: start 0x0 length 0x400
00:24:35.376 Nvme4n1  : 0.96  200.45  12.53  66.82  0.00  222213.76  20971.52  270882.13
00:24:35.376 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:35.376 Job: Nvme5n1 ended in about 0.96 seconds with error
00:24:35.376 Verification LBA range: start 0x0 length 0x400
00:24:35.376 Nvme5n1  : 0.96  141.64   8.85  66.65  0.00  279095.98  22282.24  283115.52
00:24:35.376 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:35.376 Job: Nvme6n1 ended in about 0.96 seconds with error
00:24:35.376 Verification LBA range: start 0x0 length 0x400
00:24:35.376 Nvme6n1  : 0.96  132.99   8.31  66.49  0.00  285029.26  22063.79  251658.24
00:24:35.376 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:35.376 Job: Nvme7n1 ended in about 0.94 seconds with error
00:24:35.376 Verification LBA range: start 0x0 length 0x400
00:24:35.376 Nvme7n1  : 0.94  204.14  12.76  68.05  0.00  203342.93  13981.01  253405.87
00:24:35.376 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:35.376 Job: Nvme8n1 ended in about 0.96 seconds with error
00:24:35.376 Verification LBA range: start 0x0 length 0x400
00:24:35.376 Nvme8n1  : 0.96  132.66   8.29  66.33  0.00  272905.39  22063.79  258648.75
00:24:35.376 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:35.376 Job: Nvme9n1 ended in about 0.98 seconds with error
00:24:35.376 Verification LBA range: start 0x0 length 0x400
00:24:35.376 Nvme9n1  : 0.98  131.20   8.20  65.60  0.00  270100.20  22719.15  276125.01
00:24:35.376 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:35.376 Job: Nvme10n1 ended in about 0.97 seconds with error
00:24:35.376 Verification LBA range: start 0x0 length 0x400
00:24:35.376 Nvme10n1 : 0.97  132.34   8.27  66.17  0.00  260974.65  23702.19  255153.49
00:24:35.376 ===================================================================================================================
00:24:35.376 Total    : 1682.65  105.17  668.17  0.00  244769.34  13981.01  283115.52
00:24:35.376 [2024-06-07 16:33:01.971138] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:24:35.376 [2024-06-07 16:33:01.971191] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller
00:24:35.376 [2024-06-07 16:33:01.971655] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:35.376 [2024-06-07 16:33:01.971674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cbca60 with addr=10.0.0.2, port=4420
00:24:35.376 [2024-06-07 16:33:01.971684] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cbca60 is same with the state(5) to be set
00:24:35.376 [2024-06-07 16:33:01.971904] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:35.376 [2024-06-07 16:33:01.971914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b41280 with addr=10.0.0.2, port=4420
00:24:35.376 [2024-06-07 16:33:01.971921] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b41280 is same with the state(5) to be set
00:24:35.376 [2024-06-07 16:33:01.971933] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b3c310 (9): Bad file descriptor
00:24:35.376 [2024-06-07 16:33:01.971944] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c97100 (9): Bad file descriptor
00:24:35.376 [2024-06-07 16:33:01.971954] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cb0020 (9): Bad file descriptor
00:24:35.376 [2024-06-07 16:33:01.971963] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:35.376 [2024-06-07 16:33:01.971969] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:35.376 [2024-06-07 16:33:01.971978] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:35.376 [2024-06-07 16:33:01.971992] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state
00:24:35.376 [2024-06-07 16:33:01.971999] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed
00:24:35.376 [2024-06-07 16:33:01.972005] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state.
00:24:35.376 [2024-06-07 16:33:01.972016] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state
00:24:35.376 [2024-06-07 16:33:01.972022] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed
00:24:35.376 [2024-06-07 16:33:01.972029] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state.
00:24:35.376 [2024-06-07 16:33:01.972040] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state
00:24:35.377 [2024-06-07 16:33:01.972047] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed
00:24:35.377 [2024-06-07 16:33:01.972053] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state.
00:24:35.377 [2024-06-07 16:33:01.972175] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:35.377 [2024-06-07 16:33:01.972186] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:35.377 [2024-06-07 16:33:01.972193] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:35.377 [2024-06-07 16:33:01.972198] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:35.377 [2024-06-07 16:33:01.972472] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:35.377 [2024-06-07 16:33:01.972484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89f70 with addr=10.0.0.2, port=4420
00:24:35.377 [2024-06-07 16:33:01.972498] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c89f70 is same with the state(5) to be set
00:24:35.377 [2024-06-07 16:33:01.972507] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cbca60 (9): Bad file descriptor
00:24:35.377 [2024-06-07 16:33:01.972516] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b41280 (9): Bad file descriptor
00:24:35.377 [2024-06-07 16:33:01.972524] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state
00:24:35.377 [2024-06-07 16:33:01.972531] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed
00:24:35.377 [2024-06-07 16:33:01.972537] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state.
00:24:35.377 [2024-06-07 16:33:01.972547] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state
00:24:35.377 [2024-06-07 16:33:01.972553] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed
00:24:35.377 [2024-06-07 16:33:01.972560] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state.
00:24:35.377 [2024-06-07 16:33:01.972570] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state
00:24:35.377 [2024-06-07 16:33:01.972576] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed
00:24:35.377 [2024-06-07 16:33:01.972582] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state.
00:24:35.377 [2024-06-07 16:33:01.972624] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:24:35.377 [2024-06-07 16:33:01.972634] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:24:35.377 [2024-06-07 16:33:01.972644] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:24:35.377 [2024-06-07 16:33:01.972662] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:24:35.377 [2024-06-07 16:33:01.972672] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:24:35.377 [2024-06-07 16:33:01.972976] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:35.377 [2024-06-07 16:33:01.972986] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:35.377 [2024-06-07 16:33:01.972991] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:35.377 [2024-06-07 16:33:01.973012] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c89f70 (9): Bad file descriptor
00:24:35.377 [2024-06-07 16:33:01.973020] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state
00:24:35.377 [2024-06-07 16:33:01.973026] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed
00:24:35.377 [2024-06-07 16:33:01.973033] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state.
00:24:35.377 [2024-06-07 16:33:01.973044] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state
00:24:35.377 [2024-06-07 16:33:01.973050] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed
00:24:35.377 [2024-06-07 16:33:01.973056] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state.
00:24:35.377 [2024-06-07 16:33:01.973464] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:24:35.377 [2024-06-07 16:33:01.973478] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:24:35.377 [2024-06-07 16:33:01.973486] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:24:35.377 [2024-06-07 16:33:01.973498] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:35.377 [2024-06-07 16:33:01.973506] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:35.377 [2024-06-07 16:33:01.973512] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:35.377 [2024-06-07 16:33:01.973540] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:24:35.377 [2024-06-07 16:33:01.973547] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:24:35.377 [2024-06-07 16:33:01.973554] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:24:35.377 [2024-06-07 16:33:01.973589] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:35.377 [2024-06-07 16:33:01.974027] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:35.377 [2024-06-07 16:33:01.974039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b3beb0 with addr=10.0.0.2, port=4420 00:24:35.377 [2024-06-07 16:33:01.974047] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b3beb0 is same with the state(5) to be set 00:24:35.377 [2024-06-07 16:33:01.974453] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:35.377 [2024-06-07 16:33:01.974463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b00c50 with addr=10.0.0.2, port=4420 00:24:35.377 [2024-06-07 16:33:01.974470] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b00c50 is same with the state(5) to be set 00:24:35.377 [2024-06-07 16:33:01.974867] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:35.377 [2024-06-07 16:33:01.974877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1d900 with addr=10.0.0.2, port=4420 00:24:35.377 [2024-06-07 16:33:01.974883] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1d900 is same with the state(5) to be set 00:24:35.377 [2024-06-07 16:33:01.975338] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:35.377 [2024-06-07 16:33:01.975348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af2650 with addr=10.0.0.2, port=4420 00:24:35.377 [2024-06-07 16:33:01.975354] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af2650 is same with the state(5) to be set 00:24:35.377 [2024-06-07 16:33:01.975383] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b3beb0 (9): Bad file descriptor 00:24:35.377 [2024-06-07 
16:33:01.975393] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b00c50 (9): Bad file descriptor 00:24:35.377 [2024-06-07 16:33:01.975408] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1d900 (9): Bad file descriptor 00:24:35.377 [2024-06-07 16:33:01.975417] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1af2650 (9): Bad file descriptor 00:24:35.377 [2024-06-07 16:33:01.975442] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:24:35.377 [2024-06-07 16:33:01.975449] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:24:35.377 [2024-06-07 16:33:01.975456] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:24:35.377 [2024-06-07 16:33:01.975465] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:24:35.377 [2024-06-07 16:33:01.975471] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:24:35.377 [2024-06-07 16:33:01.975478] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:24:35.377 [2024-06-07 16:33:01.975487] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:24:35.377 [2024-06-07 16:33:01.975496] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:24:35.377 [2024-06-07 16:33:01.975502] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 
00:24:35.377 [2024-06-07 16:33:01.975512] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:35.377 [2024-06-07 16:33:01.975518] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:35.377 [2024-06-07 16:33:01.975524] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:35.377 [2024-06-07 16:33:01.975553] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:35.377 [2024-06-07 16:33:01.975560] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:35.377 [2024-06-07 16:33:01.975565] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:35.377 [2024-06-07 16:33:01.975571] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:35.377 16:33:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # nvmfpid= 00:24:35.377 16:33:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@139 -- # sleep 1 00:24:36.318 16:33:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # kill -9 3188353 00:24:36.318 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 142: kill: (3188353) - No such process 00:24:36.318 16:33:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # true 00:24:36.318 16:33:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@144 -- # stoptarget 00:24:36.318 16:33:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:24:36.318 16:33:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:24:36.318 16:33:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- 
target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:24:36.318 16:33:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@45 -- # nvmftestfini 00:24:36.318 16:33:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:36.318 16:33:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # sync 00:24:36.318 16:33:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:36.318 16:33:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@120 -- # set +e 00:24:36.318 16:33:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:36.318 16:33:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:36.585 rmmod nvme_tcp 00:24:36.585 rmmod nvme_fabrics 00:24:36.585 rmmod nvme_keyring 00:24:36.585 16:33:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:36.585 16:33:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set -e 00:24:36.585 16:33:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # return 0 00:24:36.585 16:33:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:24:36.585 16:33:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:36.585 16:33:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:36.585 16:33:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:36.585 16:33:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:36.585 16:33:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:36.585 16:33:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 
-- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:36.585 16:33:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:36.585 16:33:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:38.505 16:33:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:38.505 00:24:38.505 real 0m7.685s 00:24:38.505 user 0m18.625s 00:24:38.505 sys 0m1.224s 00:24:38.505 16:33:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1125 -- # xtrace_disable 00:24:38.505 16:33:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:38.505 ************************************ 00:24:38.505 END TEST nvmf_shutdown_tc3 00:24:38.505 ************************************ 00:24:38.505 16:33:05 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@151 -- # trap - SIGINT SIGTERM EXIT 00:24:38.505 00:24:38.505 real 0m32.328s 00:24:38.505 user 1m15.703s 00:24:38.505 sys 0m9.284s 00:24:38.505 16:33:05 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1125 -- # xtrace_disable 00:24:38.505 16:33:05 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:24:38.505 ************************************ 00:24:38.505 END TEST nvmf_shutdown 00:24:38.505 ************************************ 00:24:38.767 16:33:05 nvmf_tcp -- nvmf/nvmf.sh@87 -- # timing_exit target 00:24:38.767 16:33:05 nvmf_tcp -- common/autotest_common.sh@729 -- # xtrace_disable 00:24:38.767 16:33:05 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:38.767 16:33:05 nvmf_tcp -- nvmf/nvmf.sh@89 -- # timing_enter host 00:24:38.767 16:33:05 nvmf_tcp -- common/autotest_common.sh@723 -- # xtrace_disable 00:24:38.767 16:33:05 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:38.767 16:33:05 nvmf_tcp -- nvmf/nvmf.sh@91 -- # [[ 0 -eq 0 ]] 00:24:38.767 16:33:05 nvmf_tcp -- nvmf/nvmf.sh@92 -- # run_test 
nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:24:38.767 16:33:05 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:24:38.767 16:33:05 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:24:38.767 16:33:05 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:38.767 ************************************ 00:24:38.767 START TEST nvmf_multicontroller 00:24:38.767 ************************************ 00:24:38.767 16:33:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:24:38.767 * Looking for test storage... 00:24:38.767 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:38.767 16:33:05 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:38.767 16:33:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:24:38.767 16:33:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:38.767 16:33:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:38.767 16:33:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:38.767 16:33:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:38.767 16:33:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:38.767 16:33:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:38.767 16:33:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:38.767 16:33:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:38.767 16:33:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@16 -- # 
NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:38.767 16:33:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:38.767 16:33:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:38.767 16:33:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:38.767 16:33:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:38.767 16:33:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:38.767 16:33:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:38.767 16:33:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:38.767 16:33:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:38.767 16:33:05 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:38.767 16:33:05 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:38.767 16:33:05 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:38.767 16:33:05 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:38.767 16:33:05 
nvmf_tcp.nvmf_multicontroller -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:38.767 16:33:05 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:38.767 16:33:05 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:24:38.767 16:33:05 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:38.767 16:33:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@47 -- # : 0 
00:24:38.767 16:33:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:38.768 16:33:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:38.768 16:33:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:38.768 16:33:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:38.768 16:33:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:38.768 16:33:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:38.768 16:33:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:38.768 16:33:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:38.768 16:33:05 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:38.768 16:33:05 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:38.768 16:33:05 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:24:38.768 16:33:05 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:24:38.768 16:33:05 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:38.768 16:33:05 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:24:38.768 16:33:05 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:24:38.768 16:33:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:38.768 16:33:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:38.768 16:33:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:38.768 16:33:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:38.768 
16:33:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:38.768 16:33:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:38.768 16:33:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:38.768 16:33:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:38.768 16:33:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:38.768 16:33:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:38.768 16:33:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@285 -- # xtrace_disable 00:24:38.768 16:33:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:46.943 16:33:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:46.943 16:33:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # pci_devs=() 00:24:46.943 16:33:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:46.943 16:33:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:46.943 16:33:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:46.943 16:33:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:46.943 16:33:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:46.943 16:33:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # net_devs=() 00:24:46.943 16:33:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:46.943 16:33:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # e810=() 00:24:46.943 16:33:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # local -ga e810 00:24:46.943 16:33:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # x722=() 
00:24:46.943 16:33:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # local -ga x722 00:24:46.943 16:33:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # mlx=() 00:24:46.943 16:33:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # local -ga mlx 00:24:46.943 16:33:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:46.943 16:33:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:46.943 16:33:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:46.943 16:33:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:46.943 16:33:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:46.943 16:33:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:46.943 16:33:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:46.943 16:33:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:46.943 16:33:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:46.943 16:33:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:46.943 16:33:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:46.943 16:33:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:46.943 16:33:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:46.943 16:33:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:46.943 16:33:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@329 -- # 
[[ e810 == e810 ]] 00:24:46.943 16:33:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:46.944 16:33:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:46.944 16:33:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:46.944 16:33:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:24:46.944 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:24:46.944 16:33:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:46.944 16:33:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:46.944 16:33:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:46.944 16:33:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:46.944 16:33:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:46.944 16:33:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:46.944 16:33:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:24:46.944 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:24:46.944 16:33:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:46.944 16:33:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:46.944 16:33:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:46.944 16:33:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:46.944 16:33:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:46.944 16:33:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:46.944 16:33:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 
00:24:46.944 16:33:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:46.944 16:33:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:46.944 16:33:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:46.944 16:33:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:46.944 16:33:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:46.944 16:33:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:46.944 16:33:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:46.944 16:33:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:46.944 16:33:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:24:46.944 Found net devices under 0000:4b:00.0: cvl_0_0 00:24:46.944 16:33:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:46.944 16:33:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:46.944 16:33:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:46.944 16:33:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:46.944 16:33:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:46.944 16:33:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:46.944 16:33:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:46.944 16:33:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:46.944 16:33:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net 
devices under 0000:4b:00.1: cvl_0_1' 00:24:46.944 Found net devices under 0000:4b:00.1: cvl_0_1 00:24:46.944 16:33:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:46.944 16:33:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:46.944 16:33:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # is_hw=yes 00:24:46.944 16:33:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:46.944 16:33:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:46.944 16:33:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:46.944 16:33:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:46.944 16:33:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:46.944 16:33:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:46.944 16:33:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:46.944 16:33:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:46.944 16:33:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:46.944 16:33:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:46.944 16:33:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:46.944 16:33:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:46.944 16:33:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:46.944 16:33:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:46.944 16:33:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 
00:24:46.944 16:33:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:46.944 16:33:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:46.944 16:33:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:46.944 16:33:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:46.944 16:33:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:46.944 16:33:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:46.944 16:33:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:46.944 16:33:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:46.944 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:46.944 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.702 ms 00:24:46.944 00:24:46.944 --- 10.0.0.2 ping statistics --- 00:24:46.944 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:46.944 rtt min/avg/max/mdev = 0.702/0.702/0.702/0.000 ms 00:24:46.944 16:33:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:46.944 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:46.944 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.360 ms 00:24:46.944 00:24:46.944 --- 10.0.0.1 ping statistics --- 00:24:46.944 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:46.944 rtt min/avg/max/mdev = 0.360/0.360/0.360/0.000 ms 00:24:46.944 16:33:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:46.944 16:33:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@422 -- # return 0 00:24:46.944 16:33:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:46.944 16:33:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:46.944 16:33:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:46.944 16:33:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:46.944 16:33:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:46.944 16:33:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:46.944 16:33:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:46.944 16:33:12 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:24:46.944 16:33:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:46.944 16:33:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@723 -- # xtrace_disable 00:24:46.944 16:33:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:46.944 16:33:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@481 -- # nvmfpid=3193737 00:24:46.944 16:33:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@482 -- # waitforlisten 3193737 00:24:46.944 16:33:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 
0xFFFF -m 0xE 00:24:46.944 16:33:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@830 -- # '[' -z 3193737 ']' 00:24:46.944 16:33:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:46.944 16:33:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@835 -- # local max_retries=100 00:24:46.944 16:33:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:46.944 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:46.944 16:33:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@839 -- # xtrace_disable 00:24:46.944 16:33:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:46.944 [2024-06-07 16:33:12.803464] Starting SPDK v24.09-pre git sha1 5a57befde / DPDK 24.03.0 initialization... 00:24:46.944 [2024-06-07 16:33:12.803530] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:46.944 EAL: No free 2048 kB hugepages reported on node 1 00:24:46.944 [2024-06-07 16:33:12.891068] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:24:46.944 [2024-06-07 16:33:12.986201] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:46.944 [2024-06-07 16:33:12.986257] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:46.944 [2024-06-07 16:33:12.986265] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:46.944 [2024-06-07 16:33:12.986272] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:24:46.944 [2024-06-07 16:33:12.986278] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:46.944 [2024-06-07 16:33:12.986439] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 2 00:24:46.944 [2024-06-07 16:33:12.986591] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:24:46.944 [2024-06-07 16:33:12.986591] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 3 00:24:46.944 16:33:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:24:46.944 16:33:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@863 -- # return 0 00:24:46.944 16:33:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:46.944 16:33:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@729 -- # xtrace_disable 00:24:46.944 16:33:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:46.944 16:33:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:46.944 16:33:13 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:46.944 16:33:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:46.944 16:33:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:46.945 [2024-06-07 16:33:13.639885] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:46.945 16:33:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:46.945 16:33:13 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:24:46.945 16:33:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:46.945 16:33:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set 
+x 00:24:46.945 Malloc0 00:24:46.945 16:33:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:46.945 16:33:13 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:46.945 16:33:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:46.945 16:33:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:46.945 16:33:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:46.945 16:33:13 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:46.945 16:33:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:46.945 16:33:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:46.945 16:33:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:46.945 16:33:13 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:46.945 16:33:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:46.945 16:33:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:46.945 [2024-06-07 16:33:13.709765] tcp.c: 982:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:46.945 16:33:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:46.945 16:33:13 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:46.945 16:33:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:46.945 16:33:13 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:46.945 [2024-06-07 16:33:13.721731] tcp.c: 982:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:46.945 16:33:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:46.945 16:33:13 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:24:46.945 16:33:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:46.945 16:33:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:46.945 Malloc1 00:24:46.945 16:33:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:46.945 16:33:13 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:24:46.945 16:33:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:46.945 16:33:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:46.945 16:33:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:46.945 16:33:13 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:24:46.945 16:33:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:46.945 16:33:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:46.945 16:33:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:46.945 16:33:13 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:24:46.945 16:33:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:46.945 
16:33:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:46.945 16:33:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:46.945 16:33:13 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:24:46.945 16:33:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:46.945 16:33:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:46.945 16:33:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:46.945 16:33:13 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=3194029 00:24:46.945 16:33:13 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:46.945 16:33:13 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:24:46.945 16:33:13 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 3194029 /var/tmp/bdevperf.sock 00:24:46.945 16:33:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@830 -- # '[' -z 3194029 ']' 00:24:47.205 16:33:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:47.205 16:33:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@835 -- # local max_retries=100 00:24:47.205 16:33:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:47.205 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:24:47.205 16:33:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@839 -- # xtrace_disable 00:24:47.205 16:33:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:47.776 16:33:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:24:47.776 16:33:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@863 -- # return 0 00:24:47.776 16:33:14 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:24:47.776 16:33:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:47.777 16:33:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:48.037 NVMe0n1 00:24:48.037 16:33:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:48.037 16:33:14 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:48.037 16:33:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:48.037 16:33:14 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:24:48.037 16:33:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:48.037 16:33:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:48.037 1 00:24:48.037 16:33:14 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:24:48.037 16:33:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@649 -- # local es=0 00:24:48.037 16:33:14 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:24:48.037 16:33:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:24:48.037 16:33:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:24:48.037 16:33:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:24:48.037 16:33:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:24:48.037 16:33:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@652 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:24:48.037 16:33:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:48.037 16:33:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:48.298 request: 00:24:48.298 { 00:24:48.298 "name": "NVMe0", 00:24:48.298 "trtype": "tcp", 00:24:48.298 "traddr": "10.0.0.2", 00:24:48.298 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:24:48.298 "hostaddr": "10.0.0.2", 00:24:48.298 "hostsvcid": "60000", 00:24:48.298 "adrfam": "ipv4", 00:24:48.298 "trsvcid": "4420", 00:24:48.298 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:48.298 "method": "bdev_nvme_attach_controller", 00:24:48.298 "req_id": 1 00:24:48.298 } 00:24:48.298 Got JSON-RPC error response 00:24:48.298 response: 00:24:48.298 { 00:24:48.298 "code": -114, 00:24:48.298 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:24:48.298 } 00:24:48.298 16:33:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:24:48.298 16:33:14 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@652 -- # es=1 00:24:48.298 16:33:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:24:48.298 16:33:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:24:48.298 16:33:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:24:48.298 16:33:14 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:24:48.298 16:33:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@649 -- # local es=0 00:24:48.298 16:33:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:24:48.298 16:33:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:24:48.298 16:33:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:24:48.298 16:33:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:24:48.298 16:33:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:24:48.298 16:33:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@652 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:24:48.298 16:33:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:48.298 16:33:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:48.298 request: 00:24:48.298 { 00:24:48.298 "name": "NVMe0", 00:24:48.298 "trtype": "tcp", 
00:24:48.298 "traddr": "10.0.0.2", 00:24:48.298 "hostaddr": "10.0.0.2", 00:24:48.298 "hostsvcid": "60000", 00:24:48.298 "adrfam": "ipv4", 00:24:48.298 "trsvcid": "4420", 00:24:48.298 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:24:48.298 "method": "bdev_nvme_attach_controller", 00:24:48.298 "req_id": 1 00:24:48.298 } 00:24:48.298 Got JSON-RPC error response 00:24:48.298 response: 00:24:48.298 { 00:24:48.298 "code": -114, 00:24:48.298 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:24:48.298 } 00:24:48.298 16:33:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:24:48.298 16:33:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@652 -- # es=1 00:24:48.298 16:33:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:24:48.298 16:33:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:24:48.298 16:33:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:24:48.298 16:33:14 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:24:48.298 16:33:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@649 -- # local es=0 00:24:48.298 16:33:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:24:48.298 16:33:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:24:48.298 16:33:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:24:48.298 16:33:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # 
type -t rpc_cmd 00:24:48.298 16:33:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:24:48.299 16:33:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@652 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:24:48.299 16:33:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:48.299 16:33:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:48.299 request: 00:24:48.299 { 00:24:48.299 "name": "NVMe0", 00:24:48.299 "trtype": "tcp", 00:24:48.299 "traddr": "10.0.0.2", 00:24:48.299 "hostaddr": "10.0.0.2", 00:24:48.299 "hostsvcid": "60000", 00:24:48.299 "adrfam": "ipv4", 00:24:48.299 "trsvcid": "4420", 00:24:48.299 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:48.299 "multipath": "disable", 00:24:48.299 "method": "bdev_nvme_attach_controller", 00:24:48.299 "req_id": 1 00:24:48.299 } 00:24:48.299 Got JSON-RPC error response 00:24:48.299 response: 00:24:48.299 { 00:24:48.299 "code": -114, 00:24:48.299 "message": "A controller named NVMe0 already exists and multipath is disabled\n" 00:24:48.299 } 00:24:48.299 16:33:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:24:48.299 16:33:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@652 -- # es=1 00:24:48.299 16:33:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:24:48.299 16:33:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:24:48.299 16:33:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:24:48.299 16:33:14 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 
10.0.0.2 -c 60000 -x failover 00:24:48.299 16:33:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@649 -- # local es=0 00:24:48.299 16:33:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:24:48.299 16:33:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:24:48.299 16:33:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:24:48.299 16:33:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:24:48.299 16:33:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:24:48.299 16:33:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@652 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:24:48.299 16:33:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:48.299 16:33:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:48.299 request: 00:24:48.299 { 00:24:48.299 "name": "NVMe0", 00:24:48.299 "trtype": "tcp", 00:24:48.299 "traddr": "10.0.0.2", 00:24:48.299 "hostaddr": "10.0.0.2", 00:24:48.299 "hostsvcid": "60000", 00:24:48.299 "adrfam": "ipv4", 00:24:48.299 "trsvcid": "4420", 00:24:48.299 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:48.299 "multipath": "failover", 00:24:48.299 "method": "bdev_nvme_attach_controller", 00:24:48.299 "req_id": 1 00:24:48.299 } 00:24:48.299 Got JSON-RPC error response 00:24:48.299 response: 00:24:48.299 { 00:24:48.299 "code": -114, 00:24:48.299 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:24:48.299 } 
00:24:48.299 16:33:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:24:48.299 16:33:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@652 -- # es=1 00:24:48.299 16:33:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:24:48.299 16:33:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:24:48.299 16:33:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:24:48.299 16:33:14 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:48.299 16:33:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:48.299 16:33:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:48.299 00:24:48.299 16:33:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:48.299 16:33:15 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:48.299 16:33:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:48.299 16:33:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:48.299 16:33:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:48.299 16:33:15 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:24:48.299 16:33:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:48.299 16:33:15 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@10 -- # set +x 00:24:48.299 00:24:48.299 16:33:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:48.299 16:33:15 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:48.299 16:33:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:48.299 16:33:15 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:24:48.299 16:33:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:48.559 16:33:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:48.559 16:33:15 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:24:48.559 16:33:15 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:49.501 0 00:24:49.501 16:33:16 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:24:49.501 16:33:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:49.501 16:33:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:49.501 16:33:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:49.501 16:33:16 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@100 -- # killprocess 3194029 00:24:49.501 16:33:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@949 -- # '[' -z 3194029 ']' 00:24:49.501 16:33:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # kill -0 3194029 00:24:49.501 16:33:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # uname 00:24:49.501 16:33:16 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:24:49.501 16:33:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3194029 00:24:49.501 16:33:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:24:49.501 16:33:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:24:49.501 16:33:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3194029' 00:24:49.501 killing process with pid 3194029 00:24:49.501 16:33:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@968 -- # kill 3194029 00:24:49.501 16:33:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@973 -- # wait 3194029 00:24:49.761 16:33:16 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:49.761 16:33:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:49.761 16:33:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:49.761 16:33:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:49.761 16:33:16 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:24:49.761 16:33:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:49.761 16:33:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:49.761 16:33:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:49.761 16:33:16 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:24:49.761 16:33:16 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@107 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:49.761 16:33:16 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # read -r file 00:24:49.761 16:33:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1610 -- # sort -u 00:24:49.761 16:33:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1610 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:24:49.761 16:33:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # cat 00:24:49.761 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:24:49.761 [2024-06-07 16:33:13.841516] Starting SPDK v24.09-pre git sha1 5a57befde / DPDK 24.03.0 initialization... 00:24:49.761 [2024-06-07 16:33:13.841570] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3194029 ] 00:24:49.761 EAL: No free 2048 kB hugepages reported on node 1 00:24:49.761 [2024-06-07 16:33:13.902812] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:49.761 [2024-06-07 16:33:13.967506] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:24:49.762 [2024-06-07 16:33:15.141702] bdev.c:4580:bdev_name_add: *ERROR*: Bdev name 86f72c4b-9fbb-4e56-a16e-ecb5f40d5501 already exists 00:24:49.762 [2024-06-07 16:33:15.141731] bdev.c:7696:bdev_register: *ERROR*: Unable to add uuid:86f72c4b-9fbb-4e56-a16e-ecb5f40d5501 alias for bdev NVMe1n1 00:24:49.762 [2024-06-07 16:33:15.141741] bdev_nvme.c:4308:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:24:49.762 Running I/O for 1 seconds... 
00:24:49.762 00:24:49.762 Latency(us) 00:24:49.762 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:49.762 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:24:49.762 NVMe0n1 : 1.01 20307.11 79.32 0.00 0.00 6286.27 4014.08 11796.48 00:24:49.762 =================================================================================================================== 00:24:49.762 Total : 20307.11 79.32 0.00 0.00 6286.27 4014.08 11796.48 00:24:49.762 Received shutdown signal, test time was about 1.000000 seconds 00:24:49.762 00:24:49.762 Latency(us) 00:24:49.762 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:49.762 =================================================================================================================== 00:24:49.762 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:49.762 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:24:49.762 16:33:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1617 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:49.762 16:33:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # read -r file 00:24:49.762 16:33:16 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@108 -- # nvmftestfini 00:24:49.762 16:33:16 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:49.762 16:33:16 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@117 -- # sync 00:24:49.762 16:33:16 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:49.762 16:33:16 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@120 -- # set +e 00:24:49.762 16:33:16 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:49.762 16:33:16 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:49.762 rmmod nvme_tcp 00:24:49.762 rmmod nvme_fabrics 00:24:49.762 rmmod nvme_keyring 
00:24:49.762 16:33:16 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:49.762 16:33:16 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@124 -- # set -e 00:24:49.762 16:33:16 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@125 -- # return 0 00:24:49.762 16:33:16 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@489 -- # '[' -n 3193737 ']' 00:24:49.762 16:33:16 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@490 -- # killprocess 3193737 00:24:49.762 16:33:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@949 -- # '[' -z 3193737 ']' 00:24:49.762 16:33:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # kill -0 3193737 00:24:49.762 16:33:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # uname 00:24:49.762 16:33:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:24:49.762 16:33:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3193737 00:24:50.021 16:33:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:24:50.021 16:33:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:24:50.022 16:33:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3193737' 00:24:50.022 killing process with pid 3193737 00:24:50.022 16:33:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@968 -- # kill 3193737 00:24:50.022 16:33:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@973 -- # wait 3193737 00:24:50.022 16:33:16 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:50.022 16:33:16 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:50.022 16:33:16 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:50.022 16:33:16 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@274 -- 
# [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:50.022 16:33:16 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:50.022 16:33:16 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:50.022 16:33:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:50.022 16:33:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:52.568 16:33:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:52.568 00:24:52.568 real 0m13.388s 00:24:52.568 user 0m16.355s 00:24:52.568 sys 0m6.112s 00:24:52.568 16:33:18 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1125 -- # xtrace_disable 00:24:52.568 16:33:18 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:52.568 ************************************ 00:24:52.568 END TEST nvmf_multicontroller 00:24:52.568 ************************************ 00:24:52.568 16:33:18 nvmf_tcp -- nvmf/nvmf.sh@93 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:24:52.568 16:33:18 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:24:52.568 16:33:18 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:24:52.568 16:33:18 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:52.568 ************************************ 00:24:52.568 START TEST nvmf_aer 00:24:52.568 ************************************ 00:24:52.568 16:33:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:24:52.568 * Looking for test storage... 
00:24:52.568 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:52.568 16:33:19 nvmf_tcp.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:52.568 16:33:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:24:52.568 16:33:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:52.568 16:33:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:52.568 16:33:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:52.568 16:33:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:52.568 16:33:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:52.568 16:33:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:52.568 16:33:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:52.568 16:33:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:52.568 16:33:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:52.568 16:33:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:52.568 16:33:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:52.568 16:33:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:52.568 16:33:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:52.568 16:33:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:52.568 16:33:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:52.568 16:33:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:52.568 16:33:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:52.568 16:33:19 nvmf_tcp.nvmf_aer -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:52.568 16:33:19 nvmf_tcp.nvmf_aer -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:52.568 16:33:19 nvmf_tcp.nvmf_aer -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:52.569 16:33:19 nvmf_tcp.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:52.569 16:33:19 nvmf_tcp.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:52.569 16:33:19 nvmf_tcp.nvmf_aer -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:52.569 16:33:19 nvmf_tcp.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:24:52.569 16:33:19 nvmf_tcp.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:52.569 16:33:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@47 -- # : 0 00:24:52.569 16:33:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:52.569 16:33:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:52.569 16:33:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:52.569 16:33:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:52.569 16:33:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:52.569 16:33:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:52.569 16:33:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:52.569 16:33:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@51 -- # 
have_pci_nics=0 00:24:52.569 16:33:19 nvmf_tcp.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:24:52.569 16:33:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:52.569 16:33:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:52.569 16:33:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:52.569 16:33:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:52.569 16:33:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:52.569 16:33:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:52.569 16:33:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:52.569 16:33:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:52.569 16:33:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:52.569 16:33:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:52.569 16:33:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@285 -- # xtrace_disable 00:24:52.569 16:33:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:59.171 16:33:25 nvmf_tcp.nvmf_aer -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:59.171 16:33:25 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # pci_devs=() 00:24:59.171 16:33:25 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:59.171 16:33:25 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:59.171 16:33:25 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:59.171 16:33:25 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:59.171 16:33:25 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:59.171 16:33:25 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # net_devs=() 00:24:59.171 16:33:25 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:59.171 16:33:25 
nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # e810=() 00:24:59.171 16:33:25 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # local -ga e810 00:24:59.171 16:33:25 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # x722=() 00:24:59.171 16:33:25 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # local -ga x722 00:24:59.171 16:33:25 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # mlx=() 00:24:59.171 16:33:25 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # local -ga mlx 00:24:59.171 16:33:25 nvmf_tcp.nvmf_aer -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:59.171 16:33:25 nvmf_tcp.nvmf_aer -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:59.171 16:33:25 nvmf_tcp.nvmf_aer -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:59.171 16:33:25 nvmf_tcp.nvmf_aer -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:59.171 16:33:25 nvmf_tcp.nvmf_aer -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:59.171 16:33:25 nvmf_tcp.nvmf_aer -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:59.171 16:33:25 nvmf_tcp.nvmf_aer -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:59.171 16:33:25 nvmf_tcp.nvmf_aer -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:59.171 16:33:25 nvmf_tcp.nvmf_aer -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:59.171 16:33:25 nvmf_tcp.nvmf_aer -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:59.171 16:33:25 nvmf_tcp.nvmf_aer -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:59.171 16:33:25 nvmf_tcp.nvmf_aer -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:59.171 16:33:25 nvmf_tcp.nvmf_aer -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:59.171 16:33:25 nvmf_tcp.nvmf_aer -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:59.171 16:33:25 nvmf_tcp.nvmf_aer -- nvmf/common.sh@329 -- # [[ e810 
== e810 ]] 00:24:59.171 16:33:25 nvmf_tcp.nvmf_aer -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:59.171 16:33:25 nvmf_tcp.nvmf_aer -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:59.171 16:33:25 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:59.171 16:33:25 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:24:59.171 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:24:59.171 16:33:25 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:59.171 16:33:25 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:59.171 16:33:25 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:59.171 16:33:25 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:59.171 16:33:25 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:59.171 16:33:25 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:59.171 16:33:25 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:24:59.171 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:24:59.171 16:33:25 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:59.171 16:33:25 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:59.171 16:33:25 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:59.171 16:33:25 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:59.171 16:33:25 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:59.171 16:33:25 nvmf_tcp.nvmf_aer -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:59.171 16:33:25 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:59.171 16:33:25 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:59.171 16:33:25 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:59.171 16:33:25 nvmf_tcp.nvmf_aer -- 
nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:59.171 16:33:25 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:59.171 16:33:25 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:59.171 16:33:25 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:59.171 16:33:25 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:59.171 16:33:25 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:59.171 16:33:25 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:24:59.171 Found net devices under 0000:4b:00.0: cvl_0_0 00:24:59.171 16:33:25 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:59.171 16:33:25 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:59.171 16:33:25 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:59.171 16:33:25 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:59.171 16:33:25 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:59.171 16:33:25 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:59.171 16:33:25 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:59.171 16:33:25 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:59.171 16:33:25 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:24:59.171 Found net devices under 0000:4b:00.1: cvl_0_1 00:24:59.171 16:33:25 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:59.171 16:33:25 nvmf_tcp.nvmf_aer -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:59.171 16:33:25 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # is_hw=yes 00:24:59.171 16:33:25 nvmf_tcp.nvmf_aer -- nvmf/common.sh@416 -- # [[ yes == yes ]] 
00:24:59.171 16:33:25 nvmf_tcp.nvmf_aer -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:59.171 16:33:25 nvmf_tcp.nvmf_aer -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:59.171 16:33:25 nvmf_tcp.nvmf_aer -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:59.171 16:33:25 nvmf_tcp.nvmf_aer -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:59.171 16:33:25 nvmf_tcp.nvmf_aer -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:59.171 16:33:25 nvmf_tcp.nvmf_aer -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:59.171 16:33:25 nvmf_tcp.nvmf_aer -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:59.171 16:33:25 nvmf_tcp.nvmf_aer -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:59.171 16:33:25 nvmf_tcp.nvmf_aer -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:59.171 16:33:25 nvmf_tcp.nvmf_aer -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:59.171 16:33:25 nvmf_tcp.nvmf_aer -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:59.171 16:33:25 nvmf_tcp.nvmf_aer -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:59.171 16:33:25 nvmf_tcp.nvmf_aer -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:59.171 16:33:25 nvmf_tcp.nvmf_aer -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:59.171 16:33:25 nvmf_tcp.nvmf_aer -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:59.171 16:33:25 nvmf_tcp.nvmf_aer -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:59.171 16:33:25 nvmf_tcp.nvmf_aer -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:59.171 16:33:25 nvmf_tcp.nvmf_aer -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:59.171 16:33:25 nvmf_tcp.nvmf_aer -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:59.171 16:33:25 nvmf_tcp.nvmf_aer -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link 
set lo up 00:24:59.171 16:33:25 nvmf_tcp.nvmf_aer -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:59.171 16:33:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:59.171 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:59.171 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.680 ms 00:24:59.171 00:24:59.171 --- 10.0.0.2 ping statistics --- 00:24:59.171 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:59.171 rtt min/avg/max/mdev = 0.680/0.680/0.680/0.000 ms 00:24:59.171 16:33:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:59.172 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:59.172 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.319 ms 00:24:59.172 00:24:59.172 --- 10.0.0.1 ping statistics --- 00:24:59.172 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:59.172 rtt min/avg/max/mdev = 0.319/0.319/0.319/0.000 ms 00:24:59.172 16:33:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:59.172 16:33:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@422 -- # return 0 00:24:59.172 16:33:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:59.432 16:33:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:59.433 16:33:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:59.433 16:33:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:59.433 16:33:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:59.433 16:33:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:59.433 16:33:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:59.433 16:33:26 nvmf_tcp.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:24:59.433 16:33:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 
00:24:59.433 16:33:26 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@723 -- # xtrace_disable 00:24:59.433 16:33:26 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:59.433 16:33:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@481 -- # nvmfpid=3198718 00:24:59.433 16:33:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@482 -- # waitforlisten 3198718 00:24:59.433 16:33:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:59.433 16:33:26 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@830 -- # '[' -z 3198718 ']' 00:24:59.433 16:33:26 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:59.433 16:33:26 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@835 -- # local max_retries=100 00:24:59.433 16:33:26 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:59.433 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:59.433 16:33:26 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@839 -- # xtrace_disable 00:24:59.433 16:33:26 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:59.433 [2024-06-07 16:33:26.119357] Starting SPDK v24.09-pre git sha1 5a57befde / DPDK 24.03.0 initialization... 00:24:59.433 [2024-06-07 16:33:26.119411] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:59.433 EAL: No free 2048 kB hugepages reported on node 1 00:24:59.433 [2024-06-07 16:33:26.186016] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:59.433 [2024-06-07 16:33:26.251547] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:24:59.433 [2024-06-07 16:33:26.251583] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:59.433 [2024-06-07 16:33:26.251590] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:59.433 [2024-06-07 16:33:26.251597] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:59.433 [2024-06-07 16:33:26.251602] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:59.433 [2024-06-07 16:33:26.255418] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:24:59.433 [2024-06-07 16:33:26.255481] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 2 00:24:59.433 [2024-06-07 16:33:26.255767] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 3 00:24:59.433 [2024-06-07 16:33:26.255768] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:24:59.694 16:33:26 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:24:59.694 16:33:26 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@863 -- # return 0 00:24:59.694 16:33:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:59.694 16:33:26 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@729 -- # xtrace_disable 00:24:59.694 16:33:26 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:59.694 16:33:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:59.694 16:33:26 nvmf_tcp.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:59.694 16:33:26 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:59.694 16:33:26 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:59.694 [2024-06-07 16:33:26.400262] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:59.694 16:33:26 
nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:59.694 16:33:26 nvmf_tcp.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:24:59.694 16:33:26 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:59.694 16:33:26 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:59.694 Malloc0 00:24:59.694 16:33:26 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:59.694 16:33:26 nvmf_tcp.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:24:59.694 16:33:26 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:59.694 16:33:26 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:59.694 16:33:26 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:59.694 16:33:26 nvmf_tcp.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:59.694 16:33:26 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:59.694 16:33:26 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:59.694 16:33:26 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:59.694 16:33:26 nvmf_tcp.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:59.694 16:33:26 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:59.694 16:33:26 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:59.694 [2024-06-07 16:33:26.457203] tcp.c: 982:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:59.694 16:33:26 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:59.694 16:33:26 nvmf_tcp.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:24:59.694 16:33:26 nvmf_tcp.nvmf_aer -- 
common/autotest_common.sh@560 -- # xtrace_disable 00:24:59.694 16:33:26 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:59.694 [ 00:24:59.694 { 00:24:59.694 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:24:59.694 "subtype": "Discovery", 00:24:59.694 "listen_addresses": [], 00:24:59.694 "allow_any_host": true, 00:24:59.694 "hosts": [] 00:24:59.694 }, 00:24:59.694 { 00:24:59.694 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:59.694 "subtype": "NVMe", 00:24:59.694 "listen_addresses": [ 00:24:59.694 { 00:24:59.694 "trtype": "TCP", 00:24:59.694 "adrfam": "IPv4", 00:24:59.694 "traddr": "10.0.0.2", 00:24:59.694 "trsvcid": "4420" 00:24:59.694 } 00:24:59.694 ], 00:24:59.694 "allow_any_host": true, 00:24:59.694 "hosts": [], 00:24:59.694 "serial_number": "SPDK00000000000001", 00:24:59.694 "model_number": "SPDK bdev Controller", 00:24:59.694 "max_namespaces": 2, 00:24:59.694 "min_cntlid": 1, 00:24:59.694 "max_cntlid": 65519, 00:24:59.694 "namespaces": [ 00:24:59.694 { 00:24:59.694 "nsid": 1, 00:24:59.694 "bdev_name": "Malloc0", 00:24:59.694 "name": "Malloc0", 00:24:59.694 "nguid": "F2329D07D94449F18CB300F4D70096F9", 00:24:59.694 "uuid": "f2329d07-d944-49f1-8cb3-00f4d70096f9" 00:24:59.694 } 00:24:59.694 ] 00:24:59.694 } 00:24:59.694 ] 00:24:59.694 16:33:26 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:59.694 16:33:26 nvmf_tcp.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:24:59.694 16:33:26 nvmf_tcp.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:24:59.694 16:33:26 nvmf_tcp.nvmf_aer -- host/aer.sh@33 -- # aerpid=3198746 00:24:59.694 16:33:26 nvmf_tcp.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:24:59.695 16:33:26 nvmf_tcp.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:24:59.695 16:33:26 
nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1264 -- # local i=0 00:24:59.695 16:33:26 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:24:59.695 16:33:26 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' 0 -lt 200 ']' 00:24:59.695 16:33:26 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # i=1 00:24:59.695 16:33:26 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # sleep 0.1 00:24:59.695 EAL: No free 2048 kB hugepages reported on node 1 00:24:59.955 16:33:26 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:24:59.955 16:33:26 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' 1 -lt 200 ']' 00:24:59.955 16:33:26 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # i=2 00:24:59.955 16:33:26 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # sleep 0.1 00:24:59.955 16:33:26 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:24:59.955 16:33:26 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' 2 -lt 200 ']' 00:24:59.955 16:33:26 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # i=3 00:24:59.955 16:33:26 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # sleep 0.1 00:24:59.955 16:33:26 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:24:59.955 16:33:26 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:24:59.955 16:33:26 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1275 -- # return 0 00:24:59.955 16:33:26 nvmf_tcp.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:24:59.955 16:33:26 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:59.956 16:33:26 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:00.217 Malloc1 00:25:00.217 16:33:26 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:00.217 16:33:26 nvmf_tcp.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:25:00.217 16:33:26 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:00.217 16:33:26 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:00.217 16:33:26 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:00.217 16:33:26 nvmf_tcp.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:25:00.217 16:33:26 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:00.217 16:33:26 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:00.217 [ 00:25:00.217 { 00:25:00.217 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:25:00.217 "subtype": "Discovery", 00:25:00.217 "listen_addresses": [], 00:25:00.217 "allow_any_host": true, 00:25:00.217 "hosts": [] 00:25:00.217 }, 00:25:00.217 { 00:25:00.217 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:00.217 "subtype": "NVMe", 00:25:00.217 "listen_addresses": [ 00:25:00.217 { 00:25:00.217 "trtype": "TCP", 00:25:00.217 "adrfam": "IPv4", 00:25:00.217 "traddr": "10.0.0.2", 00:25:00.217 "trsvcid": "4420" 00:25:00.217 } 00:25:00.217 ], 00:25:00.217 "allow_any_host": true, 00:25:00.217 "hosts": [], 00:25:00.217 "serial_number": "SPDK00000000000001", 00:25:00.217 "model_number": "SPDK bdev Controller", 00:25:00.217 "max_namespaces": 2, 00:25:00.217 "min_cntlid": 1, 00:25:00.217 "max_cntlid": 65519, 
00:25:00.217 "namespaces": [ 00:25:00.217 { 00:25:00.217 "nsid": 1, 00:25:00.217 "bdev_name": "Malloc0", 00:25:00.217 "name": "Malloc0", 00:25:00.217 "nguid": "F2329D07D94449F18CB300F4D70096F9", 00:25:00.217 "uuid": "f2329d07-d944-49f1-8cb3-00f4d70096f9" 00:25:00.217 }, 00:25:00.217 { 00:25:00.217 "nsid": 2, 00:25:00.217 Asynchronous Event Request test 00:25:00.218 Attaching to 10.0.0.2 00:25:00.218 Attached to 10.0.0.2 00:25:00.218 Registering asynchronous event callbacks... 00:25:00.218 Starting namespace attribute notice tests for all controllers... 00:25:00.218 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:25:00.218 aer_cb - Changed Namespace 00:25:00.218 Cleaning up... 00:25:00.218 "bdev_name": "Malloc1", 00:25:00.218 "name": "Malloc1", 00:25:00.218 "nguid": "F37A8519CF2B4E9AAEB814E74DFE2A6F", 00:25:00.218 "uuid": "f37a8519-cf2b-4e9a-aeb8-14e74dfe2a6f" 00:25:00.218 } 00:25:00.218 ] 00:25:00.218 } 00:25:00.218 ] 00:25:00.218 16:33:26 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:00.218 16:33:26 nvmf_tcp.nvmf_aer -- host/aer.sh@43 -- # wait 3198746 00:25:00.218 16:33:26 nvmf_tcp.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:25:00.218 16:33:26 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:00.218 16:33:26 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:00.218 16:33:26 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:00.218 16:33:26 nvmf_tcp.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:25:00.218 16:33:26 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:00.218 16:33:26 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:00.218 16:33:26 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:00.218 16:33:26 nvmf_tcp.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:00.218 
16:33:26 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:00.218 16:33:26 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:00.218 16:33:26 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:00.218 16:33:26 nvmf_tcp.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:25:00.218 16:33:26 nvmf_tcp.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:25:00.218 16:33:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:00.218 16:33:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@117 -- # sync 00:25:00.218 16:33:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:00.218 16:33:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@120 -- # set +e 00:25:00.218 16:33:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:00.218 16:33:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:00.218 rmmod nvme_tcp 00:25:00.218 rmmod nvme_fabrics 00:25:00.218 rmmod nvme_keyring 00:25:00.218 16:33:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:00.218 16:33:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@124 -- # set -e 00:25:00.218 16:33:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@125 -- # return 0 00:25:00.218 16:33:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@489 -- # '[' -n 3198718 ']' 00:25:00.218 16:33:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@490 -- # killprocess 3198718 00:25:00.218 16:33:26 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@949 -- # '[' -z 3198718 ']' 00:25:00.218 16:33:26 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@953 -- # kill -0 3198718 00:25:00.218 16:33:26 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@954 -- # uname 00:25:00.218 16:33:26 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:25:00.218 16:33:26 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3198718 00:25:00.218 16:33:27 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@955 -- # 
process_name=reactor_0 00:25:00.218 16:33:27 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:25:00.218 16:33:27 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3198718' 00:25:00.218 killing process with pid 3198718 00:25:00.218 16:33:27 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@968 -- # kill 3198718 00:25:00.218 16:33:27 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@973 -- # wait 3198718 00:25:00.479 16:33:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:00.479 16:33:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:00.479 16:33:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:00.479 16:33:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:00.479 16:33:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:00.479 16:33:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:00.479 16:33:27 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:00.479 16:33:27 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:03.025 16:33:29 nvmf_tcp.nvmf_aer -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:03.025 00:25:03.025 real 0m10.321s 00:25:03.025 user 0m5.711s 00:25:03.025 sys 0m5.543s 00:25:03.025 16:33:29 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1125 -- # xtrace_disable 00:25:03.025 16:33:29 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:03.025 ************************************ 00:25:03.025 END TEST nvmf_aer 00:25:03.025 ************************************ 00:25:03.025 16:33:29 nvmf_tcp -- nvmf/nvmf.sh@94 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:25:03.025 16:33:29 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 
00:25:03.025 16:33:29 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:25:03.025 16:33:29 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:03.025 ************************************ 00:25:03.025 START TEST nvmf_async_init 00:25:03.025 ************************************ 00:25:03.025 16:33:29 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:25:03.025 * Looking for test storage... 00:25:03.025 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:03.025 16:33:29 nvmf_tcp.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:03.025 16:33:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:25:03.025 16:33:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:03.025 16:33:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:03.025 16:33:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:03.025 16:33:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:03.025 16:33:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:03.025 16:33:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:03.025 16:33:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:03.025 16:33:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:03.025 16:33:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:03.025 16:33:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:03.025 16:33:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:03.025 16:33:29 
nvmf_tcp.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:03.025 16:33:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:03.025 16:33:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:03.025 16:33:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:03.025 16:33:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:03.025 16:33:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:03.025 16:33:29 nvmf_tcp.nvmf_async_init -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:03.025 16:33:29 nvmf_tcp.nvmf_async_init -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:03.025 16:33:29 nvmf_tcp.nvmf_async_init -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:03.025 16:33:29 nvmf_tcp.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:03.025 16:33:29 nvmf_tcp.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:03.025 16:33:29 nvmf_tcp.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:03.025 16:33:29 nvmf_tcp.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:25:03.025 16:33:29 nvmf_tcp.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:03.025 16:33:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@47 -- # : 0 00:25:03.026 16:33:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@48 -- # export 
NVMF_APP_SHM_ID 00:25:03.026 16:33:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:03.026 16:33:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:03.026 16:33:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:03.026 16:33:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:03.026 16:33:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:03.026 16:33:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:03.026 16:33:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:03.026 16:33:29 nvmf_tcp.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:25:03.026 16:33:29 nvmf_tcp.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:25:03.026 16:33:29 nvmf_tcp.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:25:03.026 16:33:29 nvmf_tcp.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:25:03.026 16:33:29 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:25:03.026 16:33:29 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:25:03.026 16:33:29 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # nguid=24f094b726654b24ab38786dd2c0b5d3 00:25:03.026 16:33:29 nvmf_tcp.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:25:03.026 16:33:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:03.026 16:33:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:03.026 16:33:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:03.026 16:33:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:03.026 16:33:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:03.026 16:33:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:25:03.026 16:33:29 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:03.026 16:33:29 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:03.026 16:33:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:03.026 16:33:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:03.026 16:33:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@285 -- # xtrace_disable 00:25:03.026 16:33:29 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:09.678 16:33:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:09.678 16:33:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # pci_devs=() 00:25:09.679 16:33:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:09.679 16:33:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:09.679 16:33:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:09.679 16:33:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:09.679 16:33:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:09.679 16:33:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # net_devs=() 00:25:09.679 16:33:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:09.679 16:33:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # e810=() 00:25:09.679 16:33:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # local -ga e810 00:25:09.679 16:33:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # x722=() 00:25:09.679 16:33:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # local -ga x722 00:25:09.679 16:33:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # mlx=() 00:25:09.679 16:33:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # local -ga mlx 
00:25:09.679 16:33:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:09.679 16:33:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:09.679 16:33:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:09.679 16:33:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:09.679 16:33:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:09.679 16:33:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:09.679 16:33:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:09.679 16:33:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:09.679 16:33:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:09.679 16:33:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:09.679 16:33:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:09.679 16:33:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:09.679 16:33:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:09.679 16:33:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:09.679 16:33:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:09.679 16:33:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:09.679 16:33:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:09.679 16:33:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:09.679 16:33:36 
nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:25:09.679 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:25:09.679 16:33:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:09.679 16:33:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:09.679 16:33:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:09.679 16:33:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:09.679 16:33:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:09.679 16:33:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:09.679 16:33:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:25:09.679 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:25:09.679 16:33:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:09.679 16:33:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:09.679 16:33:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:09.679 16:33:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:09.679 16:33:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:09.679 16:33:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:09.679 16:33:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:09.679 16:33:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:09.679 16:33:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:09.679 16:33:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:09.679 16:33:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:09.679 
16:33:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:09.679 16:33:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:09.679 16:33:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:09.679 16:33:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:09.679 16:33:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:25:09.679 Found net devices under 0000:4b:00.0: cvl_0_0 00:25:09.679 16:33:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:09.679 16:33:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:09.679 16:33:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:09.679 16:33:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:09.679 16:33:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:09.679 16:33:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:09.679 16:33:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:09.679 16:33:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:09.679 16:33:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:25:09.679 Found net devices under 0000:4b:00.1: cvl_0_1 00:25:09.679 16:33:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:09.679 16:33:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:09.679 16:33:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # is_hw=yes 00:25:09.679 16:33:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:09.679 16:33:36 nvmf_tcp.nvmf_async_init 
-- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:09.679 16:33:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:09.679 16:33:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:09.679 16:33:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:09.679 16:33:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:09.679 16:33:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:09.679 16:33:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:09.679 16:33:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:09.679 16:33:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:09.679 16:33:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:09.679 16:33:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:09.679 16:33:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:09.679 16:33:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:09.679 16:33:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:09.679 16:33:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:09.679 16:33:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:09.679 16:33:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:09.679 16:33:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:09.679 16:33:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:09.941 
16:33:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:09.941 16:33:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:09.941 16:33:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:09.941 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:09.941 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.627 ms 00:25:09.941 00:25:09.941 --- 10.0.0.2 ping statistics --- 00:25:09.941 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:09.941 rtt min/avg/max/mdev = 0.627/0.627/0.627/0.000 ms 00:25:09.941 16:33:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:09.941 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:09.941 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.314 ms 00:25:09.941 00:25:09.941 --- 10.0.0.1 ping statistics --- 00:25:09.941 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:09.941 rtt min/avg/max/mdev = 0.314/0.314/0.314/0.000 ms 00:25:09.941 16:33:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:09.941 16:33:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@422 -- # return 0 00:25:09.941 16:33:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:09.941 16:33:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:09.941 16:33:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:09.941 16:33:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:09.941 16:33:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:09.941 16:33:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:09.941 16:33:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@474 -- # modprobe nvme-tcp 
00:25:09.941 16:33:36 nvmf_tcp.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:25:09.941 16:33:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:09.941 16:33:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@723 -- # xtrace_disable 00:25:09.941 16:33:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:09.941 16:33:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@481 -- # nvmfpid=3203064 00:25:09.941 16:33:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@482 -- # waitforlisten 3203064 00:25:09.941 16:33:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:25:09.941 16:33:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@830 -- # '[' -z 3203064 ']' 00:25:09.941 16:33:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:09.941 16:33:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@835 -- # local max_retries=100 00:25:09.941 16:33:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:09.941 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:09.941 16:33:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@839 -- # xtrace_disable 00:25:09.941 16:33:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:09.941 [2024-06-07 16:33:36.746391] Starting SPDK v24.09-pre git sha1 5a57befde / DPDK 24.03.0 initialization... 
00:25:09.941 [2024-06-07 16:33:36.746459] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:09.941 EAL: No free 2048 kB hugepages reported on node 1 00:25:10.202 [2024-06-07 16:33:36.814679] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:10.202 [2024-06-07 16:33:36.883305] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:10.202 [2024-06-07 16:33:36.883343] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:10.202 [2024-06-07 16:33:36.883351] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:10.202 [2024-06-07 16:33:36.883357] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:10.202 [2024-06-07 16:33:36.883363] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:25:10.202 [2024-06-07 16:33:36.883383] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:25:10.773 16:33:37 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:25:10.773 16:33:37 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@863 -- # return 0 00:25:10.773 16:33:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:10.773 16:33:37 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@729 -- # xtrace_disable 00:25:10.773 16:33:37 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:10.773 16:33:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:10.773 16:33:37 nvmf_tcp.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:25:10.773 16:33:37 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:10.773 16:33:37 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:10.773 [2024-06-07 16:33:37.570065] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:10.773 16:33:37 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:10.773 16:33:37 nvmf_tcp.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:25:10.773 16:33:37 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:10.773 16:33:37 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:10.773 null0 00:25:10.773 16:33:37 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:10.773 16:33:37 nvmf_tcp.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:25:10.773 16:33:37 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:10.773 16:33:37 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:10.773 
16:33:37 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:10.773 16:33:37 nvmf_tcp.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:25:10.773 16:33:37 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:10.773 16:33:37 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:10.773 16:33:37 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:10.773 16:33:37 nvmf_tcp.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 24f094b726654b24ab38786dd2c0b5d3 00:25:10.773 16:33:37 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:10.773 16:33:37 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:10.773 16:33:37 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:10.773 16:33:37 nvmf_tcp.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:10.773 16:33:37 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:10.773 16:33:37 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:10.773 [2024-06-07 16:33:37.622314] tcp.c: 982:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:11.034 16:33:37 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:11.034 16:33:37 nvmf_tcp.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:25:11.034 16:33:37 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:11.034 16:33:37 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:11.034 nvme0n1 00:25:11.034 16:33:37 
nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:11.034 16:33:37 nvmf_tcp.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:25:11.034 16:33:37 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:11.034 16:33:37 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:11.034 [ 00:25:11.034 { 00:25:11.034 "name": "nvme0n1", 00:25:11.034 "aliases": [ 00:25:11.034 "24f094b7-2665-4b24-ab38-786dd2c0b5d3" 00:25:11.034 ], 00:25:11.034 "product_name": "NVMe disk", 00:25:11.034 "block_size": 512, 00:25:11.034 "num_blocks": 2097152, 00:25:11.034 "uuid": "24f094b7-2665-4b24-ab38-786dd2c0b5d3", 00:25:11.034 "assigned_rate_limits": { 00:25:11.034 "rw_ios_per_sec": 0, 00:25:11.034 "rw_mbytes_per_sec": 0, 00:25:11.034 "r_mbytes_per_sec": 0, 00:25:11.034 "w_mbytes_per_sec": 0 00:25:11.034 }, 00:25:11.034 "claimed": false, 00:25:11.034 "zoned": false, 00:25:11.034 "supported_io_types": { 00:25:11.034 "read": true, 00:25:11.034 "write": true, 00:25:11.034 "unmap": false, 00:25:11.034 "write_zeroes": true, 00:25:11.034 "flush": true, 00:25:11.034 "reset": true, 00:25:11.034 "compare": true, 00:25:11.034 "compare_and_write": true, 00:25:11.034 "abort": true, 00:25:11.034 "nvme_admin": true, 00:25:11.034 "nvme_io": true 00:25:11.034 }, 00:25:11.034 "memory_domains": [ 00:25:11.034 { 00:25:11.034 "dma_device_id": "system", 00:25:11.034 "dma_device_type": 1 00:25:11.034 } 00:25:11.034 ], 00:25:11.034 "driver_specific": { 00:25:11.034 "nvme": [ 00:25:11.034 { 00:25:11.034 "trid": { 00:25:11.034 "trtype": "TCP", 00:25:11.034 "adrfam": "IPv4", 00:25:11.034 "traddr": "10.0.0.2", 00:25:11.034 "trsvcid": "4420", 00:25:11.034 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:25:11.034 }, 00:25:11.034 "ctrlr_data": { 00:25:11.034 "cntlid": 1, 00:25:11.034 "vendor_id": "0x8086", 00:25:11.034 "model_number": "SPDK bdev Controller", 00:25:11.034 "serial_number": "00000000000000000000", 
00:25:11.034 "firmware_revision": "24.09", 00:25:11.034 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:11.034 "oacs": { 00:25:11.034 "security": 0, 00:25:11.034 "format": 0, 00:25:11.034 "firmware": 0, 00:25:11.034 "ns_manage": 0 00:25:11.034 }, 00:25:11.034 "multi_ctrlr": true, 00:25:11.034 "ana_reporting": false 00:25:11.034 }, 00:25:11.034 "vs": { 00:25:11.034 "nvme_version": "1.3" 00:25:11.034 }, 00:25:11.034 "ns_data": { 00:25:11.034 "id": 1, 00:25:11.034 "can_share": true 00:25:11.034 } 00:25:11.034 } 00:25:11.034 ], 00:25:11.034 "mp_policy": "active_passive" 00:25:11.034 } 00:25:11.034 } 00:25:11.034 ] 00:25:11.034 16:33:37 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:11.034 16:33:37 nvmf_tcp.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:25:11.295 16:33:37 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:11.295 16:33:37 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:11.295 [2024-06-07 16:33:37.890853] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:11.295 [2024-06-07 16:33:37.890914] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xef3b20 (9): Bad file descriptor 00:25:11.295 [2024-06-07 16:33:38.022498] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:25:11.296 16:33:38 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:11.296 16:33:38 nvmf_tcp.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:25:11.296 16:33:38 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:11.296 16:33:38 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:11.296 [ 00:25:11.296 { 00:25:11.296 "name": "nvme0n1", 00:25:11.296 "aliases": [ 00:25:11.296 "24f094b7-2665-4b24-ab38-786dd2c0b5d3" 00:25:11.296 ], 00:25:11.296 "product_name": "NVMe disk", 00:25:11.296 "block_size": 512, 00:25:11.296 "num_blocks": 2097152, 00:25:11.296 "uuid": "24f094b7-2665-4b24-ab38-786dd2c0b5d3", 00:25:11.296 "assigned_rate_limits": { 00:25:11.296 "rw_ios_per_sec": 0, 00:25:11.296 "rw_mbytes_per_sec": 0, 00:25:11.296 "r_mbytes_per_sec": 0, 00:25:11.296 "w_mbytes_per_sec": 0 00:25:11.296 }, 00:25:11.296 "claimed": false, 00:25:11.296 "zoned": false, 00:25:11.296 "supported_io_types": { 00:25:11.296 "read": true, 00:25:11.296 "write": true, 00:25:11.296 "unmap": false, 00:25:11.296 "write_zeroes": true, 00:25:11.296 "flush": true, 00:25:11.296 "reset": true, 00:25:11.296 "compare": true, 00:25:11.296 "compare_and_write": true, 00:25:11.296 "abort": true, 00:25:11.296 "nvme_admin": true, 00:25:11.296 "nvme_io": true 00:25:11.296 }, 00:25:11.296 "memory_domains": [ 00:25:11.296 { 00:25:11.296 "dma_device_id": "system", 00:25:11.296 "dma_device_type": 1 00:25:11.296 } 00:25:11.296 ], 00:25:11.296 "driver_specific": { 00:25:11.296 "nvme": [ 00:25:11.296 { 00:25:11.296 "trid": { 00:25:11.296 "trtype": "TCP", 00:25:11.296 "adrfam": "IPv4", 00:25:11.296 "traddr": "10.0.0.2", 00:25:11.296 "trsvcid": "4420", 00:25:11.296 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:25:11.296 }, 00:25:11.296 "ctrlr_data": { 00:25:11.296 "cntlid": 2, 00:25:11.296 "vendor_id": "0x8086", 00:25:11.296 "model_number": "SPDK bdev Controller", 00:25:11.296 "serial_number": 
"00000000000000000000", 00:25:11.296 "firmware_revision": "24.09", 00:25:11.296 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:11.296 "oacs": { 00:25:11.296 "security": 0, 00:25:11.296 "format": 0, 00:25:11.296 "firmware": 0, 00:25:11.296 "ns_manage": 0 00:25:11.296 }, 00:25:11.296 "multi_ctrlr": true, 00:25:11.296 "ana_reporting": false 00:25:11.296 }, 00:25:11.296 "vs": { 00:25:11.296 "nvme_version": "1.3" 00:25:11.296 }, 00:25:11.296 "ns_data": { 00:25:11.296 "id": 1, 00:25:11.296 "can_share": true 00:25:11.296 } 00:25:11.296 } 00:25:11.296 ], 00:25:11.296 "mp_policy": "active_passive" 00:25:11.296 } 00:25:11.296 } 00:25:11.296 ] 00:25:11.296 16:33:38 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:11.296 16:33:38 nvmf_tcp.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:11.296 16:33:38 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:11.296 16:33:38 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:11.296 16:33:38 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:11.296 16:33:38 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:25:11.296 16:33:38 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.3M3E5GcP0V 00:25:11.296 16:33:38 nvmf_tcp.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:25:11.296 16:33:38 nvmf_tcp.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.3M3E5GcP0V 00:25:11.296 16:33:38 nvmf_tcp.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:25:11.296 16:33:38 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:11.296 16:33:38 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:11.296 16:33:38 nvmf_tcp.nvmf_async_init -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:11.296 16:33:38 nvmf_tcp.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:25:11.296 16:33:38 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:11.296 16:33:38 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:11.296 [2024-06-07 16:33:38.091487] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:25:11.296 [2024-06-07 16:33:38.091610] tcp.c: 982:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:11.296 16:33:38 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:11.296 16:33:38 nvmf_tcp.nvmf_async_init -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.3M3E5GcP0V 00:25:11.296 16:33:38 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:11.296 16:33:38 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:11.296 [2024-06-07 16:33:38.103510] tcp.c:3685:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:25:11.296 16:33:38 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:11.296 16:33:38 nvmf_tcp.nvmf_async_init -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.3M3E5GcP0V 00:25:11.296 16:33:38 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:11.296 16:33:38 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:11.296 [2024-06-07 16:33:38.115544] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 
00:25:11.296 [2024-06-07 16:33:38.115581] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:25:11.557 nvme0n1 00:25:11.557 16:33:38 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:11.557 16:33:38 nvmf_tcp.nvmf_async_init -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:25:11.557 16:33:38 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:11.557 16:33:38 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:11.557 [ 00:25:11.557 { 00:25:11.557 "name": "nvme0n1", 00:25:11.557 "aliases": [ 00:25:11.557 "24f094b7-2665-4b24-ab38-786dd2c0b5d3" 00:25:11.557 ], 00:25:11.558 "product_name": "NVMe disk", 00:25:11.558 "block_size": 512, 00:25:11.558 "num_blocks": 2097152, 00:25:11.558 "uuid": "24f094b7-2665-4b24-ab38-786dd2c0b5d3", 00:25:11.558 "assigned_rate_limits": { 00:25:11.558 "rw_ios_per_sec": 0, 00:25:11.558 "rw_mbytes_per_sec": 0, 00:25:11.558 "r_mbytes_per_sec": 0, 00:25:11.558 "w_mbytes_per_sec": 0 00:25:11.558 }, 00:25:11.558 "claimed": false, 00:25:11.558 "zoned": false, 00:25:11.558 "supported_io_types": { 00:25:11.558 "read": true, 00:25:11.558 "write": true, 00:25:11.558 "unmap": false, 00:25:11.558 "write_zeroes": true, 00:25:11.558 "flush": true, 00:25:11.558 "reset": true, 00:25:11.558 "compare": true, 00:25:11.558 "compare_and_write": true, 00:25:11.558 "abort": true, 00:25:11.558 "nvme_admin": true, 00:25:11.558 "nvme_io": true 00:25:11.558 }, 00:25:11.558 "memory_domains": [ 00:25:11.558 { 00:25:11.558 "dma_device_id": "system", 00:25:11.558 "dma_device_type": 1 00:25:11.558 } 00:25:11.558 ], 00:25:11.558 "driver_specific": { 00:25:11.558 "nvme": [ 00:25:11.558 { 00:25:11.558 "trid": { 00:25:11.558 "trtype": "TCP", 00:25:11.558 "adrfam": "IPv4", 00:25:11.558 "traddr": "10.0.0.2", 00:25:11.558 "trsvcid": "4421", 00:25:11.558 "subnqn": 
"nqn.2016-06.io.spdk:cnode0" 00:25:11.558 }, 00:25:11.558 "ctrlr_data": { 00:25:11.558 "cntlid": 3, 00:25:11.558 "vendor_id": "0x8086", 00:25:11.558 "model_number": "SPDK bdev Controller", 00:25:11.558 "serial_number": "00000000000000000000", 00:25:11.558 "firmware_revision": "24.09", 00:25:11.558 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:11.558 "oacs": { 00:25:11.558 "security": 0, 00:25:11.558 "format": 0, 00:25:11.558 "firmware": 0, 00:25:11.558 "ns_manage": 0 00:25:11.558 }, 00:25:11.558 "multi_ctrlr": true, 00:25:11.558 "ana_reporting": false 00:25:11.558 }, 00:25:11.558 "vs": { 00:25:11.558 "nvme_version": "1.3" 00:25:11.558 }, 00:25:11.558 "ns_data": { 00:25:11.558 "id": 1, 00:25:11.558 "can_share": true 00:25:11.558 } 00:25:11.558 } 00:25:11.558 ], 00:25:11.558 "mp_policy": "active_passive" 00:25:11.558 } 00:25:11.558 } 00:25:11.558 ] 00:25:11.558 16:33:38 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:11.558 16:33:38 nvmf_tcp.nvmf_async_init -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:11.558 16:33:38 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:11.558 16:33:38 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:11.558 16:33:38 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:11.558 16:33:38 nvmf_tcp.nvmf_async_init -- host/async_init.sh@75 -- # rm -f /tmp/tmp.3M3E5GcP0V 00:25:11.558 16:33:38 nvmf_tcp.nvmf_async_init -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:25:11.558 16:33:38 nvmf_tcp.nvmf_async_init -- host/async_init.sh@78 -- # nvmftestfini 00:25:11.558 16:33:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:11.558 16:33:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@117 -- # sync 00:25:11.558 16:33:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:11.558 16:33:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@120 -- # set 
+e 00:25:11.558 16:33:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:11.558 16:33:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:11.558 rmmod nvme_tcp 00:25:11.558 rmmod nvme_fabrics 00:25:11.558 rmmod nvme_keyring 00:25:11.558 16:33:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:11.558 16:33:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@124 -- # set -e 00:25:11.558 16:33:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@125 -- # return 0 00:25:11.558 16:33:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@489 -- # '[' -n 3203064 ']' 00:25:11.558 16:33:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@490 -- # killprocess 3203064 00:25:11.558 16:33:38 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@949 -- # '[' -z 3203064 ']' 00:25:11.558 16:33:38 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@953 -- # kill -0 3203064 00:25:11.558 16:33:38 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@954 -- # uname 00:25:11.558 16:33:38 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:25:11.558 16:33:38 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3203064 00:25:11.558 16:33:38 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:25:11.558 16:33:38 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:25:11.558 16:33:38 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3203064' 00:25:11.558 killing process with pid 3203064 00:25:11.558 16:33:38 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@968 -- # kill 3203064 00:25:11.558 [2024-06-07 16:33:38.344106] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:25:11.558 [2024-06-07 16:33:38.344132] 
app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:25:11.558 16:33:38 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@973 -- # wait 3203064 00:25:11.818 16:33:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:11.818 16:33:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:11.818 16:33:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:11.818 16:33:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:11.818 16:33:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:11.818 16:33:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:11.818 16:33:38 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:11.818 16:33:38 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:13.735 16:33:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:13.735 00:25:13.735 real 0m11.190s 00:25:13.735 user 0m3.990s 00:25:13.735 sys 0m5.649s 00:25:13.735 16:33:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1125 -- # xtrace_disable 00:25:13.735 16:33:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:13.735 ************************************ 00:25:13.735 END TEST nvmf_async_init 00:25:13.735 ************************************ 00:25:13.735 16:33:40 nvmf_tcp -- nvmf/nvmf.sh@95 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:25:13.735 16:33:40 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:25:13.735 16:33:40 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:25:13.735 16:33:40 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:13.997 
************************************ 00:25:13.997 START TEST dma 00:25:13.997 ************************************ 00:25:13.997 16:33:40 nvmf_tcp.dma -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:25:13.997 * Looking for test storage... 00:25:13.997 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:13.997 16:33:40 nvmf_tcp.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:13.997 16:33:40 nvmf_tcp.dma -- nvmf/common.sh@7 -- # uname -s 00:25:13.997 16:33:40 nvmf_tcp.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:13.997 16:33:40 nvmf_tcp.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:13.997 16:33:40 nvmf_tcp.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:13.997 16:33:40 nvmf_tcp.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:13.997 16:33:40 nvmf_tcp.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:13.997 16:33:40 nvmf_tcp.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:13.997 16:33:40 nvmf_tcp.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:13.997 16:33:40 nvmf_tcp.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:13.997 16:33:40 nvmf_tcp.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:13.997 16:33:40 nvmf_tcp.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:13.997 16:33:40 nvmf_tcp.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:13.997 16:33:40 nvmf_tcp.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:13.997 16:33:40 nvmf_tcp.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:13.997 16:33:40 nvmf_tcp.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:13.997 16:33:40 nvmf_tcp.dma -- nvmf/common.sh@21 -- # 
NET_TYPE=phy 00:25:13.997 16:33:40 nvmf_tcp.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:13.997 16:33:40 nvmf_tcp.dma -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:13.997 16:33:40 nvmf_tcp.dma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:13.997 16:33:40 nvmf_tcp.dma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:13.997 16:33:40 nvmf_tcp.dma -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:13.997 16:33:40 nvmf_tcp.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:13.997 16:33:40 nvmf_tcp.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:13.997 16:33:40 nvmf_tcp.dma -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:13.997 16:33:40 nvmf_tcp.dma -- paths/export.sh@5 -- # export PATH 00:25:13.997 16:33:40 nvmf_tcp.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:13.997 16:33:40 nvmf_tcp.dma -- nvmf/common.sh@47 -- # : 0 00:25:13.997 16:33:40 nvmf_tcp.dma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:13.997 16:33:40 nvmf_tcp.dma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:13.997 16:33:40 nvmf_tcp.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:13.997 16:33:40 nvmf_tcp.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:13.997 16:33:40 nvmf_tcp.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:13.997 16:33:40 nvmf_tcp.dma -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:13.997 16:33:40 nvmf_tcp.dma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:13.997 16:33:40 nvmf_tcp.dma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:13.997 16:33:40 nvmf_tcp.dma -- 
host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:25:13.997 16:33:40 nvmf_tcp.dma -- host/dma.sh@13 -- # exit 0 00:25:13.997 00:25:13.997 real 0m0.127s 00:25:13.997 user 0m0.053s 00:25:13.997 sys 0m0.082s 00:25:13.997 16:33:40 nvmf_tcp.dma -- common/autotest_common.sh@1125 -- # xtrace_disable 00:25:13.997 16:33:40 nvmf_tcp.dma -- common/autotest_common.sh@10 -- # set +x 00:25:13.997 ************************************ 00:25:13.997 END TEST dma 00:25:13.997 ************************************ 00:25:13.997 16:33:40 nvmf_tcp -- nvmf/nvmf.sh@98 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:25:13.997 16:33:40 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:25:13.997 16:33:40 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:25:13.997 16:33:40 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:13.997 ************************************ 00:25:13.997 START TEST nvmf_identify 00:25:13.997 ************************************ 00:25:13.997 16:33:40 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:25:14.259 * Looking for test storage... 
00:25:14.259 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:14.259 16:33:40 nvmf_tcp.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:14.259 16:33:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:25:14.259 16:33:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:14.259 16:33:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:14.259 16:33:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:14.259 16:33:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:14.259 16:33:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:14.259 16:33:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:14.259 16:33:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:14.259 16:33:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:14.259 16:33:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:14.259 16:33:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:14.259 16:33:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:14.259 16:33:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:14.259 16:33:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:14.259 16:33:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:14.259 16:33:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:14.259 16:33:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:14.259 16:33:40 
nvmf_tcp.nvmf_identify -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:14.259 16:33:40 nvmf_tcp.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:14.259 16:33:40 nvmf_tcp.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:14.259 16:33:40 nvmf_tcp.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:14.259 16:33:40 nvmf_tcp.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:14.259 16:33:40 nvmf_tcp.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:14.259 16:33:40 nvmf_tcp.nvmf_identify -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:14.259 16:33:40 nvmf_tcp.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:25:14.259 16:33:40 nvmf_tcp.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:14.259 16:33:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:25:14.259 16:33:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:14.259 16:33:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:14.259 16:33:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:14.259 16:33:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:14.259 16:33:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:14.259 16:33:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:14.259 16:33:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:14.259 16:33:40 
nvmf_tcp.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:14.259 16:33:40 nvmf_tcp.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:14.259 16:33:40 nvmf_tcp.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:14.259 16:33:40 nvmf_tcp.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:25:14.259 16:33:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:14.259 16:33:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:14.259 16:33:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:14.259 16:33:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:14.259 16:33:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:14.259 16:33:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:14.259 16:33:40 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:14.259 16:33:40 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:14.259 16:33:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:14.259 16:33:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:14.259 16:33:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@285 -- # xtrace_disable 00:25:14.259 16:33:40 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:20.849 16:33:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:20.849 16:33:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # pci_devs=() 00:25:20.849 16:33:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:20.849 16:33:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:20.849 16:33:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:20.849 16:33:47 
nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:20.849 16:33:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:20.849 16:33:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # net_devs=() 00:25:20.849 16:33:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:20.849 16:33:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # e810=() 00:25:20.849 16:33:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # local -ga e810 00:25:20.849 16:33:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # x722=() 00:25:20.849 16:33:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # local -ga x722 00:25:20.849 16:33:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # mlx=() 00:25:20.849 16:33:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # local -ga mlx 00:25:20.849 16:33:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:20.849 16:33:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:20.849 16:33:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:20.849 16:33:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:20.849 16:33:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:20.849 16:33:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:20.849 16:33:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:20.849 16:33:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:20.849 16:33:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:20.849 16:33:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:20.849 
16:33:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:20.849 16:33:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:20.849 16:33:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:20.849 16:33:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:20.849 16:33:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:20.849 16:33:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:20.849 16:33:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:20.849 16:33:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:20.849 16:33:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:25:20.849 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:25:20.849 16:33:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:20.849 16:33:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:20.849 16:33:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:20.849 16:33:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:20.849 16:33:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:20.849 16:33:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:20.849 16:33:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:25:20.849 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:25:20.849 16:33:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:20.849 16:33:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:20.849 16:33:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:20.849 16:33:47 nvmf_tcp.nvmf_identify -- 
nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:20.849 16:33:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:20.849 16:33:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:20.849 16:33:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:20.849 16:33:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:20.849 16:33:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:20.849 16:33:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:20.849 16:33:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:20.849 16:33:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:20.849 16:33:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:20.849 16:33:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:20.849 16:33:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:20.849 16:33:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:25:20.849 Found net devices under 0000:4b:00.0: cvl_0_0 00:25:20.849 16:33:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:20.849 16:33:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:20.849 16:33:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:20.849 16:33:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:20.849 16:33:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:20.849 16:33:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:20.849 16:33:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 
00:25:20.849 16:33:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:20.849 16:33:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:25:20.849 Found net devices under 0000:4b:00.1: cvl_0_1 00:25:20.849 16:33:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:20.849 16:33:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:20.849 16:33:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # is_hw=yes 00:25:20.849 16:33:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:20.849 16:33:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:20.849 16:33:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:20.849 16:33:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:20.849 16:33:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:20.849 16:33:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:20.849 16:33:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:20.849 16:33:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:20.849 16:33:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:20.849 16:33:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:20.849 16:33:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:20.849 16:33:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:20.849 16:33:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:20.849 16:33:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:20.849 16:33:47 
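The "Found net devices under 0000:4b:00.x" messages above come from globbing each device's `net/` directory in sysfs and then stripping the path prefix with the `${arr[@]##*/}` expansion. A sketch under a fake sysfs tree built in a temp directory (so the pattern runs without real hardware; the `cvl_0_0` name is taken from the log):

```shell
#!/usr/bin/env bash
sysfs=$(mktemp -d)
pci="0000:4b:00.0"
# Fake the kernel's /sys/bus/pci/devices/<pci>/net/<ifname> layout.
mkdir -p "$sysfs/$pci/net/cvl_0_0"

pci_net_devs=("$sysfs/$pci/net/"*)        # full sysfs paths
pci_net_devs=("${pci_net_devs[@]##*/}")   # drop everything up to the last '/'

echo "Found net devices under $pci: ${pci_net_devs[*]}"
```

`##*/` is a greedy prefix removal, so each element is reduced to its final path component, i.e. the kernel interface name the rest of the script operates on.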
nvmf_tcp.nvmf_identify -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:20.849 16:33:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:20.849 16:33:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:20.849 16:33:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:20.849 16:33:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:20.849 16:33:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:20.849 16:33:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:20.849 16:33:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:20.849 16:33:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:20.849 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:20.849 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.663 ms 00:25:20.849 00:25:20.849 --- 10.0.0.2 ping statistics --- 00:25:20.849 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:20.849 rtt min/avg/max/mdev = 0.663/0.663/0.663/0.000 ms 00:25:20.849 16:33:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:20.849 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:20.849 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.321 ms 00:25:20.849 00:25:20.849 --- 10.0.0.1 ping statistics --- 00:25:20.849 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:20.849 rtt min/avg/max/mdev = 0.321/0.321/0.321/0.000 ms 00:25:20.849 16:33:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:20.849 16:33:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@422 -- # return 0 00:25:20.849 16:33:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:20.849 16:33:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:20.849 16:33:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:20.849 16:33:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:20.849 16:33:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:20.849 16:33:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:20.849 16:33:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:20.849 16:33:47 nvmf_tcp.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:25:20.849 16:33:47 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@723 -- # xtrace_disable 00:25:20.850 16:33:47 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:20.850 16:33:47 nvmf_tcp.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=3207459 00:25:20.850 16:33:47 nvmf_tcp.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:20.850 16:33:47 nvmf_tcp.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 3207459 00:25:20.850 16:33:47 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@830 -- # '[' -z 3207459 ']' 00:25:20.850 16:33:47 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:20.850 
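The `NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")` line at `nvmf/common.sh@270` re-wraps the target's argv: once `cvl_0_0` has been moved into a network namespace, every launch of the app must be prefixed with `ip netns exec <ns>`. Bash array concatenation keeps the prefix and the original arguments as separate, properly quoted words. A sketch (the `nvmf_tgt` path and flags below are illustrative):

```shell
#!/usr/bin/env bash
NVMF_TARGET_NAMESPACE="cvl_0_0_ns_spdk"
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
NVMF_APP=(./build/bin/nvmf_tgt -i 0)   # hypothetical original argv

# Prepend the namespace wrapper; word boundaries are preserved, so arguments
# containing spaces would survive intact.
NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")

echo "${NVMF_APP[*]}"
```

This is the same command shape visible later in the log, where `nvmf_tgt` is started via `ip netns exec cvl_0_0_ns_spdk .../nvmf_tgt -i 0 ...`.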
16:33:47 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@835 -- # local max_retries=100 00:25:20.850 16:33:47 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:20.850 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:20.850 16:33:47 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@839 -- # xtrace_disable 00:25:20.850 16:33:47 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:20.850 16:33:47 nvmf_tcp.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:20.850 [2024-06-07 16:33:47.569871] Starting SPDK v24.09-pre git sha1 5a57befde / DPDK 24.03.0 initialization... 00:25:20.850 [2024-06-07 16:33:47.569935] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:20.850 EAL: No free 2048 kB hugepages reported on node 1 00:25:20.850 [2024-06-07 16:33:47.641512] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:21.110 [2024-06-07 16:33:47.717280] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:21.110 [2024-06-07 16:33:47.717317] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:21.110 [2024-06-07 16:33:47.717325] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:21.110 [2024-06-07 16:33:47.717331] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:21.110 [2024-06-07 16:33:47.717336] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:25:21.110 [2024-06-07 16:33:47.717478] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:25:21.110 [2024-06-07 16:33:47.717707] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 2 00:25:21.110 [2024-06-07 16:33:47.717864] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 3 00:25:21.110 [2024-06-07 16:33:47.717864] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:25:21.681 16:33:48 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:25:21.682 16:33:48 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@863 -- # return 0 00:25:21.682 16:33:48 nvmf_tcp.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:21.682 16:33:48 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:21.682 16:33:48 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:21.682 [2024-06-07 16:33:48.344787] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:21.682 16:33:48 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:21.682 16:33:48 nvmf_tcp.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:25:21.682 16:33:48 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@729 -- # xtrace_disable 00:25:21.682 16:33:48 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:21.682 16:33:48 nvmf_tcp.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:25:21.682 16:33:48 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:21.682 16:33:48 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:21.682 Malloc0 00:25:21.682 16:33:48 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:21.682 16:33:48 nvmf_tcp.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:21.682 
16:33:48 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:21.682 16:33:48 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:21.682 16:33:48 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:21.682 16:33:48 nvmf_tcp.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:25:21.682 16:33:48 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:21.682 16:33:48 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:21.682 16:33:48 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:21.682 16:33:48 nvmf_tcp.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:21.682 16:33:48 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:21.682 16:33:48 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:21.682 [2024-06-07 16:33:48.444258] tcp.c: 982:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:21.682 16:33:48 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:21.682 16:33:48 nvmf_tcp.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:25:21.682 16:33:48 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:21.682 16:33:48 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:21.682 16:33:48 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:21.682 16:33:48 nvmf_tcp.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:25:21.682 16:33:48 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:21.682 16:33:48 
nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:21.682 [ 00:25:21.682 { 00:25:21.682 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:25:21.682 "subtype": "Discovery", 00:25:21.682 "listen_addresses": [ 00:25:21.682 { 00:25:21.682 "trtype": "TCP", 00:25:21.682 "adrfam": "IPv4", 00:25:21.682 "traddr": "10.0.0.2", 00:25:21.682 "trsvcid": "4420" 00:25:21.682 } 00:25:21.682 ], 00:25:21.682 "allow_any_host": true, 00:25:21.682 "hosts": [] 00:25:21.682 }, 00:25:21.682 { 00:25:21.682 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:21.682 "subtype": "NVMe", 00:25:21.682 "listen_addresses": [ 00:25:21.682 { 00:25:21.682 "trtype": "TCP", 00:25:21.682 "adrfam": "IPv4", 00:25:21.682 "traddr": "10.0.0.2", 00:25:21.682 "trsvcid": "4420" 00:25:21.682 } 00:25:21.682 ], 00:25:21.682 "allow_any_host": true, 00:25:21.682 "hosts": [], 00:25:21.682 "serial_number": "SPDK00000000000001", 00:25:21.682 "model_number": "SPDK bdev Controller", 00:25:21.682 "max_namespaces": 32, 00:25:21.682 "min_cntlid": 1, 00:25:21.682 "max_cntlid": 65519, 00:25:21.682 "namespaces": [ 00:25:21.682 { 00:25:21.682 "nsid": 1, 00:25:21.682 "bdev_name": "Malloc0", 00:25:21.682 "name": "Malloc0", 00:25:21.682 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:25:21.682 "eui64": "ABCDEF0123456789", 00:25:21.682 "uuid": "37bf36a2-4da3-496b-bf0f-7e94fe628274" 00:25:21.682 } 00:25:21.682 ] 00:25:21.682 } 00:25:21.682 ] 00:25:21.682 16:33:48 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:21.682 16:33:48 nvmf_tcp.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:25:21.682 [2024-06-07 16:33:48.504304] Starting SPDK v24.09-pre git sha1 5a57befde / DPDK 24.03.0 initialization... 
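The `nvmf_get_subsystems` RPC above returns plain JSON (interleaved here with log timestamps). Without `jq`, the subsystem NQNs can be pulled out with a small grep/sed pipeline; a sketch over an abridged copy of that output:

```shell
#!/usr/bin/env bash
# Abridged nvmf_get_subsystems output from the log above.
json='[{"nqn": "nqn.2014-08.org.nvmexpress.discovery", "subtype": "Discovery"},
{"nqn": "nqn.2016-06.io.spdk:cnode1", "subtype": "NVMe"}]'

# Match each "nqn": "..." pair, then strip the key and the surrounding quotes.
nqns=$(printf '%s\n' "$json" | grep -o '"nqn": "[^"]*"' | sed 's/.*: "//; s/"$//')

echo "$nqns"
```

For anything beyond this flat extraction (nested namespaces, listener addresses) a real JSON parser is the safer choice; the pipeline is only a quick inspection aid.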
00:25:21.682 [2024-06-07 16:33:48.504344] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3207595 ] 00:25:21.682 EAL: No free 2048 kB hugepages reported on node 1 00:25:21.947 [2024-06-07 16:33:48.536059] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:25:21.947 [2024-06-07 16:33:48.536102] nvme_tcp.c:2329:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:25:21.947 [2024-06-07 16:33:48.536107] nvme_tcp.c:2333:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:25:21.947 [2024-06-07 16:33:48.536118] nvme_tcp.c:2351:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:25:21.947 [2024-06-07 16:33:48.536126] sock.c: 336:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:25:21.947 [2024-06-07 16:33:48.539436] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:25:21.947 [2024-06-07 16:33:48.539464] nvme_tcp.c:1546:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x8cfec0 0 00:25:21.947 [2024-06-07 16:33:48.547412] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:25:21.947 [2024-06-07 16:33:48.547423] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:25:21.947 [2024-06-07 16:33:48.547427] nvme_tcp.c:1592:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:25:21.947 [2024-06-07 16:33:48.547430] nvme_tcp.c:1593:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:25:21.947 [2024-06-07 16:33:48.547466] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:21.947 [2024-06-07 16:33:48.547472] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 
00:25:21.947 [2024-06-07 16:33:48.547476] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x8cfec0) 00:25:21.947 [2024-06-07 16:33:48.547489] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:25:21.947 [2024-06-07 16:33:48.547504] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x952df0, cid 0, qid 0 00:25:21.947 [2024-06-07 16:33:48.555413] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:21.947 [2024-06-07 16:33:48.555422] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:21.947 [2024-06-07 16:33:48.555425] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:21.947 [2024-06-07 16:33:48.555430] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x952df0) on tqpair=0x8cfec0 00:25:21.947 [2024-06-07 16:33:48.555439] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:25:21.947 [2024-06-07 16:33:48.555446] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:25:21.947 [2024-06-07 16:33:48.555451] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:25:21.947 [2024-06-07 16:33:48.555463] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:21.947 [2024-06-07 16:33:48.555467] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:21.947 [2024-06-07 16:33:48.555471] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x8cfec0) 00:25:21.947 [2024-06-07 16:33:48.555478] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.947 [2024-06-07 16:33:48.555495] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 
0x952df0, cid 0, qid 0 00:25:21.947 [2024-06-07 16:33:48.555738] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:21.947 [2024-06-07 16:33:48.555744] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:21.947 [2024-06-07 16:33:48.555748] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:21.947 [2024-06-07 16:33:48.555752] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x952df0) on tqpair=0x8cfec0 00:25:21.947 [2024-06-07 16:33:48.555757] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:25:21.947 [2024-06-07 16:33:48.555764] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:25:21.947 [2024-06-07 16:33:48.555771] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:21.947 [2024-06-07 16:33:48.555774] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:21.947 [2024-06-07 16:33:48.555778] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x8cfec0) 00:25:21.947 [2024-06-07 16:33:48.555784] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.947 [2024-06-07 16:33:48.555795] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x952df0, cid 0, qid 0 00:25:21.947 [2024-06-07 16:33:48.555975] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:21.947 [2024-06-07 16:33:48.555982] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:21.947 [2024-06-07 16:33:48.555985] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:21.947 [2024-06-07 16:33:48.555989] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x952df0) on tqpair=0x8cfec0 00:25:21.947 [2024-06-07 16:33:48.555994] 
nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:25:21.948 [2024-06-07 16:33:48.556002] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:25:21.948 [2024-06-07 16:33:48.556008] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:21.948 [2024-06-07 16:33:48.556012] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:21.948 [2024-06-07 16:33:48.556015] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x8cfec0) 00:25:21.948 [2024-06-07 16:33:48.556022] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.948 [2024-06-07 16:33:48.556031] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x952df0, cid 0, qid 0 00:25:21.948 [2024-06-07 16:33:48.556212] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:21.948 [2024-06-07 16:33:48.556218] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:21.948 [2024-06-07 16:33:48.556221] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:21.948 [2024-06-07 16:33:48.556225] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x952df0) on tqpair=0x8cfec0 00:25:21.948 [2024-06-07 16:33:48.556230] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:25:21.948 [2024-06-07 16:33:48.556239] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:21.948 [2024-06-07 16:33:48.556243] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:21.948 [2024-06-07 16:33:48.556246] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x8cfec0) 00:25:21.948 
[2024-06-07 16:33:48.556253] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.948 [2024-06-07 16:33:48.556262] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x952df0, cid 0, qid 0 00:25:21.948 [2024-06-07 16:33:48.556456] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:21.948 [2024-06-07 16:33:48.556465] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:21.948 [2024-06-07 16:33:48.556468] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:21.948 [2024-06-07 16:33:48.556472] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x952df0) on tqpair=0x8cfec0 00:25:21.948 [2024-06-07 16:33:48.556476] nvme_ctrlr.c:3750:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:25:21.948 [2024-06-07 16:33:48.556481] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:25:21.948 [2024-06-07 16:33:48.556489] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:25:21.948 [2024-06-07 16:33:48.556594] nvme_ctrlr.c:3943:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:25:21.948 [2024-06-07 16:33:48.556599] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:25:21.948 [2024-06-07 16:33:48.556607] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:21.948 [2024-06-07 16:33:48.556611] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:21.948 [2024-06-07 16:33:48.556615] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
capsule_cmd cid=0 on tqpair(0x8cfec0) 00:25:21.948 [2024-06-07 16:33:48.556621] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.948 [2024-06-07 16:33:48.556631] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x952df0, cid 0, qid 0 00:25:21.948 [2024-06-07 16:33:48.556846] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:21.948 [2024-06-07 16:33:48.556853] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:21.948 [2024-06-07 16:33:48.556856] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:21.948 [2024-06-07 16:33:48.556860] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x952df0) on tqpair=0x8cfec0 00:25:21.948 [2024-06-07 16:33:48.556864] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:25:21.948 [2024-06-07 16:33:48.556873] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:21.948 [2024-06-07 16:33:48.556877] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:21.948 [2024-06-07 16:33:48.556880] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x8cfec0) 00:25:21.948 [2024-06-07 16:33:48.556887] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.948 [2024-06-07 16:33:48.556896] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x952df0, cid 0, qid 0 00:25:21.948 [2024-06-07 16:33:48.557079] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:21.948 [2024-06-07 16:33:48.557086] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:21.948 [2024-06-07 16:33:48.557089] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:21.948 [2024-06-07 
16:33:48.557093] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x952df0) on tqpair=0x8cfec0 00:25:21.948 [2024-06-07 16:33:48.557097] nvme_ctrlr.c:3785:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:25:21.948 [2024-06-07 16:33:48.557102] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:25:21.948 [2024-06-07 16:33:48.557109] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:25:21.948 [2024-06-07 16:33:48.557117] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:25:21.948 [2024-06-07 16:33:48.557128] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:21.948 [2024-06-07 16:33:48.557132] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x8cfec0) 00:25:21.948 [2024-06-07 16:33:48.557138] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.948 [2024-06-07 16:33:48.557148] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x952df0, cid 0, qid 0 00:25:21.948 [2024-06-07 16:33:48.557354] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:21.948 [2024-06-07 16:33:48.557362] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:21.948 [2024-06-07 16:33:48.557365] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:21.948 [2024-06-07 16:33:48.557369] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x8cfec0): datao=0, datal=4096, cccid=0 00:25:21.948 [2024-06-07 16:33:48.557374] 
nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x952df0) on tqpair(0x8cfec0): expected_datao=0, payload_size=4096 00:25:21.948 [2024-06-07 16:33:48.557378] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:21.948 [2024-06-07 16:33:48.557385] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:21.948 [2024-06-07 16:33:48.557389] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:21.948 [2024-06-07 16:33:48.557521] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:21.948 [2024-06-07 16:33:48.557528] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:21.948 [2024-06-07 16:33:48.557531] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:21.948 [2024-06-07 16:33:48.557535] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x952df0) on tqpair=0x8cfec0 00:25:21.948 [2024-06-07 16:33:48.557542] nvme_ctrlr.c:1985:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:25:21.948 [2024-06-07 16:33:48.557547] nvme_ctrlr.c:1989:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:25:21.948 [2024-06-07 16:33:48.557551] nvme_ctrlr.c:1992:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:25:21.948 [2024-06-07 16:33:48.557558] nvme_ctrlr.c:2016:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:25:21.948 [2024-06-07 16:33:48.557563] nvme_ctrlr.c:2031:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:25:21.948 [2024-06-07 16:33:48.557568] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:25:21.948 [2024-06-07 16:33:48.557576] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:25:21.948 [2024-06-07 16:33:48.557582] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:21.948 [2024-06-07 16:33:48.557586] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:21.948 [2024-06-07 16:33:48.557590] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x8cfec0) 00:25:21.948 [2024-06-07 16:33:48.557597] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:25:21.948 [2024-06-07 16:33:48.557607] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x952df0, cid 0, qid 0 00:25:21.948 [2024-06-07 16:33:48.557807] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:21.948 [2024-06-07 16:33:48.557814] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:21.948 [2024-06-07 16:33:48.557817] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:21.948 [2024-06-07 16:33:48.557821] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x952df0) on tqpair=0x8cfec0 00:25:21.948 [2024-06-07 16:33:48.557828] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:21.948 [2024-06-07 16:33:48.557831] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:21.948 [2024-06-07 16:33:48.557837] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x8cfec0) 00:25:21.948 [2024-06-07 16:33:48.557843] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:21.948 [2024-06-07 16:33:48.557849] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:21.948 [2024-06-07 16:33:48.557853] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:21.948 [2024-06-07 
16:33:48.557856] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x8cfec0) 00:25:21.948 [2024-06-07 16:33:48.557862] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:21.948 [2024-06-07 16:33:48.557867] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:21.948 [2024-06-07 16:33:48.557871] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:21.948 [2024-06-07 16:33:48.557874] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x8cfec0) 00:25:21.948 [2024-06-07 16:33:48.557880] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:21.948 [2024-06-07 16:33:48.557886] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:21.948 [2024-06-07 16:33:48.557889] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:21.949 [2024-06-07 16:33:48.557893] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8cfec0) 00:25:21.949 [2024-06-07 16:33:48.557898] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:21.949 [2024-06-07 16:33:48.557903] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:25:21.949 [2024-06-07 16:33:48.557913] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:25:21.949 [2024-06-07 16:33:48.557920] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:21.949 [2024-06-07 16:33:48.557924] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x8cfec0) 00:25:21.949 [2024-06-07 
16:33:48.557930] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.949 [2024-06-07 16:33:48.557942] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x952df0, cid 0, qid 0 00:25:21.949 [2024-06-07 16:33:48.557947] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x952f50, cid 1, qid 0 00:25:21.949 [2024-06-07 16:33:48.557952] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9530b0, cid 2, qid 0 00:25:21.949 [2024-06-07 16:33:48.557956] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x953210, cid 3, qid 0 00:25:21.949 [2024-06-07 16:33:48.557961] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x953370, cid 4, qid 0 00:25:21.949 [2024-06-07 16:33:48.558224] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:21.949 [2024-06-07 16:33:48.558230] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:21.949 [2024-06-07 16:33:48.558234] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:21.949 [2024-06-07 16:33:48.558237] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x953370) on tqpair=0x8cfec0 00:25:21.949 [2024-06-07 16:33:48.558242] nvme_ctrlr.c:2903:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:25:21.949 [2024-06-07 16:33:48.558247] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:25:21.949 [2024-06-07 16:33:48.558257] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:21.949 [2024-06-07 16:33:48.558261] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x8cfec0) 00:25:21.949 [2024-06-07 16:33:48.558267] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY 
(06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.949 [2024-06-07 16:33:48.558279] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x953370, cid 4, qid 0 00:25:21.949 [2024-06-07 16:33:48.558512] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:21.949 [2024-06-07 16:33:48.558519] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:21.949 [2024-06-07 16:33:48.558522] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:21.949 [2024-06-07 16:33:48.558526] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x8cfec0): datao=0, datal=4096, cccid=4 00:25:21.949 [2024-06-07 16:33:48.558530] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x953370) on tqpair(0x8cfec0): expected_datao=0, payload_size=4096 00:25:21.949 [2024-06-07 16:33:48.558534] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:21.949 [2024-06-07 16:33:48.558567] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:21.949 [2024-06-07 16:33:48.558571] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:21.949 [2024-06-07 16:33:48.603409] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:21.949 [2024-06-07 16:33:48.603420] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:21.949 [2024-06-07 16:33:48.603423] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:21.949 [2024-06-07 16:33:48.603427] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x953370) on tqpair=0x8cfec0 00:25:21.949 [2024-06-07 16:33:48.603440] nvme_ctrlr.c:4037:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:25:21.949 [2024-06-07 16:33:48.603463] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:21.949 [2024-06-07 16:33:48.603467] nvme_tcp.c: 
959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x8cfec0) 00:25:21.949 [2024-06-07 16:33:48.603474] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.949 [2024-06-07 16:33:48.603481] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:21.949 [2024-06-07 16:33:48.603485] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:21.949 [2024-06-07 16:33:48.603488] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x8cfec0) 00:25:21.949 [2024-06-07 16:33:48.603494] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:25:21.949 [2024-06-07 16:33:48.603512] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x953370, cid 4, qid 0 00:25:21.949 [2024-06-07 16:33:48.603517] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9534d0, cid 5, qid 0 00:25:21.949 [2024-06-07 16:33:48.603779] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:21.949 [2024-06-07 16:33:48.603786] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:21.949 [2024-06-07 16:33:48.603789] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:21.949 [2024-06-07 16:33:48.603793] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x8cfec0): datao=0, datal=1024, cccid=4 00:25:21.949 [2024-06-07 16:33:48.603797] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x953370) on tqpair(0x8cfec0): expected_datao=0, payload_size=1024 00:25:21.949 [2024-06-07 16:33:48.603801] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:21.949 [2024-06-07 16:33:48.603808] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:21.949 [2024-06-07 16:33:48.603811] 
nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:21.949 [2024-06-07 16:33:48.603817] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:21.949 [2024-06-07 16:33:48.603823] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:21.949 [2024-06-07 16:33:48.603826] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:21.949 [2024-06-07 16:33:48.603830] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x9534d0) on tqpair=0x8cfec0 00:25:21.949 [2024-06-07 16:33:48.645607] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:21.949 [2024-06-07 16:33:48.645622] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:21.949 [2024-06-07 16:33:48.645626] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:21.949 [2024-06-07 16:33:48.645629] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x953370) on tqpair=0x8cfec0 00:25:21.949 [2024-06-07 16:33:48.645644] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:21.949 [2024-06-07 16:33:48.645648] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x8cfec0) 00:25:21.949 [2024-06-07 16:33:48.645655] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.949 [2024-06-07 16:33:48.645670] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x953370, cid 4, qid 0 00:25:21.949 [2024-06-07 16:33:48.645936] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:21.949 [2024-06-07 16:33:48.645943] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:21.949 [2024-06-07 16:33:48.645946] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:21.949 [2024-06-07 16:33:48.645950] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data 
info on tqpair(0x8cfec0): datao=0, datal=3072, cccid=4 00:25:21.949 [2024-06-07 16:33:48.645954] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x953370) on tqpair(0x8cfec0): expected_datao=0, payload_size=3072 00:25:21.949 [2024-06-07 16:33:48.645958] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:21.949 [2024-06-07 16:33:48.645965] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:21.949 [2024-06-07 16:33:48.645968] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:21.949 [2024-06-07 16:33:48.646102] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:21.949 [2024-06-07 16:33:48.646108] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:21.949 [2024-06-07 16:33:48.646111] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:21.949 [2024-06-07 16:33:48.646115] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x953370) on tqpair=0x8cfec0 00:25:21.949 [2024-06-07 16:33:48.646123] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:21.949 [2024-06-07 16:33:48.646127] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x8cfec0) 00:25:21.949 [2024-06-07 16:33:48.646133] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.949 [2024-06-07 16:33:48.646146] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x953370, cid 4, qid 0 00:25:21.949 [2024-06-07 16:33:48.646356] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:21.949 [2024-06-07 16:33:48.646362] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:21.949 [2024-06-07 16:33:48.646365] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:21.949 [2024-06-07 16:33:48.646369] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: 
*DEBUG*: c2h_data info on tqpair(0x8cfec0): datao=0, datal=8, cccid=4 00:25:21.949 [2024-06-07 16:33:48.646373] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x953370) on tqpair(0x8cfec0): expected_datao=0, payload_size=8 00:25:21.949 [2024-06-07 16:33:48.646377] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:21.949 [2024-06-07 16:33:48.646384] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:21.949 [2024-06-07 16:33:48.646387] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:21.949 [2024-06-07 16:33:48.691414] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:21.949 [2024-06-07 16:33:48.691424] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:21.949 [2024-06-07 16:33:48.691427] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:21.949 [2024-06-07 16:33:48.691431] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x953370) on tqpair=0x8cfec0 00:25:21.949 ===================================================== 00:25:21.949 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:25:21.949 ===================================================== 00:25:21.949 Controller Capabilities/Features 00:25:21.949 ================================ 00:25:21.949 Vendor ID: 0000 00:25:21.949 Subsystem Vendor ID: 0000 00:25:21.949 Serial Number: .................... 00:25:21.949 Model Number: ........................................ 
00:25:21.949 Firmware Version: 24.09 00:25:21.949 Recommended Arb Burst: 0 00:25:21.949 IEEE OUI Identifier: 00 00 00 00:25:21.949 Multi-path I/O 00:25:21.949 May have multiple subsystem ports: No 00:25:21.949 May have multiple controllers: No 00:25:21.949 Associated with SR-IOV VF: No 00:25:21.949 Max Data Transfer Size: 131072 00:25:21.949 Max Number of Namespaces: 0 00:25:21.949 Max Number of I/O Queues: 1024 00:25:21.950 NVMe Specification Version (VS): 1.3 00:25:21.950 NVMe Specification Version (Identify): 1.3 00:25:21.950 Maximum Queue Entries: 128 00:25:21.950 Contiguous Queues Required: Yes 00:25:21.950 Arbitration Mechanisms Supported 00:25:21.950 Weighted Round Robin: Not Supported 00:25:21.950 Vendor Specific: Not Supported 00:25:21.950 Reset Timeout: 15000 ms 00:25:21.950 Doorbell Stride: 4 bytes 00:25:21.950 NVM Subsystem Reset: Not Supported 00:25:21.950 Command Sets Supported 00:25:21.950 NVM Command Set: Supported 00:25:21.950 Boot Partition: Not Supported 00:25:21.950 Memory Page Size Minimum: 4096 bytes 00:25:21.950 Memory Page Size Maximum: 4096 bytes 00:25:21.950 Persistent Memory Region: Not Supported 00:25:21.950 Optional Asynchronous Events Supported 00:25:21.950 Namespace Attribute Notices: Not Supported 00:25:21.950 Firmware Activation Notices: Not Supported 00:25:21.950 ANA Change Notices: Not Supported 00:25:21.950 PLE Aggregate Log Change Notices: Not Supported 00:25:21.950 LBA Status Info Alert Notices: Not Supported 00:25:21.950 EGE Aggregate Log Change Notices: Not Supported 00:25:21.950 Normal NVM Subsystem Shutdown event: Not Supported 00:25:21.950 Zone Descriptor Change Notices: Not Supported 00:25:21.950 Discovery Log Change Notices: Supported 00:25:21.950 Controller Attributes 00:25:21.950 128-bit Host Identifier: Not Supported 00:25:21.950 Non-Operational Permissive Mode: Not Supported 00:25:21.950 NVM Sets: Not Supported 00:25:21.950 Read Recovery Levels: Not Supported 00:25:21.950 Endurance Groups: Not Supported 00:25:21.950 
Predictable Latency Mode: Not Supported 00:25:21.950 Traffic Based Keep ALive: Not Supported 00:25:21.950 Namespace Granularity: Not Supported 00:25:21.950 SQ Associations: Not Supported 00:25:21.950 UUID List: Not Supported 00:25:21.950 Multi-Domain Subsystem: Not Supported 00:25:21.950 Fixed Capacity Management: Not Supported 00:25:21.950 Variable Capacity Management: Not Supported 00:25:21.950 Delete Endurance Group: Not Supported 00:25:21.950 Delete NVM Set: Not Supported 00:25:21.950 Extended LBA Formats Supported: Not Supported 00:25:21.950 Flexible Data Placement Supported: Not Supported 00:25:21.950 00:25:21.950 Controller Memory Buffer Support 00:25:21.950 ================================ 00:25:21.950 Supported: No 00:25:21.950 00:25:21.950 Persistent Memory Region Support 00:25:21.950 ================================ 00:25:21.950 Supported: No 00:25:21.950 00:25:21.950 Admin Command Set Attributes 00:25:21.950 ============================ 00:25:21.950 Security Send/Receive: Not Supported 00:25:21.950 Format NVM: Not Supported 00:25:21.950 Firmware Activate/Download: Not Supported 00:25:21.950 Namespace Management: Not Supported 00:25:21.950 Device Self-Test: Not Supported 00:25:21.950 Directives: Not Supported 00:25:21.950 NVMe-MI: Not Supported 00:25:21.950 Virtualization Management: Not Supported 00:25:21.950 Doorbell Buffer Config: Not Supported 00:25:21.950 Get LBA Status Capability: Not Supported 00:25:21.950 Command & Feature Lockdown Capability: Not Supported 00:25:21.950 Abort Command Limit: 1 00:25:21.950 Async Event Request Limit: 4 00:25:21.950 Number of Firmware Slots: N/A 00:25:21.950 Firmware Slot 1 Read-Only: N/A 00:25:21.950 Firmware Activation Without Reset: N/A 00:25:21.950 Multiple Update Detection Support: N/A 00:25:21.950 Firmware Update Granularity: No Information Provided 00:25:21.950 Per-Namespace SMART Log: No 00:25:21.950 Asymmetric Namespace Access Log Page: Not Supported 00:25:21.950 Subsystem NQN: 
nqn.2014-08.org.nvmexpress.discovery 00:25:21.950 Command Effects Log Page: Not Supported 00:25:21.950 Get Log Page Extended Data: Supported 00:25:21.950 Telemetry Log Pages: Not Supported 00:25:21.950 Persistent Event Log Pages: Not Supported 00:25:21.950 Supported Log Pages Log Page: May Support 00:25:21.950 Commands Supported & Effects Log Page: Not Supported 00:25:21.950 Feature Identifiers & Effects Log Page:May Support 00:25:21.950 NVMe-MI Commands & Effects Log Page: May Support 00:25:21.950 Data Area 4 for Telemetry Log: Not Supported 00:25:21.950 Error Log Page Entries Supported: 128 00:25:21.950 Keep Alive: Not Supported 00:25:21.950 00:25:21.950 NVM Command Set Attributes 00:25:21.950 ========================== 00:25:21.950 Submission Queue Entry Size 00:25:21.950 Max: 1 00:25:21.950 Min: 1 00:25:21.950 Completion Queue Entry Size 00:25:21.950 Max: 1 00:25:21.950 Min: 1 00:25:21.950 Number of Namespaces: 0 00:25:21.950 Compare Command: Not Supported 00:25:21.950 Write Uncorrectable Command: Not Supported 00:25:21.950 Dataset Management Command: Not Supported 00:25:21.950 Write Zeroes Command: Not Supported 00:25:21.950 Set Features Save Field: Not Supported 00:25:21.950 Reservations: Not Supported 00:25:21.950 Timestamp: Not Supported 00:25:21.950 Copy: Not Supported 00:25:21.950 Volatile Write Cache: Not Present 00:25:21.950 Atomic Write Unit (Normal): 1 00:25:21.950 Atomic Write Unit (PFail): 1 00:25:21.950 Atomic Compare & Write Unit: 1 00:25:21.950 Fused Compare & Write: Supported 00:25:21.950 Scatter-Gather List 00:25:21.950 SGL Command Set: Supported 00:25:21.950 SGL Keyed: Supported 00:25:21.950 SGL Bit Bucket Descriptor: Not Supported 00:25:21.950 SGL Metadata Pointer: Not Supported 00:25:21.950 Oversized SGL: Not Supported 00:25:21.950 SGL Metadata Address: Not Supported 00:25:21.950 SGL Offset: Supported 00:25:21.950 Transport SGL Data Block: Not Supported 00:25:21.950 Replay Protected Memory Block: Not Supported 00:25:21.950 00:25:21.950 
Firmware Slot Information 00:25:21.950 ========================= 00:25:21.950 Active slot: 0 00:25:21.950 00:25:21.950 00:25:21.950 Error Log 00:25:21.950 ========= 00:25:21.950 00:25:21.950 Active Namespaces 00:25:21.950 ================= 00:25:21.950 Discovery Log Page 00:25:21.950 ================== 00:25:21.950 Generation Counter: 2 00:25:21.950 Number of Records: 2 00:25:21.950 Record Format: 0 00:25:21.950 00:25:21.950 Discovery Log Entry 0 00:25:21.950 ---------------------- 00:25:21.950 Transport Type: 3 (TCP) 00:25:21.950 Address Family: 1 (IPv4) 00:25:21.950 Subsystem Type: 3 (Current Discovery Subsystem) 00:25:21.950 Entry Flags: 00:25:21.950 Duplicate Returned Information: 1 00:25:21.950 Explicit Persistent Connection Support for Discovery: 1 00:25:21.950 Transport Requirements: 00:25:21.950 Secure Channel: Not Required 00:25:21.950 Port ID: 0 (0x0000) 00:25:21.950 Controller ID: 65535 (0xffff) 00:25:21.950 Admin Max SQ Size: 128 00:25:21.950 Transport Service Identifier: 4420 00:25:21.950 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:25:21.950 Transport Address: 10.0.0.2 00:25:21.950 Discovery Log Entry 1 00:25:21.950 ---------------------- 00:25:21.950 Transport Type: 3 (TCP) 00:25:21.950 Address Family: 1 (IPv4) 00:25:21.950 Subsystem Type: 2 (NVM Subsystem) 00:25:21.950 Entry Flags: 00:25:21.950 Duplicate Returned Information: 0 00:25:21.950 Explicit Persistent Connection Support for Discovery: 0 00:25:21.950 Transport Requirements: 00:25:21.950 Secure Channel: Not Required 00:25:21.950 Port ID: 0 (0x0000) 00:25:21.950 Controller ID: 65535 (0xffff) 00:25:21.950 Admin Max SQ Size: 128 00:25:21.950 Transport Service Identifier: 4420 00:25:21.950 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:25:21.950 Transport Address: 10.0.0.2 [2024-06-07 16:33:48.691610] nvme_ctrlr.c:4222:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:25:21.950 [2024-06-07 16:33:48.691625] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.950 [2024-06-07 16:33:48.691632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.950 [2024-06-07 16:33:48.691638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.950 [2024-06-07 16:33:48.691644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.950 [2024-06-07 16:33:48.691652] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:21.950 [2024-06-07 16:33:48.691656] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:21.950 [2024-06-07 16:33:48.691659] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8cfec0) 00:25:21.950 [2024-06-07 16:33:48.691667] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.950 [2024-06-07 16:33:48.691680] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x953210, cid 3, qid 0 00:25:21.950 [2024-06-07 16:33:48.691880] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:21.950 [2024-06-07 16:33:48.691887] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:21.951 [2024-06-07 16:33:48.691890] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:21.951 [2024-06-07 16:33:48.691894] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x953210) on tqpair=0x8cfec0 00:25:21.951 [2024-06-07 16:33:48.691903] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:21.951 [2024-06-07 16:33:48.691907] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:21.951 [2024-06-07 
16:33:48.691910] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8cfec0) 00:25:21.951 [2024-06-07 16:33:48.691917] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.951 [2024-06-07 16:33:48.691929] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x953210, cid 3, qid 0 00:25:21.951 [2024-06-07 16:33:48.692114] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:21.951 [2024-06-07 16:33:48.692120] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:21.951 [2024-06-07 16:33:48.692124] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:21.951 [2024-06-07 16:33:48.692127] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x953210) on tqpair=0x8cfec0 00:25:21.951 [2024-06-07 16:33:48.692132] nvme_ctrlr.c:1083:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:25:21.951 [2024-06-07 16:33:48.692137] nvme_ctrlr.c:1086:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:25:21.951 [2024-06-07 16:33:48.692146] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:21.951 [2024-06-07 16:33:48.692150] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:21.951 [2024-06-07 16:33:48.692153] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8cfec0) 00:25:21.951 [2024-06-07 16:33:48.692160] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.951 [2024-06-07 16:33:48.692169] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x953210, cid 3, qid 0 00:25:21.951 [2024-06-07 16:33:48.692342] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:21.951 [2024-06-07 
16:33:48.692348] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:21.951 [2024-06-07 16:33:48.692352] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:21.951 [2024-06-07 16:33:48.692356] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x953210) on tqpair=0x8cfec0 00:25:21.951 [2024-06-07 16:33:48.692365] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:21.951 [2024-06-07 16:33:48.692369] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:21.951 [2024-06-07 16:33:48.692375] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8cfec0) 00:25:21.951 [2024-06-07 16:33:48.692382] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.951 [2024-06-07 16:33:48.692391] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x953210, cid 3, qid 0 00:25:21.951 [2024-06-07 16:33:48.692566] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:21.951 [2024-06-07 16:33:48.692573] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:21.951 [2024-06-07 16:33:48.692577] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:21.951 [2024-06-07 16:33:48.692580] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x953210) on tqpair=0x8cfec0 00:25:21.951 [2024-06-07 16:33:48.692590] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:21.951 [2024-06-07 16:33:48.692594] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:21.951 [2024-06-07 16:33:48.692597] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8cfec0) 00:25:21.951 [2024-06-07 16:33:48.692604] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.951 [2024-06-07 
16:33:48.692613] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x953210, cid 3, qid 0
00:25:21.951 [2024-06-07 16:33:48.692796] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:25:21.951 [2024-06-07 16:33:48.692802] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:25:21.951 [2024-06-07 16:33:48.692806] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:25:21.951 [2024-06-07 16:33:48.692809] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x953210) on tqpair=0x8cfec0
00:25:21.951 [2024-06-07 16:33:48.692819] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter
00:25:21.951 [2024-06-07 16:33:48.692823] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:25:21.951 [2024-06-07 16:33:48.692826] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8cfec0)
00:25:21.951 [2024-06-07 16:33:48.692833] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:21.951 [2024-06-07 16:33:48.692842] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x953210, cid 3, qid 0
00:25:22.218 [2024-06-07 16:33:48.699409] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:25:21.952 [2024-06-07 16:33:48.699418] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:25:21.952 [2024-06-07 16:33:48.699422] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:25:21.952 [2024-06-07 16:33:48.699425] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x953210) on tqpair=0x8cfec0
00:25:21.952 [2024-06-07 16:33:48.699436] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter
00:25:21.952 [2024-06-07 16:33:48.699440] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:25:21.952 [2024-06-07 16:33:48.699444] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8cfec0)
00:25:21.952 [2024-06-07 16:33:48.699450] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:21.952 [2024-06-07 16:33:48.699461] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x953210, cid 3, qid 0
00:25:21.952 [2024-06-07 16:33:48.699714] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:25:21.952 [2024-06-07 16:33:48.699720] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:25:21.952 [2024-06-07 16:33:48.699723] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:25:21.952 [2024-06-07 16:33:48.699727] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete
tcp_req(0x953210) on tqpair=0x8cfec0
00:25:21.952 [2024-06-07 16:33:48.699734] nvme_ctrlr.c:1205:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 7 milliseconds
00:25:21.952
00:25:21.952 16:33:48 nvmf_tcp.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all
00:25:21.952 [2024-06-07 16:33:48.741126] Starting SPDK v24.09-pre git sha1 5a57befde / DPDK 24.03.0 initialization...
00:25:21.952 [2024-06-07 16:33:48.741167] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3207699 ]
00:25:21.952 EAL: No free 2048 kB hugepages reported on node 1
00:25:21.952 [2024-06-07 16:33:48.773930] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout)
00:25:21.952 [2024-06-07 16:33:48.773974] nvme_tcp.c:2329:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2
00:25:21.952 [2024-06-07 16:33:48.773979] nvme_tcp.c:2333:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420
00:25:21.952 [2024-06-07 16:33:48.773990] nvme_tcp.c:2351:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null)
00:25:21.952 [2024-06-07 16:33:48.773998] sock.c: 336:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix
00:25:21.952 [2024-06-07 16:33:48.777436] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout)
00:25:21.952 [2024-06-07 16:33:48.777464] nvme_tcp.c:1546:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x158bec0 0
00:25:21.952 [2024-06-07 16:33:48.785411] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle:
*DEBUG*: pdu type = 1
00:25:21.952 [2024-06-07 16:33:48.785421] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1
00:25:21.952 [2024-06-07 16:33:48.785425] nvme_tcp.c:1592:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0
00:25:21.952 [2024-06-07 16:33:48.785429] nvme_tcp.c:1593:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0
00:25:21.952 [2024-06-07 16:33:48.785458] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter
00:25:21.952 [2024-06-07 16:33:48.785464] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:25:21.952 [2024-06-07 16:33:48.785468] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x158bec0)
00:25:21.952 [2024-06-07 16:33:48.785482] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400
00:25:21.952 [2024-06-07 16:33:48.785497] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x160edf0, cid 0, qid 0
00:25:21.952 [2024-06-07 16:33:48.792411] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:25:21.952 [2024-06-07 16:33:48.792421] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:25:21.952 [2024-06-07 16:33:48.792425] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:25:21.952 [2024-06-07 16:33:48.792429] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x160edf0) on tqpair=0x158bec0
00:25:21.952 [2024-06-07 16:33:48.792439] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001
00:25:21.952 [2024-06-07 16:33:48.792445] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout)
00:25:21.952 [2024-06-07 16:33:48.792450] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout)
00:25:21.952 [2024-06-07 16:33:48.792462] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter
00:25:21.952 [2024-06-07 16:33:48.792466] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:25:21.952 [2024-06-07 16:33:48.792469] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x158bec0)
00:25:21.952 [2024-06-07 16:33:48.792477] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:21.952 [2024-06-07 16:33:48.792490] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x160edf0, cid 0, qid 0
00:25:21.952 [2024-06-07 16:33:48.792690] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:25:21.952 [2024-06-07 16:33:48.792697] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:25:21.952 [2024-06-07 16:33:48.792700] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:25:21.952 [2024-06-07 16:33:48.792704] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x160edf0) on tqpair=0x158bec0
00:25:21.953 [2024-06-07 16:33:48.792710] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout)
00:25:21.953 [2024-06-07 16:33:48.792717] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout)
00:25:21.953 [2024-06-07 16:33:48.792723] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter
00:25:21.953 [2024-06-07 16:33:48.792727] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:25:21.953 [2024-06-07 16:33:48.792731] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x158bec0)
00:25:21.953 [2024-06-07 16:33:48.792737] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:21.953 [2024-06-07 16:33:48.792748] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x160edf0, cid 0, qid 0
00:25:21.953 [2024-06-07 16:33:48.792956] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:25:21.953 [2024-06-07 16:33:48.792962] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:25:21.953 [2024-06-07 16:33:48.792966] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:25:21.953 [2024-06-07 16:33:48.792969] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x160edf0) on tqpair=0x158bec0
00:25:21.953 [2024-06-07 16:33:48.792975] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout)
00:25:21.953 [2024-06-07 16:33:48.792983] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms)
00:25:21.953 [2024-06-07 16:33:48.792989] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter
00:25:21.953 [2024-06-07 16:33:48.792993] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:25:21.953 [2024-06-07 16:33:48.792996] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x158bec0)
00:25:21.953 [2024-06-07 16:33:48.793005] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:21.953 [2024-06-07 16:33:48.793016] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x160edf0, cid 0, qid 0
00:25:21.953 [2024-06-07 16:33:48.793230] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:25:21.953 [2024-06-07 16:33:48.793237] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:25:21.953 [2024-06-07 16:33:48.793240] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:25:21.953 [2024-06-07 16:33:48.793244] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x160edf0) on tqpair=0x158bec0
00:25:21.953 [2024-06-07 16:33:48.793249] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms)
00:25:21.953 [2024-06-07 16:33:48.793259] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter
00:25:21.953 [2024-06-07 16:33:48.793263] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:25:21.953 [2024-06-07 16:33:48.793266] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x158bec0)
00:25:21.953 [2024-06-07 16:33:48.793273] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:21.953 [2024-06-07 16:33:48.793282] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x160edf0, cid 0, qid 0
00:25:21.953 [2024-06-07 16:33:48.793467] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:25:21.953 [2024-06-07 16:33:48.793473] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:25:21.953 [2024-06-07 16:33:48.793477] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:25:21.953 [2024-06-07 16:33:48.793480] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x160edf0) on tqpair=0x158bec0
00:25:21.953 [2024-06-07 16:33:48.793485] nvme_ctrlr.c:3750:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0
00:25:21.953 [2024-06-07 16:33:48.793491] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms)
00:25:21.953 [2024-06-07 16:33:48.793498] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms)
00:25:21.953 [2024-06-07 16:33:48.793603] nvme_ctrlr.c:3943:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1
00:25:21.953 [2024-06-07 16:33:48.793607] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms)
00:25:21.953 [2024-06-07 16:33:48.793614] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter
00:25:21.953 [2024-06-07 16:33:48.793618] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:25:21.953 [2024-06-07 16:33:48.793622] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x158bec0)
00:25:21.953 [2024-06-07 16:33:48.793628] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:21.953 [2024-06-07 16:33:48.793638] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x160edf0, cid 0, qid 0
00:25:21.953 [2024-06-07 16:33:48.793821] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:25:21.953 [2024-06-07 16:33:48.793828] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:25:21.953 [2024-06-07 16:33:48.793831] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:25:21.953 [2024-06-07 16:33:48.793835] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x160edf0) on tqpair=0x158bec0
00:25:21.953 [2024-06-07 16:33:48.793840] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms)
00:25:21.953 [2024-06-07 16:33:48.793849] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter
00:25:21.953 [2024-06-07 16:33:48.793853] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:25:21.953 [2024-06-07 16:33:48.793856] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x158bec0)
00:25:21.953 [2024-06-07 16:33:48.793865] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:21.953 [2024-06-07 16:33:48.793875] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x160edf0, cid 0, qid 0
00:25:21.953 [2024-06-07 16:33:48.794072] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:25:21.953 [2024-06-07 16:33:48.794079] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:25:21.953 [2024-06-07 16:33:48.794082] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:25:21.953 [2024-06-07 16:33:48.794086] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x160edf0) on tqpair=0x158bec0
00:25:21.953 [2024-06-07 16:33:48.794091] nvme_ctrlr.c:3785:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready
00:25:21.953 [2024-06-07 16:33:48.794095] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms)
00:25:21.953 [2024-06-07 16:33:48.794102] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout)
00:25:21.953 [2024-06-07 16:33:48.794110] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms)
00:25:21.953 [2024-06-07 16:33:48.794119] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:25:21.953 [2024-06-07 16:33:48.794122] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x158bec0)
00:25:21.953 [2024-06-07 16:33:48.794129] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:21.953 [2024-06-07 16:33:48.794139] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x160edf0, cid 0, qid 0
00:25:21.953 [2024-06-07 16:33:48.794329]
nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:25:21.953 [2024-06-07 16:33:48.794336] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:25:21.953 [2024-06-07 16:33:48.794339] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:25:21.953 [2024-06-07 16:33:48.794343] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x158bec0): datao=0, datal=4096, cccid=0
00:25:21.953 [2024-06-07 16:33:48.794348] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x160edf0) on tqpair(0x158bec0): expected_datao=0, payload_size=4096
00:25:21.953 [2024-06-07 16:33:48.794352] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter
00:25:21.953 [2024-06-07 16:33:48.794386] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:25:21.953 [2024-06-07 16:33:48.794391] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:25:22.217 [2024-06-07 16:33:48.835604] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:25:22.217 [2024-06-07 16:33:48.835616] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:25:22.217 [2024-06-07 16:33:48.835620] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:25:22.217 [2024-06-07 16:33:48.835624] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x160edf0) on tqpair=0x158bec0
00:25:22.217 [2024-06-07 16:33:48.835633] nvme_ctrlr.c:1985:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295
00:25:22.217 [2024-06-07 16:33:48.835638] nvme_ctrlr.c:1989:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072
00:25:22.217 [2024-06-07 16:33:48.835642] nvme_ctrlr.c:1992:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001
00:25:22.217 [2024-06-07 16:33:48.835650] nvme_ctrlr.c:2016:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16
00:25:22.217 [2024-06-07 16:33:48.835655] nvme_ctrlr.c:2031:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1
00:25:22.217 [2024-06-07 16:33:48.835659] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms)
00:25:22.217 [2024-06-07 16:33:48.835670] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms)
00:25:22.217 [2024-06-07 16:33:48.835677] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter
00:25:22.217 [2024-06-07 16:33:48.835681] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:25:22.217 [2024-06-07 16:33:48.835685] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x158bec0)
00:25:22.217 [2024-06-07 16:33:48.835693] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0
00:25:22.217 [2024-06-07 16:33:48.835704] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x160edf0, cid 0, qid 0
00:25:22.217 [2024-06-07 16:33:48.835864] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:25:22.217 [2024-06-07 16:33:48.835871] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:25:22.217 [2024-06-07 16:33:48.835874] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:25:22.217 [2024-06-07 16:33:48.835878] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x160edf0) on tqpair=0x158bec0
00:25:22.217 [2024-06-07 16:33:48.835885] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter
00:25:22.217 [2024-06-07 16:33:48.835889] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:25:22.217 [2024-06-07 16:33:48.835892] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x158bec0)
00:25:22.218 [2024-06-07 16:33:48.835899] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:25:22.218 [2024-06-07 16:33:48.835905] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter
00:25:22.218 [2024-06-07 16:33:48.835909] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:25:22.218 [2024-06-07 16:33:48.835912] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x158bec0)
00:25:22.218 [2024-06-07 16:33:48.835918] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:25:22.218 [2024-06-07 16:33:48.835924] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter
00:25:22.218 [2024-06-07 16:33:48.835928] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:25:22.218 [2024-06-07 16:33:48.835931] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x158bec0)
00:25:22.218 [2024-06-07 16:33:48.835937] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:25:22.218 [2024-06-07 16:33:48.835943] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter
00:25:22.218 [2024-06-07 16:33:48.835947] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:25:22.218 [2024-06-07 16:33:48.835950] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x158bec0)
00:25:22.218 [2024-06-07 16:33:48.835956] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:25:22.218 [2024-06-07 16:33:48.835961] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms)
00:25:22.218 [2024-06-07 16:33:48.835971] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms)
00:25:22.218 [2024-06-07 16:33:48.835978] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:25:22.218 [2024-06-07 16:33:48.835981] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x158bec0)
00:25:22.218 [2024-06-07 16:33:48.835988] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:22.218 [2024-06-07 16:33:48.836000] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x160edf0, cid 0, qid 0
00:25:22.218 [2024-06-07 16:33:48.836005] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x160ef50, cid 1, qid 0
00:25:22.218 [2024-06-07 16:33:48.836010] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x160f0b0, cid 2, qid 0
00:25:22.218 [2024-06-07 16:33:48.836016] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x160f210, cid 3, qid 0
00:25:22.218 [2024-06-07 16:33:48.836021] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x160f370, cid 4, qid 0
00:25:22.218 [2024-06-07 16:33:48.836252] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:25:22.218 [2024-06-07 16:33:48.836259] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:25:22.218 [2024-06-07 16:33:48.836262] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:25:22.218 [2024-06-07 16:33:48.836266] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x160f370) on tqpair=0x158bec0
00:25:22.218 [2024-06-07 16:33:48.836271] nvme_ctrlr.c:2903:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us
00:25:22.218 [2024-06-07 16:33:48.836276] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms)
00:25:22.218 [2024-06-07 16:33:48.836284] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms)
00:25:22.218 [2024-06-07 16:33:48.836290] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms)
00:25:22.218 [2024-06-07 16:33:48.836296] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter
00:25:22.218 [2024-06-07 16:33:48.836300] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:25:22.218 [2024-06-07 16:33:48.836304] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x158bec0)
00:25:22.218 [2024-06-07 16:33:48.836310] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0
00:25:22.218 [2024-06-07 16:33:48.836320] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x160f370, cid 4, qid 0
00:25:22.218 [2024-06-07 16:33:48.840410] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:25:22.218 [2024-06-07 16:33:48.840418] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:25:22.218 [2024-06-07 16:33:48.840422] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:25:22.218 [2024-06-07 16:33:48.840426] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x160f370) on tqpair=0x158bec0
00:25:22.218 [2024-06-07 16:33:48.840479] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms)
00:25:22.218 [2024-06-07 16:33:48.840489] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms)
00:25:22.218 [2024-06-07 16:33:48.840496] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:25:22.218 [2024-06-07 16:33:48.840500] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x158bec0)
00:25:22.218 [2024-06-07 16:33:48.840507] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:22.218 [2024-06-07 16:33:48.840518] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x160f370, cid 4, qid 0
00:25:22.218 [2024-06-07 16:33:48.840704] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:25:22.218 [2024-06-07 16:33:48.840711] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:25:22.218 [2024-06-07 16:33:48.840715] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:25:22.218 [2024-06-07 16:33:48.840718] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x158bec0): datao=0, datal=4096, cccid=4
00:25:22.218 [2024-06-07 16:33:48.840723] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x160f370) on tqpair(0x158bec0): expected_datao=0, payload_size=4096
00:25:22.218 [2024-06-07 16:33:48.840727] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter
00:25:22.218 [2024-06-07 16:33:48.840734] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:25:22.218 [2024-06-07 16:33:48.840742] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:25:22.218 [2024-06-07 16:33:48.840891] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:25:22.218 [2024-06-07 16:33:48.840898] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:25:22.218 [2024-06-07 16:33:48.840902] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:25:22.218 [2024-06-07 16:33:48.840905] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x160f370) on tqpair=0x158bec0
00:25:22.218 [2024-06-07 16:33:48.840915] nvme_ctrlr.c:4558:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added
00:25:22.218 [2024-06-07 16:33:48.840928] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms)
00:25:22.218 [2024-06-07 16:33:48.840937] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms)
00:25:22.218 [2024-06-07 16:33:48.840944] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:25:22.218 [2024-06-07 16:33:48.840948] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x158bec0)
00:25:22.218 [2024-06-07 16:33:48.840954] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:22.218 [2024-06-07 16:33:48.840965] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x160f370, cid 4, qid 0
00:25:22.218 [2024-06-07 16:33:48.841139] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:25:22.218 [2024-06-07 16:33:48.841146] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:25:22.218 [2024-06-07 16:33:48.841149] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:25:22.218 [2024-06-07 16:33:48.841153] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x158bec0): datao=0, datal=4096, cccid=4
00:25:22.218 [2024-06-07 16:33:48.841157] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x160f370) on tqpair(0x158bec0): expected_datao=0, payload_size=4096
00:25:22.218 [2024-06-07 16:33:48.841161] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter
00:25:22.218 [2024-06-07 16:33:48.841196] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:25:22.218 [2024-06-07 16:33:48.841200]
nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:22.218 [2024-06-07 16:33:48.841373] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:22.218 [2024-06-07 16:33:48.841380] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:22.218 [2024-06-07 16:33:48.841383] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:22.218 [2024-06-07 16:33:48.841387] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x160f370) on tqpair=0x158bec0 00:25:22.218 [2024-06-07 16:33:48.841400] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:25:22.218 [2024-06-07 16:33:48.841414] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:25:22.218 [2024-06-07 16:33:48.841421] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:22.218 [2024-06-07 16:33:48.841425] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x158bec0) 00:25:22.218 [2024-06-07 16:33:48.841431] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.218 [2024-06-07 16:33:48.841442] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x160f370, cid 4, qid 0 00:25:22.218 [2024-06-07 16:33:48.841626] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:22.218 [2024-06-07 16:33:48.841633] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:22.218 [2024-06-07 16:33:48.841636] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:22.218 [2024-06-07 16:33:48.841639] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x158bec0): datao=0, datal=4096, cccid=4 00:25:22.218 
[2024-06-07 16:33:48.841644] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x160f370) on tqpair(0x158bec0): expected_datao=0, payload_size=4096 00:25:22.218 [2024-06-07 16:33:48.841650] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:22.218 [2024-06-07 16:33:48.841657] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:22.218 [2024-06-07 16:33:48.841661] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:22.218 [2024-06-07 16:33:48.841824] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:22.218 [2024-06-07 16:33:48.841831] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:22.218 [2024-06-07 16:33:48.841834] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:22.218 [2024-06-07 16:33:48.841838] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x160f370) on tqpair=0x158bec0 00:25:22.218 [2024-06-07 16:33:48.841845] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:25:22.219 [2024-06-07 16:33:48.841853] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:25:22.219 [2024-06-07 16:33:48.841861] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:25:22.219 [2024-06-07 16:33:48.841867] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:25:22.219 [2024-06-07 16:33:48.841872] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:25:22.219 [2024-06-07 16:33:48.841877] nvme_ctrlr.c:2991:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set 
Features - Host ID 00:25:22.219 [2024-06-07 16:33:48.841882] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:25:22.219 [2024-06-07 16:33:48.841887] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:25:22.219 [2024-06-07 16:33:48.841902] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:22.219 [2024-06-07 16:33:48.841907] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x158bec0) 00:25:22.219 [2024-06-07 16:33:48.841914] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.219 [2024-06-07 16:33:48.841920] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:22.219 [2024-06-07 16:33:48.841924] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:22.219 [2024-06-07 16:33:48.841928] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x158bec0) 00:25:22.219 [2024-06-07 16:33:48.841934] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:25:22.219 [2024-06-07 16:33:48.841946] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x160f370, cid 4, qid 0 00:25:22.219 [2024-06-07 16:33:48.841951] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x160f4d0, cid 5, qid 0 00:25:22.219 [2024-06-07 16:33:48.842134] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:22.219 [2024-06-07 16:33:48.842140] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:22.219 [2024-06-07 16:33:48.842144] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:22.219 [2024-06-07 16:33:48.842148] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete 
tcp_req(0x160f370) on tqpair=0x158bec0 00:25:22.219 [2024-06-07 16:33:48.842155] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:22.219 [2024-06-07 16:33:48.842161] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:22.219 [2024-06-07 16:33:48.842165] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:22.219 [2024-06-07 16:33:48.842168] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x160f4d0) on tqpair=0x158bec0 00:25:22.219 [2024-06-07 16:33:48.842178] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:22.219 [2024-06-07 16:33:48.842184] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x158bec0) 00:25:22.219 [2024-06-07 16:33:48.842190] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.219 [2024-06-07 16:33:48.842201] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x160f4d0, cid 5, qid 0 00:25:22.219 [2024-06-07 16:33:48.842384] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:22.219 [2024-06-07 16:33:48.842390] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:22.219 [2024-06-07 16:33:48.842394] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:22.219 [2024-06-07 16:33:48.842398] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x160f4d0) on tqpair=0x158bec0 00:25:22.219 [2024-06-07 16:33:48.842412] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:22.219 [2024-06-07 16:33:48.842416] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x158bec0) 00:25:22.219 [2024-06-07 16:33:48.842422] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.219 
[2024-06-07 16:33:48.842432] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x160f4d0, cid 5, qid 0 00:25:22.219 [2024-06-07 16:33:48.842603] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:22.219 [2024-06-07 16:33:48.842609] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:22.219 [2024-06-07 16:33:48.842612] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:22.219 [2024-06-07 16:33:48.842616] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x160f4d0) on tqpair=0x158bec0 00:25:22.219 [2024-06-07 16:33:48.842626] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:22.219 [2024-06-07 16:33:48.842629] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x158bec0) 00:25:22.219 [2024-06-07 16:33:48.842636] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.219 [2024-06-07 16:33:48.842646] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x160f4d0, cid 5, qid 0 00:25:22.219 [2024-06-07 16:33:48.842870] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:22.219 [2024-06-07 16:33:48.842877] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:22.219 [2024-06-07 16:33:48.842880] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:22.219 [2024-06-07 16:33:48.842884] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x160f4d0) on tqpair=0x158bec0 00:25:22.219 [2024-06-07 16:33:48.842896] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:22.219 [2024-06-07 16:33:48.842900] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x158bec0) 00:25:22.219 [2024-06-07 16:33:48.842906] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 
nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.219 [2024-06-07 16:33:48.842914] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:22.219 [2024-06-07 16:33:48.842917] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x158bec0) 00:25:22.219 [2024-06-07 16:33:48.842924] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.219 [2024-06-07 16:33:48.842931] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:22.219 [2024-06-07 16:33:48.842934] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x158bec0) 00:25:22.219 [2024-06-07 16:33:48.842941] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.219 [2024-06-07 16:33:48.842948] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:22.219 [2024-06-07 16:33:48.842954] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x158bec0) 00:25:22.219 [2024-06-07 16:33:48.842960] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.219 [2024-06-07 16:33:48.842972] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x160f4d0, cid 5, qid 0 00:25:22.219 [2024-06-07 16:33:48.842977] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x160f370, cid 4, qid 0 00:25:22.219 [2024-06-07 16:33:48.842982] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x160f630, cid 6, qid 0 00:25:22.219 [2024-06-07 16:33:48.842987] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x160f790, cid 7, qid 0 
00:25:22.219 [2024-06-07 16:33:48.843219] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:22.219 [2024-06-07 16:33:48.843226] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:22.219 [2024-06-07 16:33:48.843229] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:22.219 [2024-06-07 16:33:48.843233] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x158bec0): datao=0, datal=8192, cccid=5 00:25:22.219 [2024-06-07 16:33:48.843237] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x160f4d0) on tqpair(0x158bec0): expected_datao=0, payload_size=8192 00:25:22.219 [2024-06-07 16:33:48.843242] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:22.219 [2024-06-07 16:33:48.843333] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:22.219 [2024-06-07 16:33:48.843338] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:22.219 [2024-06-07 16:33:48.843343] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:22.219 [2024-06-07 16:33:48.843349] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:22.219 [2024-06-07 16:33:48.843352] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:22.219 [2024-06-07 16:33:48.843356] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x158bec0): datao=0, datal=512, cccid=4 00:25:22.219 [2024-06-07 16:33:48.843360] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x160f370) on tqpair(0x158bec0): expected_datao=0, payload_size=512 00:25:22.219 [2024-06-07 16:33:48.843365] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:22.219 [2024-06-07 16:33:48.843371] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:22.219 [2024-06-07 16:33:48.843375] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:22.219 [2024-06-07 16:33:48.843381] 
nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:22.219 [2024-06-07 16:33:48.843386] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:22.219 [2024-06-07 16:33:48.843390] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:22.219 [2024-06-07 16:33:48.843393] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x158bec0): datao=0, datal=512, cccid=6 00:25:22.219 [2024-06-07 16:33:48.843397] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x160f630) on tqpair(0x158bec0): expected_datao=0, payload_size=512 00:25:22.219 [2024-06-07 16:33:48.843406] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:22.219 [2024-06-07 16:33:48.843413] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:22.219 [2024-06-07 16:33:48.843416] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:22.219 [2024-06-07 16:33:48.843422] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:22.219 [2024-06-07 16:33:48.843428] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:22.219 [2024-06-07 16:33:48.843431] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:22.219 [2024-06-07 16:33:48.843435] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x158bec0): datao=0, datal=4096, cccid=7 00:25:22.219 [2024-06-07 16:33:48.843439] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x160f790) on tqpair(0x158bec0): expected_datao=0, payload_size=4096 00:25:22.219 [2024-06-07 16:33:48.843443] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:22.219 [2024-06-07 16:33:48.843456] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:22.219 [2024-06-07 16:33:48.843460] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:22.219 [2024-06-07 16:33:48.843708] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 
00:25:22.219 [2024-06-07 16:33:48.843714] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:22.219 [2024-06-07 16:33:48.843717] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:22.219 [2024-06-07 16:33:48.843721] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x160f4d0) on tqpair=0x158bec0 00:25:22.219 [2024-06-07 16:33:48.843734] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:22.220 [2024-06-07 16:33:48.843740] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:22.220 [2024-06-07 16:33:48.843744] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:22.220 [2024-06-07 16:33:48.843748] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x160f370) on tqpair=0x158bec0 00:25:22.220 [2024-06-07 16:33:48.843757] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:22.220 [2024-06-07 16:33:48.843763] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:22.220 [2024-06-07 16:33:48.843767] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:22.220 [2024-06-07 16:33:48.843771] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x160f630) on tqpair=0x158bec0 00:25:22.220 [2024-06-07 16:33:48.843780] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:22.220 [2024-06-07 16:33:48.843786] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:22.220 [2024-06-07 16:33:48.843790] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:22.220 [2024-06-07 16:33:48.843794] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x160f790) on tqpair=0x158bec0 00:25:22.220 ===================================================== 00:25:22.220 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:22.220 ===================================================== 00:25:22.220 
Controller Capabilities/Features 00:25:22.220 ================================ 00:25:22.220 Vendor ID: 8086 00:25:22.220 Subsystem Vendor ID: 8086 00:25:22.220 Serial Number: SPDK00000000000001 00:25:22.220 Model Number: SPDK bdev Controller 00:25:22.220 Firmware Version: 24.09 00:25:22.220 Recommended Arb Burst: 6 00:25:22.220 IEEE OUI Identifier: e4 d2 5c 00:25:22.220 Multi-path I/O 00:25:22.220 May have multiple subsystem ports: Yes 00:25:22.220 May have multiple controllers: Yes 00:25:22.220 Associated with SR-IOV VF: No 00:25:22.220 Max Data Transfer Size: 131072 00:25:22.220 Max Number of Namespaces: 32 00:25:22.220 Max Number of I/O Queues: 127 00:25:22.220 NVMe Specification Version (VS): 1.3 00:25:22.220 NVMe Specification Version (Identify): 1.3 00:25:22.220 Maximum Queue Entries: 128 00:25:22.220 Contiguous Queues Required: Yes 00:25:22.220 Arbitration Mechanisms Supported 00:25:22.220 Weighted Round Robin: Not Supported 00:25:22.220 Vendor Specific: Not Supported 00:25:22.220 Reset Timeout: 15000 ms 00:25:22.220 Doorbell Stride: 4 bytes 00:25:22.220 NVM Subsystem Reset: Not Supported 00:25:22.220 Command Sets Supported 00:25:22.220 NVM Command Set: Supported 00:25:22.220 Boot Partition: Not Supported 00:25:22.220 Memory Page Size Minimum: 4096 bytes 00:25:22.220 Memory Page Size Maximum: 4096 bytes 00:25:22.220 Persistent Memory Region: Not Supported 00:25:22.220 Optional Asynchronous Events Supported 00:25:22.220 Namespace Attribute Notices: Supported 00:25:22.220 Firmware Activation Notices: Not Supported 00:25:22.220 ANA Change Notices: Not Supported 00:25:22.220 PLE Aggregate Log Change Notices: Not Supported 00:25:22.220 LBA Status Info Alert Notices: Not Supported 00:25:22.220 EGE Aggregate Log Change Notices: Not Supported 00:25:22.220 Normal NVM Subsystem Shutdown event: Not Supported 00:25:22.220 Zone Descriptor Change Notices: Not Supported 00:25:22.220 Discovery Log Change Notices: Not Supported 00:25:22.220 Controller Attributes 00:25:22.220 
128-bit Host Identifier: Supported 00:25:22.220 Non-Operational Permissive Mode: Not Supported 00:25:22.220 NVM Sets: Not Supported 00:25:22.220 Read Recovery Levels: Not Supported 00:25:22.220 Endurance Groups: Not Supported 00:25:22.220 Predictable Latency Mode: Not Supported 00:25:22.220 Traffic Based Keep Alive: Not Supported 00:25:22.220 Namespace Granularity: Not Supported 00:25:22.220 SQ Associations: Not Supported 00:25:22.220 UUID List: Not Supported 00:25:22.220 Multi-Domain Subsystem: Not Supported 00:25:22.220 Fixed Capacity Management: Not Supported 00:25:22.220 Variable Capacity Management: Not Supported 00:25:22.220 Delete Endurance Group: Not Supported 00:25:22.220 Delete NVM Set: Not Supported 00:25:22.220 Extended LBA Formats Supported: Not Supported 00:25:22.220 Flexible Data Placement Supported: Not Supported 00:25:22.220 00:25:22.220 Controller Memory Buffer Support 00:25:22.220 ================================ 00:25:22.220 Supported: No 00:25:22.220 00:25:22.220 Persistent Memory Region Support 00:25:22.220 ================================ 00:25:22.220 Supported: No 00:25:22.220 00:25:22.220 Admin Command Set Attributes 00:25:22.220 ============================ 00:25:22.220 Security Send/Receive: Not Supported 00:25:22.220 Format NVM: Not Supported 00:25:22.220 Firmware Activate/Download: Not Supported 00:25:22.220 Namespace Management: Not Supported 00:25:22.220 Device Self-Test: Not Supported 00:25:22.220 Directives: Not Supported 00:25:22.220 NVMe-MI: Not Supported 00:25:22.220 Virtualization Management: Not Supported 00:25:22.220 Doorbell Buffer Config: Not Supported 00:25:22.220 Get LBA Status Capability: Not Supported 00:25:22.220 Command & Feature Lockdown Capability: Not Supported 00:25:22.220 Abort Command Limit: 4 00:25:22.220 Async Event Request Limit: 4 00:25:22.220 Number of Firmware Slots: N/A 00:25:22.220 Firmware Slot 1 Read-Only: N/A 00:25:22.220 Firmware Activation Without Reset: N/A 00:25:22.220 Multiple Update Detection 
Support: N/A 00:25:22.220 Firmware Update Granularity: No Information Provided 00:25:22.220 Per-Namespace SMART Log: No 00:25:22.220 Asymmetric Namespace Access Log Page: Not Supported 00:25:22.220 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:25:22.220 Command Effects Log Page: Supported 00:25:22.220 Get Log Page Extended Data: Supported 00:25:22.220 Telemetry Log Pages: Not Supported 00:25:22.220 Persistent Event Log Pages: Not Supported 00:25:22.220 Supported Log Pages Log Page: May Support 00:25:22.220 Commands Supported & Effects Log Page: Not Supported 00:25:22.220 Feature Identifiers & Effects Log Page: May Support 00:25:22.220 NVMe-MI Commands & Effects Log Page: May Support 00:25:22.220 Data Area 4 for Telemetry Log: Not Supported 00:25:22.220 Error Log Page Entries Supported: 128 00:25:22.220 Keep Alive: Supported 00:25:22.220 Keep Alive Granularity: 10000 ms 00:25:22.220 00:25:22.220 NVM Command Set Attributes 00:25:22.220 ========================== 00:25:22.220 Submission Queue Entry Size 00:25:22.220 Max: 64 00:25:22.220 Min: 64 00:25:22.220 Completion Queue Entry Size 00:25:22.220 Max: 16 00:25:22.220 Min: 16 00:25:22.220 Number of Namespaces: 32 00:25:22.220 Compare Command: Supported 00:25:22.220 Write Uncorrectable Command: Not Supported 00:25:22.220 Dataset Management Command: Supported 00:25:22.220 Write Zeroes Command: Supported 00:25:22.220 Set Features Save Field: Not Supported 00:25:22.220 Reservations: Supported 00:25:22.220 Timestamp: Not Supported 00:25:22.220 Copy: Supported 00:25:22.220 Volatile Write Cache: Present 00:25:22.220 Atomic Write Unit (Normal): 1 00:25:22.220 Atomic Write Unit (PFail): 1 00:25:22.220 Atomic Compare & Write Unit: 1 00:25:22.220 Fused Compare & Write: Supported 00:25:22.220 Scatter-Gather List 00:25:22.220 SGL Command Set: Supported 00:25:22.220 SGL Keyed: Supported 00:25:22.220 SGL Bit Bucket Descriptor: Not Supported 00:25:22.220 SGL Metadata Pointer: Not Supported 00:25:22.220 Oversized SGL: Not Supported 
00:25:22.220 SGL Metadata Address: Not Supported 00:25:22.220 SGL Offset: Supported 00:25:22.220 Transport SGL Data Block: Not Supported 00:25:22.220 Replay Protected Memory Block: Not Supported 00:25:22.220 00:25:22.220 Firmware Slot Information 00:25:22.220 ========================= 00:25:22.220 Active slot: 1 00:25:22.220 Slot 1 Firmware Revision: 24.09 00:25:22.220 00:25:22.220 00:25:22.220 Commands Supported and Effects 00:25:22.220 ============================== 00:25:22.220 Admin Commands 00:25:22.220 -------------- 00:25:22.220 Get Log Page (02h): Supported 00:25:22.220 Identify (06h): Supported 00:25:22.220 Abort (08h): Supported 00:25:22.220 Set Features (09h): Supported 00:25:22.220 Get Features (0Ah): Supported 00:25:22.220 Asynchronous Event Request (0Ch): Supported 00:25:22.220 Keep Alive (18h): Supported 00:25:22.220 I/O Commands 00:25:22.220 ------------ 00:25:22.220 Flush (00h): Supported LBA-Change 00:25:22.220 Write (01h): Supported LBA-Change 00:25:22.220 Read (02h): Supported 00:25:22.220 Compare (05h): Supported 00:25:22.220 Write Zeroes (08h): Supported LBA-Change 00:25:22.220 Dataset Management (09h): Supported LBA-Change 00:25:22.220 Copy (19h): Supported LBA-Change 00:25:22.220 Unknown (79h): Supported LBA-Change 00:25:22.220 Unknown (7Ah): Supported 00:25:22.220 00:25:22.220 Error Log 00:25:22.220 ========= 00:25:22.220 00:25:22.220 Arbitration 00:25:22.220 =========== 00:25:22.220 Arbitration Burst: 1 00:25:22.220 00:25:22.220 Power Management 00:25:22.220 ================ 00:25:22.220 Number of Power States: 1 00:25:22.220 Current Power State: Power State #0 00:25:22.220 Power State #0: 00:25:22.220 Max Power: 0.00 W 00:25:22.221 Non-Operational State: Operational 00:25:22.221 Entry Latency: Not Reported 00:25:22.221 Exit Latency: Not Reported 00:25:22.221 Relative Read Throughput: 0 00:25:22.221 Relative Read Latency: 0 00:25:22.221 Relative Write Throughput: 0 00:25:22.221 Relative Write Latency: 0 00:25:22.221 Idle Power: Not 
Reported 00:25:22.221 Active Power: Not Reported 00:25:22.221 Non-Operational Permissive Mode: Not Supported 00:25:22.221 00:25:22.221 Health Information 00:25:22.221 ================== 00:25:22.221 Critical Warnings: 00:25:22.221 Available Spare Space: OK 00:25:22.221 Temperature: OK 00:25:22.221 Device Reliability: OK 00:25:22.221 Read Only: No 00:25:22.221 Volatile Memory Backup: OK 00:25:22.221 Current Temperature: 0 Kelvin (-273 Celsius) 00:25:22.221 Temperature Threshold: [2024-06-07 16:33:48.843894] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:22.221 [2024-06-07 16:33:48.843899] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x158bec0) 00:25:22.221 [2024-06-07 16:33:48.843906] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.221 [2024-06-07 16:33:48.843917] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x160f790, cid 7, qid 0 00:25:22.221 [2024-06-07 16:33:48.844134] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:22.221 [2024-06-07 16:33:48.844141] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:22.221 [2024-06-07 16:33:48.844144] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:22.221 [2024-06-07 16:33:48.844148] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x160f790) on tqpair=0x158bec0 00:25:22.221 [2024-06-07 16:33:48.844175] nvme_ctrlr.c:4222:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:25:22.221 [2024-06-07 16:33:48.844186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:22.221 [2024-06-07 16:33:48.844193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:25:22.221 [2024-06-07 16:33:48.844199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:22.221 [2024-06-07 16:33:48.844205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:22.221 [2024-06-07 16:33:48.844213] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:22.221 [2024-06-07 16:33:48.844218] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:22.221 [2024-06-07 16:33:48.844221] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x158bec0) 00:25:22.221 [2024-06-07 16:33:48.844229] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.221 [2024-06-07 16:33:48.844240] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x160f210, cid 3, qid 0 00:25:22.221 [2024-06-07 16:33:48.848449] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:22.221 [2024-06-07 16:33:48.848459] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:22.221 [2024-06-07 16:33:48.848463] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:22.221 [2024-06-07 16:33:48.848467] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x160f210) on tqpair=0x158bec0 00:25:22.221 [2024-06-07 16:33:48.848475] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:22.221 [2024-06-07 16:33:48.848479] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:22.221 [2024-06-07 16:33:48.848482] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x158bec0) 00:25:22.221 [2024-06-07 16:33:48.848489] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.221 
[2024-06-07 16:33:48.848505] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x160f210, cid 3, qid 0 00:25:22.221 [2024-06-07 16:33:48.848709] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:22.221 [2024-06-07 16:33:48.848715] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:22.221 [2024-06-07 16:33:48.848719] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:22.221 [2024-06-07 16:33:48.848723] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x160f210) on tqpair=0x158bec0 00:25:22.221 [2024-06-07 16:33:48.848728] nvme_ctrlr.c:1083:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:25:22.221 [2024-06-07 16:33:48.848732] nvme_ctrlr.c:1086:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:25:22.221 [2024-06-07 16:33:48.848742] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:22.221 [2024-06-07 16:33:48.848746] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:22.221 [2024-06-07 16:33:48.848749] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x158bec0) 00:25:22.221 [2024-06-07 16:33:48.848756] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.221 [2024-06-07 16:33:48.848766] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x160f210, cid 3, qid 0 00:25:22.221 [2024-06-07 16:33:48.848936] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:22.221 [2024-06-07 16:33:48.848943] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:22.221 [2024-06-07 16:33:48.848946] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:22.221 [2024-06-07 16:33:48.848950] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x160f210) on 
tqpair=0x158bec0 00:25:22.221 [2024-06-07 16:33:48.848960] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:22.221 [2024-06-07 16:33:48.848964] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:22.221 [2024-06-07 16:33:48.848968] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x158bec0) 00:25:22.221 [2024-06-07 16:33:48.848974] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.221 [2024-06-07 16:33:48.848984] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x160f210, cid 3, qid 0 00:25:22.221 [2024-06-07 16:33:48.849210] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:22.221 [2024-06-07 16:33:48.849217] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:22.221 [2024-06-07 16:33:48.849220] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:22.221 [2024-06-07 16:33:48.849224] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x160f210) on tqpair=0x158bec0 00:25:22.221 [2024-06-07 16:33:48.849234] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:22.221 [2024-06-07 16:33:48.849238] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:22.221 [2024-06-07 16:33:48.849242] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x158bec0) 00:25:22.221 [2024-06-07 16:33:48.849248] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.221 [2024-06-07 16:33:48.849261] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x160f210, cid 3, qid 0 00:25:22.221 [2024-06-07 16:33:48.849480] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:22.221 [2024-06-07 16:33:48.849487] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu 
type =5 00:25:22.221 [2024-06-07 16:33:48.849491] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:22.221 [2024-06-07 16:33:48.849495] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x160f210) on tqpair=0x158bec0 00:25:22.221 [2024-06-07 16:33:48.849505] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:22.221 [2024-06-07 16:33:48.849509] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:22.221 [2024-06-07 16:33:48.849513] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x158bec0) 00:25:22.221 [2024-06-07 16:33:48.849520] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.221 [2024-06-07 16:33:48.849530] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x160f210, cid 3, qid 0 00:25:22.221 [2024-06-07 16:33:48.849730] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:22.221 [2024-06-07 16:33:48.849736] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:22.221 [2024-06-07 16:33:48.849739] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:22.221 [2024-06-07 16:33:48.849743] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x160f210) on tqpair=0x158bec0 00:25:22.221 [2024-06-07 16:33:48.849753] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:22.221 [2024-06-07 16:33:48.849758] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:22.221 [2024-06-07 16:33:48.849761] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x158bec0) 00:25:22.221 [2024-06-07 16:33:48.849768] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.221 [2024-06-07 16:33:48.849777] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: 
*DEBUG*: tcp req 0x160f210, cid 3, qid 0 00:25:22.221 [2024-06-07 16:33:48.850037] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:22.221 [2024-06-07 16:33:48.850043] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:22.221 [2024-06-07 16:33:48.850047] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:22.221 [2024-06-07 16:33:48.850050] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x160f210) on tqpair=0x158bec0 00:25:22.221 [2024-06-07 16:33:48.850061] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:22.221 [2024-06-07 16:33:48.850065] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:22.221 [2024-06-07 16:33:48.850069] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x158bec0) 00:25:22.221 [2024-06-07 16:33:48.850076] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.221 [2024-06-07 16:33:48.850086] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x160f210, cid 3, qid 0 00:25:22.221 [2024-06-07 16:33:48.850285] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:22.221 [2024-06-07 16:33:48.850291] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:22.221 [2024-06-07 16:33:48.850295] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:22.221 [2024-06-07 16:33:48.850299] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x160f210) on tqpair=0x158bec0 00:25:22.221 [2024-06-07 16:33:48.850309] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:22.221 [2024-06-07 16:33:48.850313] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:22.221 [2024-06-07 16:33:48.850316] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x158bec0) 00:25:22.221 [2024-06-07 
16:33:48.850323] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.221 [2024-06-07 16:33:48.850335] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x160f210, cid 3, qid 0 00:25:22.222 [2024-06-07 16:33:48.850513] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:22.222 [2024-06-07 16:33:48.850520] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:22.222 [2024-06-07 16:33:48.850523] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:22.222 [2024-06-07 16:33:48.850527] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x160f210) on tqpair=0x158bec0 00:25:22.222 [2024-06-07 16:33:48.850537] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:22.222 [2024-06-07 16:33:48.850542] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:22.222 [2024-06-07 16:33:48.850545] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x158bec0) 00:25:22.222 [2024-06-07 16:33:48.850552] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.222 [2024-06-07 16:33:48.850562] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x160f210, cid 3, qid 0 00:25:22.222 [2024-06-07 16:33:48.850811] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:22.222 [2024-06-07 16:33:48.850817] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:22.222 [2024-06-07 16:33:48.850820] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:22.222 [2024-06-07 16:33:48.850824] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x160f210) on tqpair=0x158bec0 00:25:22.222 [2024-06-07 16:33:48.850834] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:22.222 [2024-06-07 
16:33:48.850838] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:22.222 [2024-06-07 16:33:48.850842] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x158bec0) 00:25:22.222 [2024-06-07 16:33:48.850848] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.222 [2024-06-07 16:33:48.850858] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x160f210, cid 3, qid 0 00:25:22.222 [2024-06-07 16:33:48.851048] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:22.222 [2024-06-07 16:33:48.851054] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:22.222 [2024-06-07 16:33:48.851058] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:22.222 [2024-06-07 16:33:48.851062] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x160f210) on tqpair=0x158bec0 00:25:22.222 [2024-06-07 16:33:48.851072] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:22.222 [2024-06-07 16:33:48.851076] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:22.222 [2024-06-07 16:33:48.851080] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x158bec0) 00:25:22.222 [2024-06-07 16:33:48.851086] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.222 [2024-06-07 16:33:48.851096] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x160f210, cid 3, qid 0 00:25:22.222 [2024-06-07 16:33:48.851353] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:22.222 [2024-06-07 16:33:48.851359] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:22.222 [2024-06-07 16:33:48.851362] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:22.222 [2024-06-07 
16:33:48.851366] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x160f210) on tqpair=0x158bec0 00:25:22.222 [2024-06-07 16:33:48.851376] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:22.222 [2024-06-07 16:33:48.851380] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:22.222 [2024-06-07 16:33:48.851384] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x158bec0) 00:25:22.222 [2024-06-07 16:33:48.851390] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.222 [2024-06-07 16:33:48.851400] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x160f210, cid 3, qid 0 00:25:22.222 [2024-06-07 16:33:48.851598] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:22.222 [2024-06-07 16:33:48.851604] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:22.222 [2024-06-07 16:33:48.851608] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:22.222 [2024-06-07 16:33:48.851612] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x160f210) on tqpair=0x158bec0 00:25:22.222 [2024-06-07 16:33:48.851622] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:22.222 [2024-06-07 16:33:48.851626] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:22.222 [2024-06-07 16:33:48.851629] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x158bec0) 00:25:22.222 [2024-06-07 16:33:48.851636] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.222 [2024-06-07 16:33:48.851646] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x160f210, cid 3, qid 0 00:25:22.222 [2024-06-07 16:33:48.851822] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 
00:25:22.222 [2024-06-07 16:33:48.851828] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:22.222 [2024-06-07 16:33:48.851832] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:22.222 [2024-06-07 16:33:48.851835] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x160f210) on tqpair=0x158bec0 00:25:22.222 [2024-06-07 16:33:48.851846] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:22.222 [2024-06-07 16:33:48.851850] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:22.222 [2024-06-07 16:33:48.851853] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x158bec0) 00:25:22.222 [2024-06-07 16:33:48.851860] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.222 [2024-06-07 16:33:48.851870] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x160f210, cid 3, qid 0 00:25:22.222 [2024-06-07 16:33:48.855410] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:22.222 [2024-06-07 16:33:48.855419] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:22.222 [2024-06-07 16:33:48.855422] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:22.222 [2024-06-07 16:33:48.855426] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x160f210) on tqpair=0x158bec0 00:25:22.222 [2024-06-07 16:33:48.855436] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:22.222 [2024-06-07 16:33:48.855440] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:22.222 [2024-06-07 16:33:48.855444] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x158bec0) 00:25:22.222 [2024-06-07 16:33:48.855450] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:25:22.222 [2024-06-07 16:33:48.855462] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x160f210, cid 3, qid 0 00:25:22.222 [2024-06-07 16:33:48.855664] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:22.222 [2024-06-07 16:33:48.855670] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:22.222 [2024-06-07 16:33:48.855673] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:22.222 [2024-06-07 16:33:48.855677] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x160f210) on tqpair=0x158bec0 00:25:22.222 [2024-06-07 16:33:48.855685] nvme_ctrlr.c:1205:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 6 milliseconds 00:25:22.222 0 Kelvin (-273 Celsius) 00:25:22.222 Available Spare: 0% 00:25:22.222 Available Spare Threshold: 0% 00:25:22.222 Life Percentage Used: 0% 00:25:22.222 Data Units Read: 0 00:25:22.222 Data Units Written: 0 00:25:22.222 Host Read Commands: 0 00:25:22.222 Host Write Commands: 0 00:25:22.222 Controller Busy Time: 0 minutes 00:25:22.222 Power Cycles: 0 00:25:22.222 Power On Hours: 0 hours 00:25:22.222 Unsafe Shutdowns: 0 00:25:22.222 Unrecoverable Media Errors: 0 00:25:22.222 Lifetime Error Log Entries: 0 00:25:22.222 Warning Temperature Time: 0 minutes 00:25:22.222 Critical Temperature Time: 0 minutes 00:25:22.222 00:25:22.222 Number of Queues 00:25:22.222 ================ 00:25:22.222 Number of I/O Submission Queues: 127 00:25:22.222 Number of I/O Completion Queues: 127 00:25:22.222 00:25:22.222 Active Namespaces 00:25:22.222 ================= 00:25:22.222 Namespace ID:1 00:25:22.222 Error Recovery Timeout: Unlimited 00:25:22.222 Command Set Identifier: NVM (00h) 00:25:22.222 Deallocate: Supported 00:25:22.222 Deallocated/Unwritten Error: Not Supported 00:25:22.222 Deallocated Read Value: Unknown 00:25:22.222 Deallocate in Write Zeroes: Not Supported 00:25:22.222 Deallocated Guard Field: 0xFFFF 
00:25:22.222 Flush: Supported 00:25:22.222 Reservation: Supported 00:25:22.222 Namespace Sharing Capabilities: Multiple Controllers 00:25:22.222 Size (in LBAs): 131072 (0GiB) 00:25:22.222 Capacity (in LBAs): 131072 (0GiB) 00:25:22.223 Utilization (in LBAs): 131072 (0GiB) 00:25:22.223 NGUID: ABCDEF0123456789ABCDEF0123456789 00:25:22.223 EUI64: ABCDEF0123456789 00:25:22.223 UUID: 37bf36a2-4da3-496b-bf0f-7e94fe628274 00:25:22.223 Thin Provisioning: Not Supported 00:25:22.223 Per-NS Atomic Units: Yes 00:25:22.223 Atomic Boundary Size (Normal): 0 00:25:22.223 Atomic Boundary Size (PFail): 0 00:25:22.223 Atomic Boundary Offset: 0 00:25:22.223 Maximum Single Source Range Length: 65535 00:25:22.223 Maximum Copy Length: 65535 00:25:22.223 Maximum Source Range Count: 1 00:25:22.223 NGUID/EUI64 Never Reused: No 00:25:22.223 Namespace Write Protected: No 00:25:22.223 Number of LBA Formats: 1 00:25:22.223 Current LBA Format: LBA Format #00 00:25:22.223 LBA Format #00: Data Size: 512 Metadata Size: 0 00:25:22.223 00:25:22.223 16:33:48 nvmf_tcp.nvmf_identify -- host/identify.sh@51 -- # sync 00:25:22.223 16:33:48 nvmf_tcp.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:22.223 16:33:48 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:22.223 16:33:48 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:22.223 16:33:48 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:22.223 16:33:48 nvmf_tcp.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:25:22.223 16:33:48 nvmf_tcp.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:25:22.223 16:33:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:22.223 16:33:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@117 -- # sync 00:25:22.223 16:33:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:22.223 16:33:48 nvmf_tcp.nvmf_identify -- 
nvmf/common.sh@120 -- # set +e 00:25:22.223 16:33:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:22.223 16:33:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:22.223 rmmod nvme_tcp 00:25:22.223 rmmod nvme_fabrics 00:25:22.223 rmmod nvme_keyring 00:25:22.223 16:33:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:22.223 16:33:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@124 -- # set -e 00:25:22.223 16:33:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@125 -- # return 0 00:25:22.223 16:33:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 3207459 ']' 00:25:22.223 16:33:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 3207459 00:25:22.223 16:33:48 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@949 -- # '[' -z 3207459 ']' 00:25:22.223 16:33:48 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # kill -0 3207459 00:25:22.223 16:33:48 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # uname 00:25:22.223 16:33:48 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:25:22.223 16:33:48 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3207459 00:25:22.223 16:33:49 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:25:22.223 16:33:49 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:25:22.223 16:33:49 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3207459' 00:25:22.223 killing process with pid 3207459 00:25:22.223 16:33:49 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@968 -- # kill 3207459 00:25:22.223 16:33:49 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@973 -- # wait 3207459 00:25:22.484 16:33:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:22.484 16:33:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@495 -- # 
[[ tcp == \t\c\p ]] 00:25:22.484 16:33:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:22.484 16:33:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:22.484 16:33:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:22.484 16:33:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:22.484 16:33:49 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:22.484 16:33:49 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:24.397 16:33:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:24.397 00:25:24.397 real 0m10.406s 00:25:24.397 user 0m7.537s 00:25:24.397 sys 0m5.350s 00:25:24.397 16:33:51 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1125 -- # xtrace_disable 00:25:24.397 16:33:51 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:24.397 ************************************ 00:25:24.398 END TEST nvmf_identify 00:25:24.398 ************************************ 00:25:24.659 16:33:51 nvmf_tcp -- nvmf/nvmf.sh@99 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:25:24.659 16:33:51 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:25:24.659 16:33:51 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:25:24.659 16:33:51 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:24.659 ************************************ 00:25:24.659 START TEST nvmf_perf 00:25:24.659 ************************************ 00:25:24.659 16:33:51 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:25:24.659 * Looking for test storage... 
00:25:24.659 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:24.659 16:33:51 nvmf_tcp.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:24.659 16:33:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:25:24.659 16:33:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:24.659 16:33:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:24.659 16:33:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:24.659 16:33:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:24.659 16:33:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:24.659 16:33:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:24.659 16:33:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:24.659 16:33:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:24.659 16:33:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:24.659 16:33:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:24.659 16:33:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:24.659 16:33:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:24.659 16:33:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:24.659 16:33:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:24.659 16:33:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:24.659 16:33:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:24.659 16:33:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:24.659 16:33:51 nvmf_tcp.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:24.659 16:33:51 nvmf_tcp.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:24.659 16:33:51 nvmf_tcp.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:24.659 16:33:51 nvmf_tcp.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:24.659 16:33:51 nvmf_tcp.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:24.659 16:33:51 nvmf_tcp.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:24.659 16:33:51 nvmf_tcp.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:25:24.659 16:33:51 nvmf_tcp.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:24.659 16:33:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:25:24.659 16:33:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:24.659 16:33:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:24.659 16:33:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:24.659 16:33:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:24.659 16:33:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:24.659 16:33:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:24.659 16:33:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:24.659 16:33:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@51 -- # 
have_pci_nics=0 00:25:24.659 16:33:51 nvmf_tcp.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:25:24.659 16:33:51 nvmf_tcp.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:25:24.659 16:33:51 nvmf_tcp.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:25:24.659 16:33:51 nvmf_tcp.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:25:24.659 16:33:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:24.659 16:33:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:24.659 16:33:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:24.659 16:33:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:24.659 16:33:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:24.660 16:33:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:24.660 16:33:51 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:24.660 16:33:51 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:24.660 16:33:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:24.660 16:33:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:24.660 16:33:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@285 -- # xtrace_disable 00:25:24.660 16:33:51 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:25:32.812 16:33:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:32.812 16:33:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # pci_devs=() 00:25:32.812 16:33:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:32.812 16:33:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:32.812 16:33:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:32.812 16:33:58 
nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:32.812 16:33:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:32.812 16:33:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # net_devs=() 00:25:32.812 16:33:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:32.812 16:33:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # e810=() 00:25:32.812 16:33:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # local -ga e810 00:25:32.812 16:33:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # x722=() 00:25:32.812 16:33:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # local -ga x722 00:25:32.812 16:33:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # mlx=() 00:25:32.812 16:33:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # local -ga mlx 00:25:32.812 16:33:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:32.812 16:33:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:32.812 16:33:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:32.812 16:33:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:32.812 16:33:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:32.812 16:33:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:32.812 16:33:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:32.812 16:33:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:32.812 16:33:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:32.812 16:33:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:32.812 16:33:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@318 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:32.812 16:33:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:32.812 16:33:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:32.812 16:33:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:32.812 16:33:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:32.812 16:33:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:32.812 16:33:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:32.812 16:33:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:32.812 16:33:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:25:32.812 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:25:32.812 16:33:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:32.812 16:33:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:32.812 16:33:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:32.812 16:33:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:32.812 16:33:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:32.812 16:33:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:32.812 16:33:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:25:32.812 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:25:32.812 16:33:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:32.812 16:33:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:32.812 16:33:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:32.812 16:33:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:32.812 16:33:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma 
]] 00:25:32.812 16:33:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:32.812 16:33:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:32.812 16:33:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:32.812 16:33:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:32.812 16:33:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:32.812 16:33:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:32.812 16:33:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:32.812 16:33:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:32.812 16:33:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:32.812 16:33:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:32.812 16:33:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:25:32.812 Found net devices under 0000:4b:00.0: cvl_0_0 00:25:32.812 16:33:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:32.812 16:33:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:32.812 16:33:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:32.812 16:33:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:32.812 16:33:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:32.812 16:33:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:32.812 16:33:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:32.812 16:33:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:32.812 16:33:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: 
cvl_0_1' 00:25:32.812 Found net devices under 0000:4b:00.1: cvl_0_1 00:25:32.812 16:33:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:32.812 16:33:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:32.812 16:33:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # is_hw=yes 00:25:32.812 16:33:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:32.812 16:33:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:32.812 16:33:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:32.812 16:33:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:32.812 16:33:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:32.812 16:33:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:32.812 16:33:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:32.812 16:33:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:32.812 16:33:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:32.812 16:33:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:32.812 16:33:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:32.812 16:33:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:32.812 16:33:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:32.812 16:33:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:32.812 16:33:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:32.812 16:33:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:32.812 16:33:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:32.812 
16:33:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:32.812 16:33:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:32.812 16:33:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:32.812 16:33:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:32.812 16:33:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:32.812 16:33:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:32.812 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:32.812 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.472 ms 00:25:32.812 00:25:32.812 --- 10.0.0.2 ping statistics --- 00:25:32.812 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:32.812 rtt min/avg/max/mdev = 0.472/0.472/0.472/0.000 ms 00:25:32.812 16:33:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:32.812 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:32.812 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.459 ms 00:25:32.812 00:25:32.812 --- 10.0.0.1 ping statistics --- 00:25:32.812 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:32.812 rtt min/avg/max/mdev = 0.459/0.459/0.459/0.000 ms 00:25:32.812 16:33:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:32.813 16:33:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@422 -- # return 0 00:25:32.813 16:33:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:32.813 16:33:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:32.813 16:33:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:32.813 16:33:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:32.813 16:33:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:32.813 16:33:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:32.813 16:33:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:32.813 16:33:58 nvmf_tcp.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:25:32.813 16:33:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:32.813 16:33:58 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@723 -- # xtrace_disable 00:25:32.813 16:33:58 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:25:32.813 16:33:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=3211799 00:25:32.813 16:33:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 3211799 00:25:32.813 16:33:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:32.813 16:33:58 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@830 -- # '[' -z 3211799 ']' 00:25:32.813 16:33:58 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@834 -- # 
local rpc_addr=/var/tmp/spdk.sock 00:25:32.813 16:33:58 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@835 -- # local max_retries=100 00:25:32.813 16:33:58 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:32.813 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:32.813 16:33:58 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@839 -- # xtrace_disable 00:25:32.813 16:33:58 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:25:32.813 [2024-06-07 16:33:58.595937] Starting SPDK v24.09-pre git sha1 5a57befde / DPDK 24.03.0 initialization... 00:25:32.813 [2024-06-07 16:33:58.596001] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:32.813 EAL: No free 2048 kB hugepages reported on node 1 00:25:32.813 [2024-06-07 16:33:58.667437] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:32.813 [2024-06-07 16:33:58.742231] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:32.813 [2024-06-07 16:33:58.742271] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:32.813 [2024-06-07 16:33:58.742278] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:32.813 [2024-06-07 16:33:58.742285] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:32.813 [2024-06-07 16:33:58.742290] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:25:32.813 [2024-06-07 16:33:58.742446] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:25:32.813 [2024-06-07 16:33:58.742516] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 2 00:25:32.813 [2024-06-07 16:33:58.742680] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:25:32.813 [2024-06-07 16:33:58.742680] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 3 00:25:32.813 16:33:59 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:25:32.813 16:33:59 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@863 -- # return 0 00:25:32.813 16:33:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:32.813 16:33:59 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@729 -- # xtrace_disable 00:25:32.813 16:33:59 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:25:32.813 16:33:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:32.813 16:33:59 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:25:32.813 16:33:59 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:25:33.073 16:33:59 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:25:33.073 16:33:59 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:25:33.334 16:34:00 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:65:00.0 00:25:33.334 16:34:00 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:25:33.595 16:34:00 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:25:33.595 16:34:00 nvmf_tcp.nvmf_perf -- host/perf.sh@33 -- # '[' -n 
0000:65:00.0 ']' 00:25:33.595 16:34:00 nvmf_tcp.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:25:33.595 16:34:00 nvmf_tcp.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:25:33.595 16:34:00 nvmf_tcp.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:25:33.595 [2024-06-07 16:34:00.389836] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:33.596 16:34:00 nvmf_tcp.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:33.856 16:34:00 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:25:33.856 16:34:00 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:34.118 16:34:00 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:25:34.118 16:34:00 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:25:34.118 16:34:00 nvmf_tcp.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:34.378 [2024-06-07 16:34:01.052303] tcp.c: 982:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:34.378 16:34:01 nvmf_tcp.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:25:34.639 16:34:01 nvmf_tcp.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:65:00.0 ']' 00:25:34.639 16:34:01 nvmf_tcp.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0' 
00:25:34.639 16:34:01 nvmf_tcp.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:25:34.639 16:34:01 nvmf_tcp.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0' 00:25:36.024 Initializing NVMe Controllers 00:25:36.024 Attached to NVMe Controller at 0000:65:00.0 [144d:a80a] 00:25:36.024 Associating PCIE (0000:65:00.0) NSID 1 with lcore 0 00:25:36.024 Initialization complete. Launching workers. 00:25:36.024 ======================================================== 00:25:36.024 Latency(us) 00:25:36.024 Device Information : IOPS MiB/s Average min max 00:25:36.024 PCIE (0000:65:00.0) NSID 1 from core 0: 79202.37 309.38 403.63 13.22 4823.97 00:25:36.024 ======================================================== 00:25:36.024 Total : 79202.37 309.38 403.63 13.22 4823.97 00:25:36.024 00:25:36.024 16:34:02 nvmf_tcp.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:36.024 EAL: No free 2048 kB hugepages reported on node 1 00:25:37.411 Initializing NVMe Controllers 00:25:37.411 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:37.411 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:25:37.411 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:25:37.411 Initialization complete. Launching workers. 
00:25:37.411 ======================================================== 00:25:37.411 Latency(us) 00:25:37.411 Device Information : IOPS MiB/s Average min max 00:25:37.411 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 79.00 0.31 13162.84 428.76 45783.08 00:25:37.411 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 54.00 0.21 19065.65 7954.67 47890.23 00:25:37.411 ======================================================== 00:25:37.411 Total : 133.00 0.52 15559.47 428.76 47890.23 00:25:37.411 00:25:37.411 16:34:03 nvmf_tcp.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:37.411 EAL: No free 2048 kB hugepages reported on node 1 00:25:38.858 Initializing NVMe Controllers 00:25:38.858 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:38.858 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:25:38.858 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:25:38.859 Initialization complete. Launching workers. 
00:25:38.859 ======================================================== 00:25:38.859 Latency(us) 00:25:38.859 Device Information : IOPS MiB/s Average min max 00:25:38.859 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 10321.16 40.32 3101.73 476.35 6670.47 00:25:38.859 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3893.31 15.21 8262.18 6190.52 15713.92 00:25:38.859 ======================================================== 00:25:38.859 Total : 14214.47 55.53 4515.17 476.35 15713.92 00:25:38.859 00:25:38.859 16:34:05 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:25:38.859 16:34:05 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:25:38.859 16:34:05 nvmf_tcp.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:38.859 EAL: No free 2048 kB hugepages reported on node 1 00:25:41.412 Initializing NVMe Controllers 00:25:41.412 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:41.412 Controller IO queue size 128, less than required. 00:25:41.412 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:41.412 Controller IO queue size 128, less than required. 00:25:41.412 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:41.412 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:25:41.412 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:25:41.412 Initialization complete. Launching workers. 
00:25:41.412 ======================================================== 00:25:41.412 Latency(us) 00:25:41.412 Device Information : IOPS MiB/s Average min max 00:25:41.412 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 995.37 248.84 133906.45 87370.02 173165.13 00:25:41.412 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 569.43 142.36 234225.14 69101.99 390440.46 00:25:41.412 ======================================================== 00:25:41.412 Total : 1564.80 391.20 170412.20 69101.99 390440.46 00:25:41.412 00:25:41.412 16:34:07 nvmf_tcp.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:25:41.412 EAL: No free 2048 kB hugepages reported on node 1 00:25:41.412 No valid NVMe controllers or AIO or URING devices found 00:25:41.412 Initializing NVMe Controllers 00:25:41.412 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:41.412 Controller IO queue size 128, less than required. 00:25:41.412 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:41.412 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:25:41.412 Controller IO queue size 128, less than required. 00:25:41.412 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:41.412 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. 
Removing this ns from test 00:25:41.412 WARNING: Some requested NVMe devices were skipped 00:25:41.412 16:34:07 nvmf_tcp.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:25:41.412 EAL: No free 2048 kB hugepages reported on node 1 00:25:43.959 Initializing NVMe Controllers 00:25:43.959 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:43.959 Controller IO queue size 128, less than required. 00:25:43.959 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:43.959 Controller IO queue size 128, less than required. 00:25:43.959 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:43.959 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:25:43.959 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:25:43.959 Initialization complete. Launching workers. 
00:25:43.959 00:25:43.959 ==================== 00:25:43.959 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:25:43.959 TCP transport: 00:25:43.959 polls: 39327 00:25:43.959 idle_polls: 15742 00:25:43.959 sock_completions: 23585 00:25:43.959 nvme_completions: 4229 00:25:43.959 submitted_requests: 6368 00:25:43.959 queued_requests: 1 00:25:43.959 00:25:43.959 ==================== 00:25:43.959 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:25:43.959 TCP transport: 00:25:43.959 polls: 37883 00:25:43.959 idle_polls: 14858 00:25:43.959 sock_completions: 23025 00:25:43.959 nvme_completions: 4045 00:25:43.959 submitted_requests: 6048 00:25:43.959 queued_requests: 1 00:25:43.959 ======================================================== 00:25:43.959 Latency(us) 00:25:43.959 Device Information : IOPS MiB/s Average min max 00:25:43.959 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1057.00 264.25 125411.67 63838.11 179317.36 00:25:43.959 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1011.00 252.75 128476.38 53266.40 190128.74 00:25:43.959 ======================================================== 00:25:43.959 Total : 2068.00 517.00 126909.94 53266.40 190128.74 00:25:43.959 00:25:43.959 16:34:10 nvmf_tcp.nvmf_perf -- host/perf.sh@66 -- # sync 00:25:43.959 16:34:10 nvmf_tcp.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:43.959 16:34:10 nvmf_tcp.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:25:43.959 16:34:10 nvmf_tcp.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:25:43.959 16:34:10 nvmf_tcp.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:25:43.959 16:34:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:43.959 16:34:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@117 -- # sync 00:25:43.959 16:34:10 
nvmf_tcp.nvmf_perf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:43.959 16:34:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@120 -- # set +e 00:25:43.959 16:34:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:43.959 16:34:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:43.959 rmmod nvme_tcp 00:25:43.959 rmmod nvme_fabrics 00:25:43.959 rmmod nvme_keyring 00:25:43.959 16:34:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:43.959 16:34:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@124 -- # set -e 00:25:43.959 16:34:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@125 -- # return 0 00:25:43.959 16:34:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 3211799 ']' 00:25:43.959 16:34:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 3211799 00:25:43.959 16:34:10 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@949 -- # '[' -z 3211799 ']' 00:25:43.959 16:34:10 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # kill -0 3211799 00:25:43.959 16:34:10 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # uname 00:25:43.959 16:34:10 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:25:43.959 16:34:10 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3211799 00:25:43.959 16:34:10 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:25:43.960 16:34:10 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:25:43.960 16:34:10 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3211799' 00:25:43.960 killing process with pid 3211799 00:25:43.960 16:34:10 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@968 -- # kill 3211799 00:25:43.960 16:34:10 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@973 -- # wait 3211799 00:25:45.870 16:34:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:45.870 16:34:12 
nvmf_tcp.nvmf_perf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:45.870 16:34:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:45.870 16:34:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:45.870 16:34:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:45.870 16:34:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:45.870 16:34:12 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:45.870 16:34:12 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:48.418 16:34:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:48.418 00:25:48.418 real 0m23.490s 00:25:48.418 user 0m57.472s 00:25:48.418 sys 0m7.638s 00:25:48.418 16:34:14 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1125 -- # xtrace_disable 00:25:48.418 16:34:14 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:25:48.418 ************************************ 00:25:48.418 END TEST nvmf_perf 00:25:48.418 ************************************ 00:25:48.418 16:34:14 nvmf_tcp -- nvmf/nvmf.sh@100 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:25:48.418 16:34:14 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:25:48.418 16:34:14 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:25:48.418 16:34:14 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:48.418 ************************************ 00:25:48.418 START TEST nvmf_fio_host 00:25:48.418 ************************************ 00:25:48.418 16:34:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:25:48.418 * Looking for test storage... 
00:25:48.418 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:48.418 16:34:14 nvmf_tcp.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:48.418 16:34:14 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:48.418 16:34:14 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:48.418 16:34:14 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:48.418 16:34:14 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:48.418 16:34:14 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:48.419 16:34:14 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:48.419 16:34:14 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:25:48.419 16:34:14 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:48.419 16:34:14 nvmf_tcp.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:48.419 16:34:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:25:48.419 16:34:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:48.419 16:34:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:48.419 16:34:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:48.419 16:34:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:48.419 16:34:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:48.419 16:34:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@13 -- # 
NVMF_IP_LEAST_ADDR=8 00:25:48.419 16:34:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:48.419 16:34:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:48.419 16:34:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:48.419 16:34:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:48.419 16:34:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:48.419 16:34:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:48.419 16:34:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:48.419 16:34:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:48.419 16:34:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:48.419 16:34:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:48.419 16:34:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:48.419 16:34:15 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:48.419 16:34:15 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:48.419 16:34:15 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:48.419 16:34:15 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:48.419 16:34:15 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:48.419 16:34:15 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:48.419 16:34:15 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:25:48.419 
16:34:15 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:48.419 16:34:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:25:48.419 16:34:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:48.419 16:34:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:48.419 16:34:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:48.419 16:34:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:48.419 16:34:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:48.419 16:34:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:48.419 16:34:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:48.419 16:34:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:48.419 16:34:15 nvmf_tcp.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:25:48.419 16:34:15 nvmf_tcp.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:25:48.419 16:34:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:48.419 16:34:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:48.419 16:34:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 
00:25:48.419 16:34:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:48.419 16:34:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:48.419 16:34:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:48.419 16:34:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:48.419 16:34:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:48.419 16:34:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:48.419 16:34:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:48.419 16:34:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@285 -- # xtrace_disable 00:25:48.419 16:34:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:25:55.011 16:34:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:55.011 16:34:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # pci_devs=() 00:25:55.011 16:34:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:55.011 16:34:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:55.011 16:34:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:55.011 16:34:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:55.011 16:34:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:55.011 16:34:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # net_devs=() 00:25:55.011 16:34:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:55.011 16:34:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # e810=() 00:25:55.011 16:34:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # local -ga e810 00:25:55.011 16:34:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # x722=() 00:25:55.011 16:34:21 nvmf_tcp.nvmf_fio_host -- 
nvmf/common.sh@297 -- # local -ga x722 00:25:55.011 16:34:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # mlx=() 00:25:55.011 16:34:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # local -ga mlx 00:25:55.011 16:34:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:55.011 16:34:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:55.011 16:34:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:55.011 16:34:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:55.011 16:34:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:55.011 16:34:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:55.011 16:34:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:55.011 16:34:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:55.011 16:34:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:55.011 16:34:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:55.011 16:34:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:55.011 16:34:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:55.011 16:34:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:55.011 16:34:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:55.011 16:34:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:55.011 16:34:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:55.011 16:34:21 nvmf_tcp.nvmf_fio_host -- 
nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:55.011 16:34:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:55.011 16:34:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:25:55.011 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:25:55.011 16:34:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:55.011 16:34:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:55.011 16:34:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:55.011 16:34:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:55.011 16:34:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:55.011 16:34:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:55.011 16:34:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:25:55.011 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:25:55.011 16:34:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:55.011 16:34:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:55.011 16:34:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:55.011 16:34:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:55.011 16:34:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:55.011 16:34:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:55.011 16:34:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:55.011 16:34:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:55.011 16:34:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:55.011 16:34:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:55.012 16:34:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:55.012 16:34:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:55.012 16:34:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:55.012 16:34:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:55.012 16:34:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:55.012 16:34:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:25:55.012 Found net devices under 0000:4b:00.0: cvl_0_0 00:25:55.012 16:34:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:55.012 16:34:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:55.012 16:34:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:55.012 16:34:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:55.012 16:34:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:55.012 16:34:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:55.012 16:34:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:55.012 16:34:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:55.012 16:34:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:25:55.012 Found net devices under 0000:4b:00.1: cvl_0_1 00:25:55.012 16:34:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:55.012 16:34:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:55.012 16:34:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # is_hw=yes 00:25:55.012 
16:34:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:55.012 16:34:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:55.012 16:34:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:55.012 16:34:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:55.012 16:34:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:55.012 16:34:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:55.012 16:34:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:55.012 16:34:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:55.012 16:34:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:55.012 16:34:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:55.012 16:34:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:55.012 16:34:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:55.012 16:34:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:55.012 16:34:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:55.012 16:34:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:55.272 16:34:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:55.272 16:34:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:55.272 16:34:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:55.272 16:34:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:55.273 16:34:22 nvmf_tcp.nvmf_fio_host -- 
nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:55.273 16:34:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:55.533 16:34:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:55.533 16:34:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:55.533 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:55.533 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.457 ms 00:25:55.533 00:25:55.533 --- 10.0.0.2 ping statistics --- 00:25:55.533 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:55.533 rtt min/avg/max/mdev = 0.457/0.457/0.457/0.000 ms 00:25:55.533 16:34:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:55.533 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:55.533 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.354 ms 00:25:55.533 00:25:55.533 --- 10.0.0.1 ping statistics --- 00:25:55.533 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:55.533 rtt min/avg/max/mdev = 0.354/0.354/0.354/0.000 ms 00:25:55.533 16:34:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:55.533 16:34:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@422 -- # return 0 00:25:55.533 16:34:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:55.533 16:34:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:55.533 16:34:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:55.533 16:34:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:55.533 16:34:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:55.533 16:34:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:55.533 16:34:22 
nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:55.533 16:34:22 nvmf_tcp.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:25:55.533 16:34:22 nvmf_tcp.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:25:55.533 16:34:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@723 -- # xtrace_disable 00:25:55.533 16:34:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:25:55.534 16:34:22 nvmf_tcp.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=3218670 00:25:55.534 16:34:22 nvmf_tcp.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:55.534 16:34:22 nvmf_tcp.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:55.534 16:34:22 nvmf_tcp.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 3218670 00:25:55.534 16:34:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@830 -- # '[' -z 3218670 ']' 00:25:55.534 16:34:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:55.534 16:34:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@835 -- # local max_retries=100 00:25:55.534 16:34:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:55.534 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:55.534 16:34:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@839 -- # xtrace_disable 00:25:55.534 16:34:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:25:55.534 [2024-06-07 16:34:22.272241] Starting SPDK v24.09-pre git sha1 5a57befde / DPDK 24.03.0 initialization... 
00:25:55.534 [2024-06-07 16:34:22.272308] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:55.534 EAL: No free 2048 kB hugepages reported on node 1 00:25:55.534 [2024-06-07 16:34:22.344662] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:55.793 [2024-06-07 16:34:22.421981] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:55.793 [2024-06-07 16:34:22.422020] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:55.793 [2024-06-07 16:34:22.422028] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:55.793 [2024-06-07 16:34:22.422034] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:55.793 [2024-06-07 16:34:22.422040] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:25:55.793 [2024-06-07 16:34:22.422177] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:25:55.793 [2024-06-07 16:34:22.422314] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 2 00:25:55.793 [2024-06-07 16:34:22.422471] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 3 00:25:55.793 [2024-06-07 16:34:22.422679] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:25:56.364 16:34:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:25:56.364 16:34:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@863 -- # return 0 00:25:56.364 16:34:23 nvmf_tcp.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:25:56.364 [2024-06-07 16:34:23.185248] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:56.623 16:34:23 nvmf_tcp.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:25:56.623 16:34:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@729 -- # xtrace_disable 00:25:56.623 16:34:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:25:56.623 16:34:23 nvmf_tcp.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:25:56.623 Malloc1 00:25:56.623 16:34:23 nvmf_tcp.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:56.883 16:34:23 nvmf_tcp.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:25:57.144 16:34:23 nvmf_tcp.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:57.144 
[2024-06-07 16:34:23.914715] tcp.c: 982:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:57.144 16:34:23 nvmf_tcp.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:25:57.404 16:34:24 nvmf_tcp.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:25:57.404 16:34:24 nvmf_tcp.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:25:57.404 16:34:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1359 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:25:57.404 16:34:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1336 -- # local fio_dir=/usr/src/fio 00:25:57.404 16:34:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1338 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:57.404 16:34:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1338 -- # local sanitizers 00:25:57.404 16:34:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:25:57.404 16:34:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # shift 00:25:57.404 16:34:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # local asan_lib= 00:25:57.404 16:34:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:25:57.404 16:34:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # ldd 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:25:57.404 16:34:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # grep libasan 00:25:57.404 16:34:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:25:57.404 16:34:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # asan_lib= 00:25:57.404 16:34:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:25:57.404 16:34:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:25:57.404 16:34:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:25:57.404 16:34:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # grep libclang_rt.asan 00:25:57.404 16:34:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:25:57.404 16:34:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # asan_lib= 00:25:57.404 16:34:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:25:57.404 16:34:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1351 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:25:57.404 16:34:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1351 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:25:57.664 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:25:57.664 fio-3.35 00:25:57.664 Starting 1 thread 00:25:57.664 EAL: No free 2048 kB hugepages reported on node 1 00:26:00.206 00:26:00.206 test: (groupid=0, jobs=1): err= 0: pid=3219385: Fri Jun 7 16:34:26 2024 00:26:00.206 read: IOPS=13.5k, BW=52.7MiB/s (55.2MB/s)(106MiB/2004msec) 00:26:00.206 slat (usec): 
min=2, max=272, avg= 2.19, stdev= 2.36 00:26:00.206 clat (usec): min=3342, max=9249, avg=5230.99, stdev=386.96 00:26:00.206 lat (usec): min=3376, max=9251, avg=5233.17, stdev=387.03 00:26:00.206 clat percentiles (usec): 00:26:00.206 | 1.00th=[ 4359], 5.00th=[ 4621], 10.00th=[ 4752], 20.00th=[ 4948], 00:26:00.206 | 30.00th=[ 5014], 40.00th=[ 5145], 50.00th=[ 5211], 60.00th=[ 5342], 00:26:00.206 | 70.00th=[ 5407], 80.00th=[ 5538], 90.00th=[ 5669], 95.00th=[ 5800], 00:26:00.206 | 99.00th=[ 6128], 99.50th=[ 6521], 99.90th=[ 7832], 99.95th=[ 8291], 00:26:00.206 | 99.99th=[ 9241] 00:26:00.206 bw ( KiB/s): min=52728, max=54320, per=99.91%, avg=53908.00, stdev=786.78, samples=4 00:26:00.206 iops : min=13182, max=13580, avg=13477.00, stdev=196.70, samples=4 00:26:00.206 write: IOPS=13.5k, BW=52.6MiB/s (55.2MB/s)(105MiB/2004msec); 0 zone resets 00:26:00.206 slat (usec): min=2, max=212, avg= 2.28, stdev= 1.49 00:26:00.206 clat (usec): min=2710, max=7985, avg=4215.07, stdev=312.01 00:26:00.206 lat (usec): min=2712, max=7987, avg=4217.34, stdev=312.12 00:26:00.206 clat percentiles (usec): 00:26:00.206 | 1.00th=[ 3490], 5.00th=[ 3752], 10.00th=[ 3851], 20.00th=[ 3982], 00:26:00.206 | 30.00th=[ 4080], 40.00th=[ 4146], 50.00th=[ 4228], 60.00th=[ 4293], 00:26:00.206 | 70.00th=[ 4359], 80.00th=[ 4424], 90.00th=[ 4555], 95.00th=[ 4686], 00:26:00.206 | 99.00th=[ 4948], 99.50th=[ 5145], 99.90th=[ 6456], 99.95th=[ 6783], 00:26:00.206 | 99.99th=[ 7767] 00:26:00.206 bw ( KiB/s): min=53040, max=54248, per=100.00%, avg=53900.00, stdev=579.05, samples=4 00:26:00.206 iops : min=13260, max=13562, avg=13475.00, stdev=144.76, samples=4 00:26:00.206 lat (msec) : 4=10.83%, 10=89.17% 00:26:00.206 cpu : usr=65.70%, sys=29.31%, ctx=28, majf=0, minf=6 00:26:00.206 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:26:00.206 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:00.206 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.1% 00:26:00.206 issued rwts: total=27031,27000,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:00.206 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:00.206 00:26:00.206 Run status group 0 (all jobs): 00:26:00.206 READ: bw=52.7MiB/s (55.2MB/s), 52.7MiB/s-52.7MiB/s (55.2MB/s-55.2MB/s), io=106MiB (111MB), run=2004-2004msec 00:26:00.206 WRITE: bw=52.6MiB/s (55.2MB/s), 52.6MiB/s-52.6MiB/s (55.2MB/s-55.2MB/s), io=105MiB (111MB), run=2004-2004msec 00:26:00.206 16:34:26 nvmf_tcp.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:26:00.206 16:34:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1359 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:26:00.206 16:34:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1336 -- # local fio_dir=/usr/src/fio 00:26:00.206 16:34:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1338 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:00.206 16:34:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1338 -- # local sanitizers 00:26:00.206 16:34:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:26:00.206 16:34:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # shift 00:26:00.206 16:34:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # local asan_lib= 00:26:00.206 16:34:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:26:00.206 16:34:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:26:00.206 
16:34:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # grep libasan 00:26:00.206 16:34:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:26:00.206 16:34:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # asan_lib= 00:26:00.206 16:34:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:26:00.206 16:34:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:26:00.206 16:34:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:26:00.206 16:34:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # grep libclang_rt.asan 00:26:00.206 16:34:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:26:00.206 16:34:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # asan_lib= 00:26:00.206 16:34:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:26:00.207 16:34:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1351 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:26:00.207 16:34:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1351 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:26:00.467 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:26:00.467 fio-3.35 00:26:00.467 Starting 1 thread 00:26:00.727 EAL: No free 2048 kB hugepages reported on node 1 00:26:03.269 00:26:03.269 test: (groupid=0, jobs=1): err= 0: pid=3219997: Fri Jun 7 16:34:29 2024 00:26:03.269 read: IOPS=8950, BW=140MiB/s (147MB/s)(281MiB/2007msec) 00:26:03.269 slat (usec): min=3, max=112, avg= 3.64, stdev= 1.60 00:26:03.269 clat (usec): min=1593, max=20887, 
avg=8742.41, stdev=2273.25 00:26:03.269 lat (usec): min=1596, max=20890, avg=8746.05, stdev=2273.37 00:26:03.269 clat percentiles (usec): 00:26:03.269 | 1.00th=[ 4424], 5.00th=[ 5342], 10.00th=[ 5932], 20.00th=[ 6718], 00:26:03.269 | 30.00th=[ 7439], 40.00th=[ 8029], 50.00th=[ 8586], 60.00th=[ 9241], 00:26:03.269 | 70.00th=[ 9765], 80.00th=[10814], 90.00th=[11731], 95.00th=[12387], 00:26:03.269 | 99.00th=[14353], 99.50th=[15926], 99.90th=[17957], 99.95th=[20579], 00:26:03.269 | 99.99th=[20841] 00:26:03.269 bw ( KiB/s): min=61344, max=85365, per=49.33%, avg=70653.25, stdev=11213.49, samples=4 00:26:03.269 iops : min= 3834, max= 5335, avg=4415.75, stdev=700.71, samples=4 00:26:03.269 write: IOPS=5257, BW=82.1MiB/s (86.1MB/s)(144MiB/1752msec); 0 zone resets 00:26:03.269 slat (usec): min=40, max=323, avg=41.00, stdev= 6.52 00:26:03.269 clat (usec): min=3426, max=15634, avg=9554.64, stdev=1548.13 00:26:03.269 lat (usec): min=3466, max=15674, avg=9595.64, stdev=1549.35 00:26:03.269 clat percentiles (usec): 00:26:03.269 | 1.00th=[ 6456], 5.00th=[ 7308], 10.00th=[ 7701], 20.00th=[ 8291], 00:26:03.269 | 30.00th=[ 8717], 40.00th=[ 9110], 50.00th=[ 9372], 60.00th=[ 9765], 00:26:03.269 | 70.00th=[10290], 80.00th=[10683], 90.00th=[11600], 95.00th=[12387], 00:26:03.269 | 99.00th=[13698], 99.50th=[13960], 99.90th=[14484], 99.95th=[14746], 00:26:03.269 | 99.99th=[15664] 00:26:03.269 bw ( KiB/s): min=64352, max=88910, per=87.55%, avg=73643.50, stdev=11740.73, samples=4 00:26:03.269 iops : min= 4022, max= 5556, avg=4602.50, stdev=733.42, samples=4 00:26:03.269 lat (msec) : 2=0.03%, 4=0.31%, 10=69.82%, 20=29.78%, 50=0.06% 00:26:03.269 cpu : usr=82.90%, sys=14.21%, ctx=11, majf=0, minf=17 00:26:03.269 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:26:03.269 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:03.269 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:26:03.269 issued rwts: 
total=17964,9211,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:03.269 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:03.269 00:26:03.269 Run status group 0 (all jobs): 00:26:03.270 READ: bw=140MiB/s (147MB/s), 140MiB/s-140MiB/s (147MB/s-147MB/s), io=281MiB (294MB), run=2007-2007msec 00:26:03.270 WRITE: bw=82.1MiB/s (86.1MB/s), 82.1MiB/s-82.1MiB/s (86.1MB/s-86.1MB/s), io=144MiB (151MB), run=1752-1752msec 00:26:03.270 16:34:29 nvmf_tcp.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:03.270 16:34:29 nvmf_tcp.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:26:03.270 16:34:29 nvmf_tcp.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:26:03.270 16:34:29 nvmf_tcp.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:26:03.270 16:34:29 nvmf_tcp.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:26:03.270 16:34:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:03.270 16:34:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:26:03.270 16:34:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:03.270 16:34:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:26:03.270 16:34:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:03.270 16:34:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:03.270 rmmod nvme_tcp 00:26:03.270 rmmod nvme_fabrics 00:26:03.270 rmmod nvme_keyring 00:26:03.270 16:34:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:03.270 16:34:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:26:03.270 16:34:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:26:03.270 16:34:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 3218670 ']' 00:26:03.270 16:34:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@490 -- # 
killprocess 3218670 00:26:03.270 16:34:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@949 -- # '[' -z 3218670 ']' 00:26:03.270 16:34:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # kill -0 3218670 00:26:03.270 16:34:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # uname 00:26:03.270 16:34:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:26:03.270 16:34:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3218670 00:26:03.270 16:34:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:26:03.270 16:34:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:26:03.270 16:34:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3218670' 00:26:03.270 killing process with pid 3218670 00:26:03.270 16:34:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@968 -- # kill 3218670 00:26:03.270 16:34:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@973 -- # wait 3218670 00:26:03.270 16:34:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:03.270 16:34:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:03.270 16:34:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:03.270 16:34:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:03.270 16:34:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:03.270 16:34:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:03.270 16:34:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:03.270 16:34:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:05.853 16:34:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@279 -- # ip -4 addr flush 
cvl_0_1 00:26:05.853 00:26:05.853 real 0m17.263s 00:26:05.853 user 1m7.232s 00:26:05.853 sys 0m7.435s 00:26:05.853 16:34:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1125 -- # xtrace_disable 00:26:05.853 16:34:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.853 ************************************ 00:26:05.853 END TEST nvmf_fio_host 00:26:05.853 ************************************ 00:26:05.853 16:34:32 nvmf_tcp -- nvmf/nvmf.sh@101 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:26:05.853 16:34:32 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:26:05.853 16:34:32 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:26:05.853 16:34:32 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:05.853 ************************************ 00:26:05.853 START TEST nvmf_failover 00:26:05.853 ************************************ 00:26:05.853 16:34:32 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:26:05.853 * Looking for test storage... 
00:26:05.853 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:05.853 16:34:32 nvmf_tcp.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:05.853 16:34:32 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:26:05.853 16:34:32 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:05.853 16:34:32 nvmf_tcp.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:05.854 16:34:32 nvmf_tcp.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:05.854 16:34:32 nvmf_tcp.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:05.854 16:34:32 nvmf_tcp.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:05.854 16:34:32 nvmf_tcp.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:05.854 16:34:32 nvmf_tcp.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:05.854 16:34:32 nvmf_tcp.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:05.854 16:34:32 nvmf_tcp.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:05.854 16:34:32 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:05.854 16:34:32 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:05.854 16:34:32 nvmf_tcp.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:05.854 16:34:32 nvmf_tcp.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:05.854 16:34:32 nvmf_tcp.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:05.854 16:34:32 nvmf_tcp.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:05.854 16:34:32 nvmf_tcp.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:05.854 16:34:32 
nvmf_tcp.nvmf_failover -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:05.854 16:34:32 nvmf_tcp.nvmf_failover -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:05.854 16:34:32 nvmf_tcp.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:05.854 16:34:32 nvmf_tcp.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:05.854 16:34:32 nvmf_tcp.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:05.854 16:34:32 nvmf_tcp.nvmf_failover -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:05.854 16:34:32 nvmf_tcp.nvmf_failover -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:05.854 16:34:32 nvmf_tcp.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:26:05.854 16:34:32 nvmf_tcp.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:05.854 16:34:32 nvmf_tcp.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:26:05.854 16:34:32 nvmf_tcp.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:05.854 16:34:32 nvmf_tcp.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:05.854 16:34:32 nvmf_tcp.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:05.854 16:34:32 nvmf_tcp.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:05.854 16:34:32 nvmf_tcp.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:05.854 16:34:32 nvmf_tcp.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:05.854 16:34:32 nvmf_tcp.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:05.854 16:34:32 
nvmf_tcp.nvmf_failover -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:05.854 16:34:32 nvmf_tcp.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:05.854 16:34:32 nvmf_tcp.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:05.854 16:34:32 nvmf_tcp.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:26:05.854 16:34:32 nvmf_tcp.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:26:05.854 16:34:32 nvmf_tcp.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:26:05.854 16:34:32 nvmf_tcp.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:05.854 16:34:32 nvmf_tcp.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:05.854 16:34:32 nvmf_tcp.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:05.854 16:34:32 nvmf_tcp.nvmf_failover -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:05.854 16:34:32 nvmf_tcp.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:05.854 16:34:32 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:05.854 16:34:32 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:05.854 16:34:32 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:05.854 16:34:32 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:05.854 16:34:32 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:05.854 16:34:32 nvmf_tcp.nvmf_failover -- nvmf/common.sh@285 -- # xtrace_disable 00:26:05.854 16:34:32 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:26:12.441 16:34:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:12.441 16:34:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # pci_devs=() 00:26:12.441 16:34:39 
nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:12.441 16:34:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:12.441 16:34:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:12.441 16:34:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:12.441 16:34:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:12.441 16:34:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # net_devs=() 00:26:12.441 16:34:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:12.441 16:34:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # e810=() 00:26:12.441 16:34:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # local -ga e810 00:26:12.441 16:34:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # x722=() 00:26:12.441 16:34:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # local -ga x722 00:26:12.441 16:34:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # mlx=() 00:26:12.441 16:34:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # local -ga mlx 00:26:12.441 16:34:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:12.441 16:34:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:12.441 16:34:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:12.441 16:34:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:12.441 16:34:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:12.441 16:34:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:12.441 16:34:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:12.441 16:34:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@314 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:12.441 16:34:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:12.441 16:34:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:12.441 16:34:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:12.441 16:34:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:12.441 16:34:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:12.441 16:34:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:12.441 16:34:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:12.441 16:34:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:12.441 16:34:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:12.441 16:34:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:12.441 16:34:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:26:12.441 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:26:12.441 16:34:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:12.441 16:34:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:12.441 16:34:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:12.441 16:34:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:12.441 16:34:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:12.441 16:34:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:12.441 16:34:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:26:12.441 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:26:12.441 16:34:39 nvmf_tcp.nvmf_failover -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:12.441 16:34:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:12.441 16:34:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:12.441 16:34:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:12.441 16:34:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:12.441 16:34:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:12.441 16:34:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:12.441 16:34:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:12.441 16:34:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:12.441 16:34:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:12.441 16:34:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:12.441 16:34:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:12.441 16:34:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:12.441 16:34:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:12.441 16:34:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:12.441 16:34:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:26:12.441 Found net devices under 0000:4b:00.0: cvl_0_0 00:26:12.441 16:34:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:12.441 16:34:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:12.441 16:34:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:12.441 16:34:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 
00:26:12.441 16:34:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:12.441 16:34:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:12.441 16:34:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:12.441 16:34:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:12.441 16:34:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:26:12.441 Found net devices under 0000:4b:00.1: cvl_0_1 00:26:12.441 16:34:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:12.441 16:34:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:12.441 16:34:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # is_hw=yes 00:26:12.441 16:34:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:12.441 16:34:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:12.441 16:34:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:12.441 16:34:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:12.441 16:34:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:12.441 16:34:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:12.441 16:34:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:12.441 16:34:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:12.441 16:34:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:12.441 16:34:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:12.441 16:34:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:12.441 16:34:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@243 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:12.441 16:34:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:12.441 16:34:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:12.441 16:34:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:12.441 16:34:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:12.441 16:34:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:12.441 16:34:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:12.441 16:34:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:12.441 16:34:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:12.702 16:34:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:12.702 16:34:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:12.702 16:34:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:12.702 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:12.702 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.687 ms 00:26:12.702 00:26:12.702 --- 10.0.0.2 ping statistics --- 00:26:12.702 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:12.702 rtt min/avg/max/mdev = 0.687/0.687/0.687/0.000 ms 00:26:12.702 16:34:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:12.702 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:12.702 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.365 ms 00:26:12.702 00:26:12.702 --- 10.0.0.1 ping statistics --- 00:26:12.702 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:12.702 rtt min/avg/max/mdev = 0.365/0.365/0.365/0.000 ms 00:26:12.702 16:34:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:12.702 16:34:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@422 -- # return 0 00:26:12.702 16:34:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:12.702 16:34:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:12.702 16:34:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:12.702 16:34:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:12.702 16:34:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:12.702 16:34:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:12.702 16:34:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:12.702 16:34:39 nvmf_tcp.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:26:12.702 16:34:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:12.702 16:34:39 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@723 -- # xtrace_disable 00:26:12.702 16:34:39 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:26:12.702 16:34:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=3224547 00:26:12.702 16:34:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 3224547 00:26:12.702 16:34:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:26:12.702 16:34:39 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@830 -- # '[' -z 3224547 ']' 
00:26:12.703 16:34:39 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:12.703 16:34:39 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@835 -- # local max_retries=100 00:26:12.703 16:34:39 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:12.703 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:12.703 16:34:39 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@839 -- # xtrace_disable 00:26:12.703 16:34:39 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:26:12.703 [2024-06-07 16:34:39.517323] Starting SPDK v24.09-pre git sha1 5a57befde / DPDK 24.03.0 initialization... 00:26:12.703 [2024-06-07 16:34:39.517389] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:12.703 EAL: No free 2048 kB hugepages reported on node 1 00:26:12.963 [2024-06-07 16:34:39.604031] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:26:12.963 [2024-06-07 16:34:39.697332] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:12.963 [2024-06-07 16:34:39.697390] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:12.963 [2024-06-07 16:34:39.697398] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:12.963 [2024-06-07 16:34:39.697412] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:12.963 [2024-06-07 16:34:39.697418] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:26:12.963 [2024-06-07 16:34:39.697541] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 2 00:26:12.963 [2024-06-07 16:34:39.697843] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 3 00:26:12.963 [2024-06-07 16:34:39.697843] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:26:13.533 16:34:40 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:26:13.533 16:34:40 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@863 -- # return 0 00:26:13.533 16:34:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:13.533 16:34:40 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@729 -- # xtrace_disable 00:26:13.533 16:34:40 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:26:13.533 16:34:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:13.533 16:34:40 nvmf_tcp.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:26:13.793 [2024-06-07 16:34:40.467887] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:13.793 16:34:40 nvmf_tcp.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:26:14.055 Malloc0 00:26:14.055 16:34:40 nvmf_tcp.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:14.055 16:34:40 nvmf_tcp.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:14.326 16:34:41 nvmf_tcp.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:14.326 [2024-06-07 16:34:41.147953] tcp.c: 982:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:14.326 16:34:41 nvmf_tcp.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:26:14.586 [2024-06-07 16:34:41.316392] tcp.c: 982:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:26:14.586 16:34:41 nvmf_tcp.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:26:14.847 [2024-06-07 16:34:41.476922] tcp.c: 982:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:26:14.847 16:34:41 nvmf_tcp.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=3224911 00:26:14.847 16:34:41 nvmf_tcp.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:26:14.847 16:34:41 nvmf_tcp.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:14.847 16:34:41 nvmf_tcp.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 3224911 /var/tmp/bdevperf.sock 00:26:14.847 16:34:41 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@830 -- # '[' -z 3224911 ']' 00:26:14.847 16:34:41 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:14.847 16:34:41 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@835 -- # local max_retries=100 00:26:14.847 16:34:41 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start 
up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:26:14.847 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:26:14.847 16:34:41 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@839 -- # xtrace_disable
00:26:14.847 16:34:41 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:26:15.787 16:34:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@859 -- # (( i == 0 ))
00:26:15.788 16:34:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@863 -- # return 0
00:26:15.788 16:34:42 nvmf_tcp.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:26:16.048 NVMe0n1
00:26:16.048 16:34:42 nvmf_tcp.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:26:16.308
00:26:16.308 16:34:42 nvmf_tcp.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:26:16.308 16:34:42 nvmf_tcp.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=3225244
00:26:16.308 16:34:42 nvmf_tcp.nvmf_failover -- host/failover.sh@41 -- # sleep 1
00:26:17.267 16:34:43 nvmf_tcp.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:26:17.268 [2024-06-07 16:34:44.093127] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x63e9e0 is same with the state(5) to be set
00:26:17.268 tcp.c:1617:nvmf_tcp_qpair_set_recv_state: last message repeated 13 times for tqpair=0x63e9e0 (identical entries [2024-06-07 16:34:44.093171] through [2024-06-07 16:34:44.093226] collapsed)
00:26:17.527 16:34:44 nvmf_tcp.nvmf_failover -- host/failover.sh@45 -- # sleep 3
00:26:20.834 16:34:47 nvmf_tcp.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:26:20.834
00:26:20.834 16:34:47 nvmf_tcp.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:26:21.095 [2024-06-07 16:34:47.706469] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6400e0 is same with the state(5) to be set
00:26:21.095 tcp.c:1617:nvmf_tcp_qpair_set_recv_state: last message repeated for tqpair=0x6400e0 (identical entries through [2024-06-07 16:34:47.707069] collapsed)
00:26:21.097 16:34:47 nvmf_tcp.nvmf_failover -- host/failover.sh@50 -- # sleep 3
00:26:24.395 16:34:50 nvmf_tcp.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:26:24.395 [2024-06-07 16:34:50.879191] tcp.c: 982:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:26:24.395 16:34:50 nvmf_tcp.nvmf_failover -- host/failover.sh@55 -- # sleep 1
00:26:25.335 16:34:51 nvmf_tcp.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:26:25.335 [2024-06-07 16:34:52.052700]
tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6407c0 is same with the state(5) to be set
00:26:25.335 tcp.c:1617:nvmf_tcp_qpair_set_recv_state: last message repeated for tqpair=0x6407c0 (identical entries through [2024-06-07 16:34:52.053056] collapsed)
00:26:25.336 [2024-06-07 16:34:52.053060] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6407c0
is same with the state(5) to be set 00:26:25.336 [2024-06-07 16:34:52.053065] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6407c0 is same with the state(5) to be set 00:26:25.336 [2024-06-07 16:34:52.053070] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6407c0 is same with the state(5) to be set 00:26:25.336 [2024-06-07 16:34:52.053074] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6407c0 is same with the state(5) to be set 00:26:25.336 [2024-06-07 16:34:52.053079] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6407c0 is same with the state(5) to be set 00:26:25.336 [2024-06-07 16:34:52.053084] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6407c0 is same with the state(5) to be set 00:26:25.336 [2024-06-07 16:34:52.053088] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6407c0 is same with the state(5) to be set 00:26:25.336 [2024-06-07 16:34:52.053093] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6407c0 is same with the state(5) to be set 00:26:25.336 [2024-06-07 16:34:52.053097] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6407c0 is same with the state(5) to be set 00:26:25.336 [2024-06-07 16:34:52.053102] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6407c0 is same with the state(5) to be set 00:26:25.336 [2024-06-07 16:34:52.053106] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6407c0 is same with the state(5) to be set 00:26:25.336 [2024-06-07 16:34:52.053111] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6407c0 is same with the state(5) to be set 00:26:25.336 [2024-06-07 16:34:52.053116] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6407c0 is same with the state(5) to be set 
00:26:25.336 [2024-06-07 16:34:52.053120] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6407c0 is same with the state(5) to be set 00:26:25.336 [2024-06-07 16:34:52.053125] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6407c0 is same with the state(5) to be set 00:26:25.336 [2024-06-07 16:34:52.053130] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6407c0 is same with the state(5) to be set 00:26:25.336 [2024-06-07 16:34:52.053134] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6407c0 is same with the state(5) to be set 00:26:25.336 [2024-06-07 16:34:52.053139] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6407c0 is same with the state(5) to be set 00:26:25.336 [2024-06-07 16:34:52.053144] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6407c0 is same with the state(5) to be set 00:26:25.336 [2024-06-07 16:34:52.053149] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6407c0 is same with the state(5) to be set 00:26:25.336 [2024-06-07 16:34:52.053153] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6407c0 is same with the state(5) to be set 00:26:25.336 [2024-06-07 16:34:52.053158] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6407c0 is same with the state(5) to be set 00:26:25.336 [2024-06-07 16:34:52.053163] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6407c0 is same with the state(5) to be set 00:26:25.336 [2024-06-07 16:34:52.053167] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6407c0 is same with the state(5) to be set 00:26:25.336 [2024-06-07 16:34:52.053171] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6407c0 is same with the state(5) to be set 00:26:25.336 [2024-06-07 16:34:52.053175] 
tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6407c0 is same with the state(5) to be set 00:26:25.336 [2024-06-07 16:34:52.053180] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6407c0 is same with the state(5) to be set 00:26:25.336 [2024-06-07 16:34:52.053184] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6407c0 is same with the state(5) to be set 00:26:25.336 [2024-06-07 16:34:52.053188] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6407c0 is same with the state(5) to be set 00:26:25.336 [2024-06-07 16:34:52.053193] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6407c0 is same with the state(5) to be set 00:26:25.336 [2024-06-07 16:34:52.053197] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6407c0 is same with the state(5) to be set 00:26:25.336 [2024-06-07 16:34:52.053201] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6407c0 is same with the state(5) to be set 00:26:25.336 [2024-06-07 16:34:52.053205] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6407c0 is same with the state(5) to be set 00:26:25.336 [2024-06-07 16:34:52.053210] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6407c0 is same with the state(5) to be set 00:26:25.336 [2024-06-07 16:34:52.053214] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6407c0 is same with the state(5) to be set 00:26:25.336 [2024-06-07 16:34:52.053218] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6407c0 is same with the state(5) to be set 00:26:25.336 [2024-06-07 16:34:52.053223] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6407c0 is same with the state(5) to be set 00:26:25.336 [2024-06-07 16:34:52.053227] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x6407c0 is same with the state(5) to be set 00:26:25.336 [2024-06-07 16:34:52.053231] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6407c0 is same with the state(5) to be set 00:26:25.336 [2024-06-07 16:34:52.053235] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6407c0 is same with the state(5) to be set 00:26:25.336 [2024-06-07 16:34:52.053240] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6407c0 is same with the state(5) to be set 00:26:25.336 [2024-06-07 16:34:52.053245] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6407c0 is same with the state(5) to be set 00:26:25.336 [2024-06-07 16:34:52.053249] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6407c0 is same with the state(5) to be set 00:26:25.336 [2024-06-07 16:34:52.053253] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6407c0 is same with the state(5) to be set 00:26:25.336 [2024-06-07 16:34:52.053258] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6407c0 is same with the state(5) to be set 00:26:25.336 [2024-06-07 16:34:52.053262] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6407c0 is same with the state(5) to be set 00:26:25.336 [2024-06-07 16:34:52.053267] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6407c0 is same with the state(5) to be set 00:26:25.336 [2024-06-07 16:34:52.053272] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6407c0 is same with the state(5) to be set 00:26:25.337 [2024-06-07 16:34:52.053277] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6407c0 is same with the state(5) to be set 00:26:25.337 [2024-06-07 16:34:52.053281] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6407c0 
is same with the state(5) to be set 00:26:25.337 [2024-06-07 16:34:52.053285] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6407c0 is same with the state(5) to be set 00:26:25.337 [2024-06-07 16:34:52.053289] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6407c0 is same with the state(5) to be set 00:26:25.337 [2024-06-07 16:34:52.053294] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6407c0 is same with the state(5) to be set 00:26:25.337 [2024-06-07 16:34:52.053298] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6407c0 is same with the state(5) to be set 00:26:25.337 [2024-06-07 16:34:52.053302] tcp.c:1617:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6407c0 is same with the state(5) to be set 00:26:25.337 16:34:52 nvmf_tcp.nvmf_failover -- host/failover.sh@59 -- # wait 3225244 00:26:31.931 0 00:26:31.931 16:34:58 nvmf_tcp.nvmf_failover -- host/failover.sh@61 -- # killprocess 3224911 00:26:31.931 16:34:58 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@949 -- # '[' -z 3224911 ']' 00:26:31.931 16:34:58 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # kill -0 3224911 00:26:31.931 16:34:58 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # uname 00:26:31.931 16:34:58 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:26:31.931 16:34:58 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3224911 00:26:31.931 16:34:58 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:26:31.931 16:34:58 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:26:31.931 16:34:58 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3224911' 00:26:31.931 killing process with pid 3224911 00:26:31.931 16:34:58 nvmf_tcp.nvmf_failover -- 
common/autotest_common.sh@968 -- # kill 3224911 00:26:31.931 16:34:58 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@973 -- # wait 3224911 00:26:31.931 16:34:58 nvmf_tcp.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:26:31.931 [2024-06-07 16:34:41.543563] Starting SPDK v24.09-pre git sha1 5a57befde / DPDK 24.03.0 initialization... 00:26:31.931 [2024-06-07 16:34:41.543619] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3224911 ] 00:26:31.931 EAL: No free 2048 kB hugepages reported on node 1 00:26:31.931 [2024-06-07 16:34:41.602799] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:31.931 [2024-06-07 16:34:41.666991] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:26:31.931 Running I/O for 15 seconds... 
00:26:31.931 [2024-06-07 16:34:44.093704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:94808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.931 [2024-06-07 16:34:44.093739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[command/completion pair repeated for READ lba:94808 through lba:95296 in steps of 8, len:8, each completed ABORTED - SQ DELETION (00/08)]
00:26:31.933 [2024-06-07 16:34:44.094764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1
lba:95304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.933 [2024-06-07 16:34:44.094771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.933 [2024-06-07 16:34:44.094780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:95312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.933 [2024-06-07 16:34:44.094787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.933 [2024-06-07 16:34:44.094796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:95320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.933 [2024-06-07 16:34:44.094804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.933 [2024-06-07 16:34:44.094813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:95328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.933 [2024-06-07 16:34:44.094820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.933 [2024-06-07 16:34:44.094829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:95336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.933 [2024-06-07 16:34:44.094836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.933 [2024-06-07 16:34:44.094845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:95344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.933 [2024-06-07 16:34:44.094852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.933 
[2024-06-07 16:34:44.094862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:95352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.933 [2024-06-07 16:34:44.094869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.933 [2024-06-07 16:34:44.094878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:95360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.933 [2024-06-07 16:34:44.094885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.933 [2024-06-07 16:34:44.094894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:95368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.933 [2024-06-07 16:34:44.094901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.933 [2024-06-07 16:34:44.094910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:95376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.933 [2024-06-07 16:34:44.094917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.933 [2024-06-07 16:34:44.094926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:95384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.933 [2024-06-07 16:34:44.094934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.933 [2024-06-07 16:34:44.094943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:95392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.933 [2024-06-07 16:34:44.094950] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.933 [2024-06-07 16:34:44.094960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:95400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.933 [2024-06-07 16:34:44.094967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.933 [2024-06-07 16:34:44.094976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:95408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.933 [2024-06-07 16:34:44.094983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.933 [2024-06-07 16:34:44.094992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:95416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.933 [2024-06-07 16:34:44.095000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.933 [2024-06-07 16:34:44.095008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:95424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.933 [2024-06-07 16:34:44.095015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.933 [2024-06-07 16:34:44.095024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:95432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.933 [2024-06-07 16:34:44.095032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.933 [2024-06-07 16:34:44.095041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 
lba:95440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.933 [2024-06-07 16:34:44.095048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.933 [2024-06-07 16:34:44.095057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:95448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.933 [2024-06-07 16:34:44.095064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.933 [2024-06-07 16:34:44.095073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:95456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.933 [2024-06-07 16:34:44.095079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.933 [2024-06-07 16:34:44.095089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:95464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.933 [2024-06-07 16:34:44.095096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.933 [2024-06-07 16:34:44.095106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:95472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.933 [2024-06-07 16:34:44.095113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.933 [2024-06-07 16:34:44.095122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:95480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.933 [2024-06-07 16:34:44.095129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.933 
[2024-06-07 16:34:44.095138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:95488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.933 [2024-06-07 16:34:44.095145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.933 [2024-06-07 16:34:44.095155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:95496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.933 [2024-06-07 16:34:44.095163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.933 [2024-06-07 16:34:44.095172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:95504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.934 [2024-06-07 16:34:44.095179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.934 [2024-06-07 16:34:44.095188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:95512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.934 [2024-06-07 16:34:44.095196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.934 [2024-06-07 16:34:44.095204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:95520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.934 [2024-06-07 16:34:44.095212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.934 [2024-06-07 16:34:44.095221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:95528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.934 [2024-06-07 16:34:44.095227] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.934 [2024-06-07 16:34:44.095236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:95536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.934 [2024-06-07 16:34:44.095243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.934 [2024-06-07 16:34:44.095253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:95544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.934 [2024-06-07 16:34:44.095259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.934 [2024-06-07 16:34:44.095268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:95552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.934 [2024-06-07 16:34:44.095275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.934 [2024-06-07 16:34:44.095284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:95560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.934 [2024-06-07 16:34:44.095291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.934 [2024-06-07 16:34:44.095300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:95568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.934 [2024-06-07 16:34:44.095307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.934 [2024-06-07 16:34:44.095316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 
lba:95576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.934 [2024-06-07 16:34:44.095324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.934 [2024-06-07 16:34:44.095333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:95584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.934 [2024-06-07 16:34:44.095340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.934 [2024-06-07 16:34:44.095349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:95592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.934 [2024-06-07 16:34:44.095357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.934 [2024-06-07 16:34:44.095367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:95600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.934 [2024-06-07 16:34:44.095374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.934 [2024-06-07 16:34:44.095383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:95608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.934 [2024-06-07 16:34:44.095390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.934 [2024-06-07 16:34:44.095399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:95616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.934 [2024-06-07 16:34:44.095410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.934 
[2024-06-07 16:34:44.095419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:95624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.934 [2024-06-07 16:34:44.095427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.934 [2024-06-07 16:34:44.095435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:95632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.934 [2024-06-07 16:34:44.095442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.934 [2024-06-07 16:34:44.095451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:95640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.934 [2024-06-07 16:34:44.095458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.934 [2024-06-07 16:34:44.095468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:95648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.934 [2024-06-07 16:34:44.095475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.934 [2024-06-07 16:34:44.095483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:95656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.934 [2024-06-07 16:34:44.095490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.934 [2024-06-07 16:34:44.095499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:95664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.934 [2024-06-07 16:34:44.095506] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.934 [2024-06-07 16:34:44.095515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:95672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.934 [2024-06-07 16:34:44.095522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.934 [2024-06-07 16:34:44.095532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:95680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.934 [2024-06-07 16:34:44.095540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.934 [2024-06-07 16:34:44.095548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:95688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.934 [2024-06-07 16:34:44.095555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.934 [2024-06-07 16:34:44.095565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:95696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.934 [2024-06-07 16:34:44.095573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.934 [2024-06-07 16:34:44.095582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:95704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.934 [2024-06-07 16:34:44.095589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.934 [2024-06-07 16:34:44.095598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 
lba:95712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.934 [2024-06-07 16:34:44.095605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.934 [2024-06-07 16:34:44.095614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:95720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.934 [2024-06-07 16:34:44.095621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.934 [2024-06-07 16:34:44.095630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:95728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.934 [2024-06-07 16:34:44.095638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.934 [2024-06-07 16:34:44.095646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:95736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.934 [2024-06-07 16:34:44.095653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.934 [2024-06-07 16:34:44.095662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:95744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.934 [2024-06-07 16:34:44.095669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.934 [2024-06-07 16:34:44.095678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:95752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.934 [2024-06-07 16:34:44.095685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.934 
[2024-06-07 16:34:44.095693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:95760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.934 [2024-06-07 16:34:44.095700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.934 [2024-06-07 16:34:44.095709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:95768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.934 [2024-06-07 16:34:44.095716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.934 [2024-06-07 16:34:44.095725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:95776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.934 [2024-06-07 16:34:44.095732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.934 [2024-06-07 16:34:44.095741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:95784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.934 [2024-06-07 16:34:44.095748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.934 [2024-06-07 16:34:44.095756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:95792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.935 [2024-06-07 16:34:44.095763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.935 [2024-06-07 16:34:44.095772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:95800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.935 [2024-06-07 16:34:44.095781] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.935 [2024-06-07 16:34:44.095790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:95808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.935 [2024-06-07 16:34:44.095797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.935 [2024-06-07 16:34:44.095806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:95816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.935 [2024-06-07 16:34:44.095813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.935 [2024-06-07 16:34:44.095833] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:31.935 [2024-06-07 16:34:44.095840] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:31.935 [2024-06-07 16:34:44.095846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95824 len:8 PRP1 0x0 PRP2 0x0 00:26:31.935 [2024-06-07 16:34:44.095853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.935 [2024-06-07 16:34:44.095888] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x12d1e50 was disconnected and freed. reset controller. 
00:26:31.935 [2024-06-07 16:34:44.095898] bdev_nvme.c:1867:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
00:26:31.935 [2024-06-07 16:34:44.095917] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:26:31.935 [2024-06-07 16:34:44.095925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:31.935 [2024-06-07 16:34:44.095933] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:26:31.935 [2024-06-07 16:34:44.095940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:31.935 [2024-06-07 16:34:44.095948] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:26:31.935 [2024-06-07 16:34:44.095954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:31.935 [2024-06-07 16:34:44.095962] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:26:31.935 [2024-06-07 16:34:44.095969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:31.935 [2024-06-07 16:34:44.095976] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:31.935 [2024-06-07 16:34:44.099579] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:31.935 [2024-06-07 16:34:44.099603] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12b3140 (9): Bad file descriptor
00:26:31.935 [2024-06-07 16:34:44.143099] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
[2024-06-07 16:34:47.707607 - 16:34:47.707908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 nsid:1 lba:42976..43096 (step 8) len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each READ followed by nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED
- SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.935 [2024-06-07 16:34:47.707917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:43104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.935 [2024-06-07 16:34:47.707924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.935 [2024-06-07 16:34:47.707933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:43112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.935 [2024-06-07 16:34:47.707940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.935 [2024-06-07 16:34:47.707949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:43120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.935 [2024-06-07 16:34:47.707956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.935 [2024-06-07 16:34:47.707965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:43128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.935 [2024-06-07 16:34:47.707972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.935 [2024-06-07 16:34:47.707981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:43136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.935 [2024-06-07 16:34:47.707988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.935 [2024-06-07 16:34:47.707997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:43144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:26:31.936 [2024-06-07 16:34:47.708004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.936 [2024-06-07 16:34:47.708014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:43152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.936 [2024-06-07 16:34:47.708021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.936 [2024-06-07 16:34:47.708030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:43160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.936 [2024-06-07 16:34:47.708036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.936 [2024-06-07 16:34:47.708045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:43168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.936 [2024-06-07 16:34:47.708052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.936 [2024-06-07 16:34:47.708061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:43176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.936 [2024-06-07 16:34:47.708068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.936 [2024-06-07 16:34:47.708077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:43184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.936 [2024-06-07 16:34:47.708084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.936 [2024-06-07 16:34:47.708095] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:43192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.936 [2024-06-07 16:34:47.708102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.936 [2024-06-07 16:34:47.708111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:43200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.936 [2024-06-07 16:34:47.708118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.936 [2024-06-07 16:34:47.708127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:43208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.936 [2024-06-07 16:34:47.708134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.936 [2024-06-07 16:34:47.708142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:43216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.936 [2024-06-07 16:34:47.708149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.936 [2024-06-07 16:34:47.708159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:43224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.936 [2024-06-07 16:34:47.708166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.936 [2024-06-07 16:34:47.708175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:43232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.936 [2024-06-07 16:34:47.708182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.936 [2024-06-07 16:34:47.708192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:43240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.936 [2024-06-07 16:34:47.708199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.936 [2024-06-07 16:34:47.708208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:43248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.936 [2024-06-07 16:34:47.708215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.936 [2024-06-07 16:34:47.708224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:43256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.936 [2024-06-07 16:34:47.708231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.936 [2024-06-07 16:34:47.708240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:43264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.936 [2024-06-07 16:34:47.708247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.936 [2024-06-07 16:34:47.708256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:43272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.936 [2024-06-07 16:34:47.708263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.936 [2024-06-07 16:34:47.708272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:43280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.936 
[2024-06-07 16:34:47.708279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.936 [2024-06-07 16:34:47.708288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:43288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.936 [2024-06-07 16:34:47.708296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.936 [2024-06-07 16:34:47.708305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:43296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.937 [2024-06-07 16:34:47.708312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.937 [2024-06-07 16:34:47.708321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:43304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.937 [2024-06-07 16:34:47.708328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.937 [2024-06-07 16:34:47.708337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:43312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.937 [2024-06-07 16:34:47.708344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.937 [2024-06-07 16:34:47.708353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:43320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.937 [2024-06-07 16:34:47.708360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.937 [2024-06-07 16:34:47.708369] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:43328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.937 [2024-06-07 16:34:47.708376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.937 [2024-06-07 16:34:47.708385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:43336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.937 [2024-06-07 16:34:47.708392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.937 [2024-06-07 16:34:47.708401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:43344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.937 [2024-06-07 16:34:47.708413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.937 [2024-06-07 16:34:47.708422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:43352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.937 [2024-06-07 16:34:47.708429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.937 [2024-06-07 16:34:47.708438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:43360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.937 [2024-06-07 16:34:47.708445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.937 [2024-06-07 16:34:47.708454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:43368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.937 [2024-06-07 16:34:47.708461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.937 [2024-06-07 16:34:47.708470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:43376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.937 [2024-06-07 16:34:47.708477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.937 [2024-06-07 16:34:47.708486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:43384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.937 [2024-06-07 16:34:47.708493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.937 [2024-06-07 16:34:47.708503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:43392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.937 [2024-06-07 16:34:47.708511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.937 [2024-06-07 16:34:47.708519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:43400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.937 [2024-06-07 16:34:47.708526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.937 [2024-06-07 16:34:47.708535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:43408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.937 [2024-06-07 16:34:47.708542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.937 [2024-06-07 16:34:47.708551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:43416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:31.937 [2024-06-07 16:34:47.708558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.937 [2024-06-07 16:34:47.708567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:43424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.937 [2024-06-07 16:34:47.708574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.937 [2024-06-07 16:34:47.708583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:43432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.937 [2024-06-07 16:34:47.708589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.937 [2024-06-07 16:34:47.708598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:43440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.937 [2024-06-07 16:34:47.708605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.937 [2024-06-07 16:34:47.708614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:43448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.937 [2024-06-07 16:34:47.708621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.937 [2024-06-07 16:34:47.708630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:43456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.937 [2024-06-07 16:34:47.708637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.937 [2024-06-07 16:34:47.708646] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:43464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.937 [2024-06-07 16:34:47.708652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.937 [2024-06-07 16:34:47.708661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:43472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.937 [2024-06-07 16:34:47.708668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.937 [2024-06-07 16:34:47.708677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:43480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.937 [2024-06-07 16:34:47.708684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.937 [2024-06-07 16:34:47.708693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:43488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.937 [2024-06-07 16:34:47.708701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.937 [2024-06-07 16:34:47.708711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:43496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.937 [2024-06-07 16:34:47.708718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.937 [2024-06-07 16:34:47.708727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:43504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.938 [2024-06-07 16:34:47.708734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.938 [2024-06-07 16:34:47.708743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:43512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.938 [2024-06-07 16:34:47.708754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.938 [2024-06-07 16:34:47.708763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:43520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.938 [2024-06-07 16:34:47.708770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.938 [2024-06-07 16:34:47.708779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:43528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.938 [2024-06-07 16:34:47.708786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.938 [2024-06-07 16:34:47.708795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:43536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.938 [2024-06-07 16:34:47.708801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.938 [2024-06-07 16:34:47.708810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:43544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.938 [2024-06-07 16:34:47.708817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.938 [2024-06-07 16:34:47.708826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:43552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:31.938 [2024-06-07 16:34:47.708833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.938 [2024-06-07 16:34:47.708842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:43560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.938 [2024-06-07 16:34:47.708849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.938 [2024-06-07 16:34:47.708858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:43568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.938 [2024-06-07 16:34:47.708865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.938 [2024-06-07 16:34:47.708874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:43576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.938 [2024-06-07 16:34:47.708880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.938 [2024-06-07 16:34:47.708890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:43584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.938 [2024-06-07 16:34:47.708896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.938 [2024-06-07 16:34:47.708905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:43592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.938 [2024-06-07 16:34:47.708914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.938 [2024-06-07 16:34:47.708922] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:43600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.938 [2024-06-07 16:34:47.708930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.938 [2024-06-07 16:34:47.708939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:43608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.938 [2024-06-07 16:34:47.708946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.938 [2024-06-07 16:34:47.708955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:43616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.938 [2024-06-07 16:34:47.708961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.938 [2024-06-07 16:34:47.708970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:43624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.938 [2024-06-07 16:34:47.708977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.938 [2024-06-07 16:34:47.708986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:43632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.938 [2024-06-07 16:34:47.708993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.938 [2024-06-07 16:34:47.709002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:43640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.938 [2024-06-07 16:34:47.709009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.938 [2024-06-07 16:34:47.709017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:43648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.938 [2024-06-07 16:34:47.709024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.938 [2024-06-07 16:34:47.709033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:43656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.938 [2024-06-07 16:34:47.709040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.938 [2024-06-07 16:34:47.709049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:43664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.938 [2024-06-07 16:34:47.709056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.938 [2024-06-07 16:34:47.709065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:43672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.938 [2024-06-07 16:34:47.709071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.938 [2024-06-07 16:34:47.709080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:43680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.938 [2024-06-07 16:34:47.709087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.938 [2024-06-07 16:34:47.709096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:43688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.938 
[2024-06-07 16:34:47.709103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.938 [2024-06-07 16:34:47.709113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:43696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.938 [2024-06-07 16:34:47.709120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.938 [2024-06-07 16:34:47.709129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:43704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.938 [2024-06-07 16:34:47.709136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.938 [2024-06-07 16:34:47.709145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:43712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.938 [2024-06-07 16:34:47.709152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.939 [2024-06-07 16:34:47.709161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:43720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.939 [2024-06-07 16:34:47.709167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.939 [2024-06-07 16:34:47.709176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:43728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.939 [2024-06-07 16:34:47.709184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.939 [2024-06-07 16:34:47.709193] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:43736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.939 [2024-06-07 16:34:47.709200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.939 [2024-06-07 16:34:47.709208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:43744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.939 [2024-06-07 16:34:47.709215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.939 [2024-06-07 16:34:47.709225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:43752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.939 [2024-06-07 16:34:47.709232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.939 [2024-06-07 16:34:47.709240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:43760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.939 [2024-06-07 16:34:47.709247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.939 [2024-06-07 16:34:47.709256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:43768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.939 [2024-06-07 16:34:47.709265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.939 [2024-06-07 16:34:47.709274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:43776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.939 [2024-06-07 16:34:47.709281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.939 [2024-06-07 16:34:47.709290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:43784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.939 [2024-06-07 16:34:47.709297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.939 [2024-06-07 16:34:47.709306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:43792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.939 [2024-06-07 16:34:47.709314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.939 [2024-06-07 16:34:47.709323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:43800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.939 [2024-06-07 16:34:47.709330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.939 [2024-06-07 16:34:47.709339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:43808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.939 [2024-06-07 16:34:47.709345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.939 [2024-06-07 16:34:47.709354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:43816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.939 [2024-06-07 16:34:47.709361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.939 [2024-06-07 16:34:47.709370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:43824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:31.939 [2024-06-07 16:34:47.709377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.939 [2024-06-07 16:34:47.709386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:43832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.939 [2024-06-07 16:34:47.709393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.939 [2024-06-07 16:34:47.709404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:43840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.939 [2024-06-07 16:34:47.709412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.939 [2024-06-07 16:34:47.709420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:43848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.939 [2024-06-07 16:34:47.709427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.939 [2024-06-07 16:34:47.709437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:43856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.939 [2024-06-07 16:34:47.709444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.939 [2024-06-07 16:34:47.709453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:43864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.939 [2024-06-07 16:34:47.709459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.939 [2024-06-07 16:34:47.709468] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:43872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.939 [2024-06-07 16:34:47.709476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.939 [2024-06-07 16:34:47.709484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:43880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.939 [2024-06-07 16:34:47.709491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.939 [2024-06-07 16:34:47.709500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:43888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.939 [2024-06-07 16:34:47.709507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.939 [2024-06-07 16:34:47.709515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:43896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.939 [2024-06-07 16:34:47.709524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.939 [2024-06-07 16:34:47.709533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:43904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.939 [2024-06-07 16:34:47.709540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.939 [2024-06-07 16:34:47.709548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:43912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.939 [2024-06-07 16:34:47.709555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.939 [2024-06-07 16:34:47.709564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:43920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.939 [2024-06-07 16:34:47.709570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.939 [2024-06-07 16:34:47.709579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:43928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.940 [2024-06-07 16:34:47.709586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.940 [2024-06-07 16:34:47.709595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:43936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.940 [2024-06-07 16:34:47.709602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.940 [2024-06-07 16:34:47.709610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:43944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.940 [2024-06-07 16:34:47.709617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.940 [2024-06-07 16:34:47.709626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:43952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.940 [2024-06-07 16:34:47.709633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.940 [2024-06-07 16:34:47.709642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:43960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.940 
[2024-06-07 16:34:47.709648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.940 [2024-06-07 16:34:47.709657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:43968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.940 [2024-06-07 16:34:47.709664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.940 [2024-06-07 16:34:47.709673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:43976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.940 [2024-06-07 16:34:47.709680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.940 [2024-06-07 16:34:47.709688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:43984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.940 [2024-06-07 16:34:47.709695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.940 [2024-06-07 16:34:47.709714] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:31.940 [2024-06-07 16:34:47.709721] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:31.940 [2024-06-07 16:34:47.709729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43992 len:8 PRP1 0x0 PRP2 0x0 00:26:31.940 [2024-06-07 16:34:47.709736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.940 [2024-06-07 16:34:47.709774] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x12d3e80 was disconnected and freed. reset controller. 
00:26:31.940 [2024-06-07 16:34:47.709784] bdev_nvme.c:1867:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:26:31.940 [2024-06-07 16:34:47.709803] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:31.940 [2024-06-07 16:34:47.709811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.940 [2024-06-07 16:34:47.709819] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:31.940 [2024-06-07 16:34:47.709827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.940 [2024-06-07 16:34:47.709835] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:31.940 [2024-06-07 16:34:47.709842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.940 [2024-06-07 16:34:47.709850] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:31.940 [2024-06-07 16:34:47.709857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.940 [2024-06-07 16:34:47.709864] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:26:31.940 [2024-06-07 16:34:47.709887] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12b3140 (9): Bad file descriptor 00:26:31.940 [2024-06-07 16:34:47.713511] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:31.940 [2024-06-07 16:34:47.763603] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:26:31.940 [2024-06-07 16:34:52.053539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:56768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.940 [2024-06-07 16:34:52.053577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.940 [2024-06-07 16:34:52.053593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:56776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.940 [2024-06-07 16:34:52.053601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.940 [2024-06-07 16:34:52.053611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:56784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.940 [2024-06-07 16:34:52.053618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.941 [2024-06-07 16:34:52.053628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:56792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.941 [2024-06-07 16:34:52.053635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.941 [2024-06-07 16:34:52.053644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:56800 len:8 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:26:31.941 [2024-06-07 16:34:52.053651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.941 [2024-06-07 16:34:52.053660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:56808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.941 [2024-06-07 16:34:52.053672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.941 [2024-06-07 16:34:52.053681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:56816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.941 [2024-06-07 16:34:52.053688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.941 [2024-06-07 16:34:52.053697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:56824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.941 [2024-06-07 16:34:52.053704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.941 [2024-06-07 16:34:52.053713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:56832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.941 [2024-06-07 16:34:52.053720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.941 [2024-06-07 16:34:52.053729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:56840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.941 [2024-06-07 16:34:52.053736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.941 [2024-06-07 16:34:52.053745] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:56848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.941 [2024-06-07 16:34:52.053752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.941 [2024-06-07 16:34:52.053761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:56856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.941 [2024-06-07 16:34:52.053768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.941 [2024-06-07 16:34:52.053777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:56864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.941 [2024-06-07 16:34:52.053784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.941 [2024-06-07 16:34:52.053793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:56872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.941 [2024-06-07 16:34:52.053800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.941 [2024-06-07 16:34:52.053809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:56880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.941 [2024-06-07 16:34:52.053816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.941 [2024-06-07 16:34:52.053825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:56888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.941 [2024-06-07 16:34:52.053832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.941 [2024-06-07 16:34:52.053841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:56896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.941 [2024-06-07 16:34:52.053848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.941 [2024-06-07 16:34:52.053858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:56904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.941 [2024-06-07 16:34:52.053864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.941 [2024-06-07 16:34:52.053875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:56912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.941 [2024-06-07 16:34:52.053882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.941 [2024-06-07 16:34:52.053892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:56920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.941 [2024-06-07 16:34:52.053899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.941 [2024-06-07 16:34:52.053908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:56928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.941 [2024-06-07 16:34:52.053915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.941 [2024-06-07 16:34:52.053924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:56936 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:26:31.941 [2024-06-07 16:34:52.053931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.941 [2024-06-07 16:34:52.053940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:56944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.941 [2024-06-07 16:34:52.053946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.941 [2024-06-07 16:34:52.053955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:56952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.941 [2024-06-07 16:34:52.053962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.941 [2024-06-07 16:34:52.053971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:56960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.941 [2024-06-07 16:34:52.053978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.941 [2024-06-07 16:34:52.053987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:56968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.941 [2024-06-07 16:34:52.053994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.941 [2024-06-07 16:34:52.054003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:56976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.941 [2024-06-07 16:34:52.054010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.941 [2024-06-07 16:34:52.054019] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:56984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.941 [2024-06-07 16:34:52.054026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.941 [2024-06-07 16:34:52.054035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:56992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.941 [2024-06-07 16:34:52.054042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.942 [2024-06-07 16:34:52.054051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:57000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.942 [2024-06-07 16:34:52.054058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.942 [2024-06-07 16:34:52.054067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:57008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.942 [2024-06-07 16:34:52.054076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.942 [2024-06-07 16:34:52.054085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:57016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.942 [2024-06-07 16:34:52.054091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.942 [2024-06-07 16:34:52.054100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:57024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.942 [2024-06-07 16:34:52.054109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.942 [2024-06-07 16:34:52.054118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:57032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.942 [2024-06-07 16:34:52.054125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.942 [2024-06-07 16:34:52.054134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:57040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.942 [2024-06-07 16:34:52.054141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.942 [2024-06-07 16:34:52.054150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:57048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.942 [2024-06-07 16:34:52.054157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.942 [2024-06-07 16:34:52.054166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:57056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.942 [2024-06-07 16:34:52.054173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.942 [2024-06-07 16:34:52.054182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:57064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.942 [2024-06-07 16:34:52.054189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.942 [2024-06-07 16:34:52.054198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:57072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:31.942 [2024-06-07 16:34:52.054204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.942 [2024-06-07 16:34:52.054214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:57080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.942 [2024-06-07 16:34:52.054221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.942 [2024-06-07 16:34:52.054230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:57088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.942 [2024-06-07 16:34:52.054236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.942 [2024-06-07 16:34:52.054245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:57096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.942 [2024-06-07 16:34:52.054253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.942 [2024-06-07 16:34:52.054261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:57104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.942 [2024-06-07 16:34:52.054268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.942 [2024-06-07 16:34:52.054279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:57112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.942 [2024-06-07 16:34:52.054286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.942 [2024-06-07 16:34:52.054295] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:57120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.942 [2024-06-07 16:34:52.054301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.942 [2024-06-07 16:34:52.054311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:57128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.942 [2024-06-07 16:34:52.054318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.942 [2024-06-07 16:34:52.054327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:57136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.942 [2024-06-07 16:34:52.054333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.942 [2024-06-07 16:34:52.054342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:57144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.942 [2024-06-07 16:34:52.054349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.942 [2024-06-07 16:34:52.054358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:57152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.942 [2024-06-07 16:34:52.054365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.942 [2024-06-07 16:34:52.054374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:57160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.942 [2024-06-07 16:34:52.054381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.942 [2024-06-07 16:34:52.054390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:57168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.942 [2024-06-07 16:34:52.054399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.942 [2024-06-07 16:34:52.054415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:57176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.942 [2024-06-07 16:34:52.054423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.942 [2024-06-07 16:34:52.054434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:57184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.942 [2024-06-07 16:34:52.054442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.942 [2024-06-07 16:34:52.054451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:57192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.942 [2024-06-07 16:34:52.054459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.942 [2024-06-07 16:34:52.054469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:57200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.942 [2024-06-07 16:34:52.054475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.942 [2024-06-07 16:34:52.054484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:57208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.942 
[2024-06-07 16:34:52.054491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.943 [2024-06-07 16:34:52.054505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:57216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.943 [2024-06-07 16:34:52.054512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.943 [2024-06-07 16:34:52.054521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:57224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.943 [2024-06-07 16:34:52.054527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.943 [2024-06-07 16:34:52.054537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:57232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.943 [2024-06-07 16:34:52.054543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.943 [2024-06-07 16:34:52.054552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:57240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.943 [2024-06-07 16:34:52.054559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.943 [2024-06-07 16:34:52.054568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:57248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.943 [2024-06-07 16:34:52.054575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.943 [2024-06-07 16:34:52.054584] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:57256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.943 [2024-06-07 16:34:52.054591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.943 [2024-06-07 16:34:52.054600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:57264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.943 [2024-06-07 16:34:52.054607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.943 [2024-06-07 16:34:52.054617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:57272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.943 [2024-06-07 16:34:52.054624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.943 [2024-06-07 16:34:52.054633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:57280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.943 [2024-06-07 16:34:52.054640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.943 [2024-06-07 16:34:52.054649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:57288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.943 [2024-06-07 16:34:52.054656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.943 [2024-06-07 16:34:52.054665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:57296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.943 [2024-06-07 16:34:52.054672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.943 [2024-06-07 16:34:52.054682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:57304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.943 [2024-06-07 16:34:52.054688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.943 [2024-06-07 16:34:52.054697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:57312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.943 [2024-06-07 16:34:52.054706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.943 [2024-06-07 16:34:52.054715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:57320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.943 [2024-06-07 16:34:52.054722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.943 [2024-06-07 16:34:52.054731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:57328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.943 [2024-06-07 16:34:52.054738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.943 [2024-06-07 16:34:52.054747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:57336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.943 [2024-06-07 16:34:52.054754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.943 [2024-06-07 16:34:52.054763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:57344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.943 
[2024-06-07 16:34:52.054770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.943 [2024-06-07 16:34:52.054779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:57352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.943 [2024-06-07 16:34:52.054786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.943 [2024-06-07 16:34:52.054795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:57360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.943 [2024-06-07 16:34:52.054802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.943 [2024-06-07 16:34:52.054811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:57368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.944 [2024-06-07 16:34:52.054818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.944 [2024-06-07 16:34:52.054827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:57376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.944 [2024-06-07 16:34:52.054834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.944 [2024-06-07 16:34:52.054843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:57384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.944 [2024-06-07 16:34:52.054851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.944 [2024-06-07 16:34:52.054860] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:57392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.944 [2024-06-07 16:34:52.054867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.944 [2024-06-07 16:34:52.054876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:57400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.944 [2024-06-07 16:34:52.054883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.944 [2024-06-07 16:34:52.054892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:57408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.944 [2024-06-07 16:34:52.054898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.944 [2024-06-07 16:34:52.054909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:57416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.944 [2024-06-07 16:34:52.054916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.944 [2024-06-07 16:34:52.054925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:57424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.944 [2024-06-07 16:34:52.054932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.944 [2024-06-07 16:34:52.054941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:57432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.944 [2024-06-07 16:34:52.054947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.944 [2024-06-07 16:34:52.054957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:57440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.944 [2024-06-07 16:34:52.054964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.944 [2024-06-07 16:34:52.054973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:57448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.944 [2024-06-07 16:34:52.054980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.944 [2024-06-07 16:34:52.054989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:57456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.944 [2024-06-07 16:34:52.054996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.944 [2024-06-07 16:34:52.055005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:57464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.944 [2024-06-07 16:34:52.055012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.944 [2024-06-07 16:34:52.055021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:57472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.944 [2024-06-07 16:34:52.055028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.944 [2024-06-07 16:34:52.055037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:57480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.944 
[2024-06-07 16:34:52.055044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.944 [2024-06-07 16:34:52.055052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:57488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.944 [2024-06-07 16:34:52.055060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.944 [2024-06-07 16:34:52.055069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:57496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.944 [2024-06-07 16:34:52.055075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.944 [2024-06-07 16:34:52.055084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:57504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.944 [2024-06-07 16:34:52.055091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.944 [2024-06-07 16:34:52.055100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:57512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.944 [2024-06-07 16:34:52.055108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.944 [2024-06-07 16:34:52.055117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:57520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.944 [2024-06-07 16:34:52.055125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.944 [2024-06-07 16:34:52.055134] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:57528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.944 [2024-06-07 16:34:52.055141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.944 [2024-06-07 16:34:52.055149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:57536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.944 [2024-06-07 16:34:52.055157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.944 [2024-06-07 16:34:52.055166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:57544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.944 [2024-06-07 16:34:52.055174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.944 [2024-06-07 16:34:52.055185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:57552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.944 [2024-06-07 16:34:52.055192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.944 [2024-06-07 16:34:52.055201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:57560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.944 [2024-06-07 16:34:52.055208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.944 [2024-06-07 16:34:52.055217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:57568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.944 [2024-06-07 16:34:52.055224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.944 [2024-06-07 16:34:52.055233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:57576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.944 [2024-06-07 16:34:52.055240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.945 [2024-06-07 16:34:52.055249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:57600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.945 [2024-06-07 16:34:52.055256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.945 [2024-06-07 16:34:52.055265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:57608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.945 [2024-06-07 16:34:52.055272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.945 [2024-06-07 16:34:52.055284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:57616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.945 [2024-06-07 16:34:52.055291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.945 [2024-06-07 16:34:52.055300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:57624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.945 [2024-06-07 16:34:52.055307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.945 [2024-06-07 16:34:52.055318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:57632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.945 
[2024-06-07 16:34:52.055325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.945 [2024-06-07 16:34:52.055334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:57640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.945 [2024-06-07 16:34:52.055341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.945 [2024-06-07 16:34:52.055350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:57648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.945 [2024-06-07 16:34:52.055357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.945 [2024-06-07 16:34:52.055366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:57584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.945 [2024-06-07 16:34:52.055373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.945 [2024-06-07 16:34:52.055382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:57592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.945 [2024-06-07 16:34:52.055389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.945 [2024-06-07 16:34:52.055398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:57656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.945 [2024-06-07 16:34:52.055409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.945 [2024-06-07 16:34:52.055418] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:57664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.945 [2024-06-07 16:34:52.055425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.945 [2024-06-07 16:34:52.055434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:57672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.945 [2024-06-07 16:34:52.055441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.945 [2024-06-07 16:34:52.055450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:57680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.945 [2024-06-07 16:34:52.055457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.945 [2024-06-07 16:34:52.055467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:57688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.945 [2024-06-07 16:34:52.055474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.945 [2024-06-07 16:34:52.055483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:57696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.945 [2024-06-07 16:34:52.055489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.945 [2024-06-07 16:34:52.055498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:57704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.945 [2024-06-07 16:34:52.055506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:26:31.945 [2024-06-07 16:34:52.055515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:57712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.945 [2024-06-07 16:34:52.055522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.945 [2024-06-07 16:34:52.055533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:57720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.945 [2024-06-07 16:34:52.055540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.945 [2024-06-07 16:34:52.055549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:57728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.945 [2024-06-07 16:34:52.055556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.945 [2024-06-07 16:34:52.055565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:57736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.945 [2024-06-07 16:34:52.055572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.945 [2024-06-07 16:34:52.055581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:57744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.945 [2024-06-07 16:34:52.055588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.945 [2024-06-07 16:34:52.055598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:57752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.945 [2024-06-07 16:34:52.055605] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.945 [2024-06-07 16:34:52.055614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:57760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.945 [2024-06-07 16:34:52.055622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.945 [2024-06-07 16:34:52.055631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:57768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.945 [2024-06-07 16:34:52.055638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.945 [2024-06-07 16:34:52.055647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:57776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.945 [2024-06-07 16:34:52.055654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.945 [2024-06-07 16:34:52.055676] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:31.945 [2024-06-07 16:34:52.055683] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:31.945 [2024-06-07 16:34:52.055689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:57784 len:8 PRP1 0x0 PRP2 0x0 00:26:31.946 [2024-06-07 16:34:52.055697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.946 [2024-06-07 16:34:52.055733] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x147c630 was disconnected and freed. reset controller. 
00:26:31.946 [2024-06-07 16:34:52.055742] bdev_nvme.c:1867:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:26:31.946 [2024-06-07 16:34:52.055762] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:31.946 [2024-06-07 16:34:52.055770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.946 [2024-06-07 16:34:52.055778] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:31.946 [2024-06-07 16:34:52.055785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.946 [2024-06-07 16:34:52.055795] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:31.946 [2024-06-07 16:34:52.055803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.946 [2024-06-07 16:34:52.055811] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:31.946 [2024-06-07 16:34:52.055817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.946 [2024-06-07 16:34:52.055825] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:26:31.946 [2024-06-07 16:34:52.059418] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:31.946 [2024-06-07 16:34:52.059442] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12b3140 (9): Bad file descriptor
00:26:31.946 [2024-06-07 16:34:52.112021] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:26:31.946
00:26:31.946 Latency(us)
00:26:31.946 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:31.946 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:26:31.946 Verification LBA range: start 0x0 length 0x4000
00:26:31.946 NVMe0n1 : 15.01 11434.19 44.66 249.28 0.00 10927.71 969.39 15728.64
00:26:31.946 ===================================================================================================================
00:26:31.946 Total : 11434.19 44.66 249.28 0.00 10927.71 969.39 15728.64
00:26:31.946 Received shutdown signal, test time was about 15.000000 seconds
00:26:31.946
00:26:31.946 Latency(us)
00:26:31.946 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:31.946 ===================================================================================================================
00:26:31.946 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:26:31.946 16:34:58 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:26:31.946 16:34:58 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # count=3
00:26:31.946 16:34:58 nvmf_tcp.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 ))
00:26:31.946 16:34:58 nvmf_tcp.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=3228257
00:26:31.946 16:34:58 nvmf_tcp.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 3228257 /var/tmp/bdevperf.sock
00:26:31.946 16:34:58 nvmf_tcp.nvmf_failover -- host/failover.sh@72 -- #
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:26:31.946 16:34:58 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@830 -- # '[' -z 3228257 ']' 00:26:31.946 16:34:58 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:31.946 16:34:58 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@835 -- # local max_retries=100 00:26:31.946 16:34:58 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:31.946 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:26:31.946 16:34:58 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@839 -- # xtrace_disable 00:26:31.946 16:34:58 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:26:32.267 16:34:59 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:26:32.267 16:34:59 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@863 -- # return 0 00:26:32.267 16:34:59 nvmf_tcp.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:26:32.527 [2024-06-07 16:34:59.253723] tcp.c: 982:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:26:32.527 16:34:59 nvmf_tcp.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:26:32.787 [2024-06-07 16:34:59.422142] tcp.c: 982:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:26:32.787 16:34:59 nvmf_tcp.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:26:33.048 NVMe0n1 00:26:33.048 16:34:59 nvmf_tcp.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:26:33.307 00:26:33.307 16:35:00 nvmf_tcp.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:26:33.877 00:26:33.877 16:35:00 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:26:33.877 16:35:00 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:26:33.877 16:35:00 nvmf_tcp.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:26:34.138 16:35:00 nvmf_tcp.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:26:37.441 16:35:03 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:26:37.441 16:35:03 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:26:37.441 16:35:04 nvmf_tcp.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:26:37.441 16:35:04 nvmf_tcp.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=3229284 00:26:37.441 16:35:04 nvmf_tcp.nvmf_failover -- host/failover.sh@92 -- # wait 3229284 00:26:38.381 0 00:26:38.381 16:35:05 
nvmf_tcp.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:26:38.381 [2024-06-07 16:34:58.333162] Starting SPDK v24.09-pre git sha1 5a57befde / DPDK 24.03.0 initialization... 00:26:38.381 [2024-06-07 16:34:58.333219] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3228257 ] 00:26:38.381 EAL: No free 2048 kB hugepages reported on node 1 00:26:38.381 [2024-06-07 16:34:58.391849] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:38.381 [2024-06-07 16:34:58.453296] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:26:38.381 [2024-06-07 16:35:00.818194] bdev_nvme.c:1867:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:26:38.381 [2024-06-07 16:35:00.818241] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:38.381 [2024-06-07 16:35:00.818252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:38.381 [2024-06-07 16:35:00.818262] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:38.381 [2024-06-07 16:35:00.818269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:38.381 [2024-06-07 16:35:00.818277] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:38.381 [2024-06-07 16:35:00.818284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:38.381 
[2024-06-07 16:35:00.818292] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:38.381 [2024-06-07 16:35:00.818299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:38.381 [2024-06-07 16:35:00.818306] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:38.381 [2024-06-07 16:35:00.818333] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:38.381 [2024-06-07 16:35:00.818347] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1242140 (9): Bad file descriptor
00:26:38.381 [2024-06-07 16:35:00.866766] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:26:38.381 Running I/O for 1 seconds...
00:26:38.381
00:26:38.381 Latency(us)
00:26:38.381 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:38.381 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:26:38.381 Verification LBA range: start 0x0 length 0x4000
00:26:38.381 NVMe0n1 : 1.01 11597.06 45.30 0.00 0.00 10962.42 1235.63 11359.57
00:26:38.381 ===================================================================================================================
00:26:38.381 Total : 11597.06 45.30 0.00 0.00 10962.42 1235.63 11359.57
00:26:38.381 16:35:05 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:26:38.381 16:35:05 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0
00:26:38.641 16:35:05 nvmf_tcp.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422
-f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:26:38.641 16:35:05 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:26:38.641 16:35:05 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:26:38.901 16:35:05 nvmf_tcp.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:26:39.161 16:35:05 nvmf_tcp.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:26:42.454 16:35:08 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:26:42.454 16:35:08 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:26:42.454 16:35:08 nvmf_tcp.nvmf_failover -- host/failover.sh@108 -- # killprocess 3228257 00:26:42.454 16:35:08 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@949 -- # '[' -z 3228257 ']' 00:26:42.454 16:35:08 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # kill -0 3228257 00:26:42.454 16:35:08 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # uname 00:26:42.454 16:35:08 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:26:42.454 16:35:08 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3228257 00:26:42.454 16:35:09 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:26:42.455 16:35:09 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:26:42.455 16:35:09 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3228257' 00:26:42.455 killing process with pid 3228257 00:26:42.455 16:35:09 nvmf_tcp.nvmf_failover -- 
common/autotest_common.sh@968 -- # kill 3228257 00:26:42.455 16:35:09 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@973 -- # wait 3228257 00:26:42.455 16:35:09 nvmf_tcp.nvmf_failover -- host/failover.sh@110 -- # sync 00:26:42.455 16:35:09 nvmf_tcp.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:42.715 16:35:09 nvmf_tcp.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:26:42.715 16:35:09 nvmf_tcp.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:26:42.715 16:35:09 nvmf_tcp.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:26:42.715 16:35:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:42.715 16:35:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@117 -- # sync 00:26:42.715 16:35:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:42.715 16:35:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@120 -- # set +e 00:26:42.715 16:35:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:42.715 16:35:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:42.715 rmmod nvme_tcp 00:26:42.715 rmmod nvme_fabrics 00:26:42.715 rmmod nvme_keyring 00:26:42.715 16:35:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:42.715 16:35:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@124 -- # set -e 00:26:42.715 16:35:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@125 -- # return 0 00:26:42.715 16:35:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 3224547 ']' 00:26:42.715 16:35:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 3224547 00:26:42.715 16:35:09 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@949 -- # '[' -z 3224547 ']' 00:26:42.715 16:35:09 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # kill -0 
3224547 00:26:42.715 16:35:09 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # uname 00:26:42.715 16:35:09 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:26:42.715 16:35:09 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3224547 00:26:42.715 16:35:09 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:26:42.715 16:35:09 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:26:42.715 16:35:09 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3224547' 00:26:42.715 killing process with pid 3224547 00:26:42.715 16:35:09 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@968 -- # kill 3224547 00:26:42.715 16:35:09 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@973 -- # wait 3224547 00:26:42.975 16:35:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:42.975 16:35:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:42.975 16:35:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:42.975 16:35:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:42.975 16:35:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:42.975 16:35:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:42.975 16:35:09 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:42.975 16:35:09 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:44.891 16:35:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:44.891 00:26:44.891 real 0m39.415s 00:26:44.891 user 2m1.914s 00:26:44.891 sys 0m8.081s 00:26:44.891 16:35:11 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1125 -- # xtrace_disable 00:26:44.891 
16:35:11 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:26:44.891 ************************************ 00:26:44.891 END TEST nvmf_failover 00:26:44.891 ************************************ 00:26:44.891 16:35:11 nvmf_tcp -- nvmf/nvmf.sh@102 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:26:44.891 16:35:11 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:26:44.891 16:35:11 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:26:44.891 16:35:11 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:44.891 ************************************ 00:26:44.891 START TEST nvmf_host_discovery 00:26:44.891 ************************************ 00:26:44.891 16:35:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:26:45.153 * Looking for test storage... 
00:26:45.153 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:45.153 16:35:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:45.153 16:35:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:26:45.153 16:35:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:45.153 16:35:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:45.153 16:35:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:45.153 16:35:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:45.153 16:35:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:45.153 16:35:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:45.153 16:35:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:45.153 16:35:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:45.153 16:35:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:45.153 16:35:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:45.153 16:35:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:45.153 16:35:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:45.153 16:35:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:45.153 16:35:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:45.153 16:35:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:45.153 16:35:11 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:45.153 16:35:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:45.153 16:35:11 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:45.153 16:35:11 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:45.153 16:35:11 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:45.153 16:35:11 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:45.153 16:35:11 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:45.153 16:35:11 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:45.153 16:35:11 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:26:45.153 16:35:11 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:45.153 16:35:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0 00:26:45.153 16:35:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:45.153 16:35:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:45.153 16:35:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:45.153 16:35:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:45.153 16:35:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:45.153 16:35:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:45.153 16:35:11 nvmf_tcp.nvmf_host_discovery -- 
nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:45.153 16:35:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:45.153 16:35:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:26:45.153 16:35:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:26:45.153 16:35:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:26:45.153 16:35:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:26:45.153 16:35:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:26:45.153 16:35:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:26:45.153 16:35:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:26:45.153 16:35:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:45.153 16:35:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:45.153 16:35:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:45.153 16:35:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:45.153 16:35:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:45.153 16:35:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:45.153 16:35:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:45.153 16:35:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:45.153 16:35:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:45.153 16:35:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:45.153 16:35:11 nvmf_tcp.nvmf_host_discovery -- 
nvmf/common.sh@285 -- # xtrace_disable 00:26:45.153 16:35:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:51.747 16:35:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:51.747 16:35:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:26:51.747 16:35:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:51.747 16:35:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:51.747 16:35:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:51.747 16:35:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:51.747 16:35:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:51.747 16:35:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:26:51.747 16:35:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:51.747 16:35:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # e810=() 00:26:51.747 16:35:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:26:51.747 16:35:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # x722=() 00:26:51.747 16:35:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:26:51.747 16:35:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # mlx=() 00:26:51.747 16:35:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:26:51.747 16:35:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:51.747 16:35:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:51.747 16:35:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:51.747 16:35:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:51.747 16:35:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:51.747 16:35:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:51.747 16:35:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:51.747 16:35:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:51.747 16:35:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:51.747 16:35:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:51.747 16:35:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:51.747 16:35:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:51.747 16:35:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:51.747 16:35:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:51.747 16:35:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:51.747 16:35:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:51.747 16:35:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:51.747 16:35:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:51.747 16:35:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:26:51.747 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:26:51.747 16:35:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:51.747 16:35:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:51.747 16:35:18 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:51.747 16:35:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:51.747 16:35:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:51.747 16:35:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:51.747 16:35:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:26:51.747 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:26:51.747 16:35:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:51.747 16:35:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:51.747 16:35:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:51.747 16:35:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:51.747 16:35:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:51.747 16:35:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:51.747 16:35:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:51.747 16:35:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:51.747 16:35:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:51.747 16:35:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:51.747 16:35:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:51.747 16:35:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:51.747 16:35:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:51.747 16:35:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:51.747 
16:35:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:51.747 16:35:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:26:51.747 Found net devices under 0000:4b:00.0: cvl_0_0 00:26:51.747 16:35:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:51.747 16:35:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:51.747 16:35:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:51.747 16:35:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:51.747 16:35:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:51.747 16:35:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:51.747 16:35:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:51.747 16:35:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:51.747 16:35:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:26:51.747 Found net devices under 0000:4b:00.1: cvl_0_1 00:26:51.747 16:35:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:51.747 16:35:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:51.747 16:35:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:26:51.747 16:35:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:51.747 16:35:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:51.747 16:35:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:51.747 16:35:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@229 -- # 
NVMF_INITIATOR_IP=10.0.0.1 00:26:51.747 16:35:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:51.747 16:35:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:51.747 16:35:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:51.747 16:35:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:51.747 16:35:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:51.747 16:35:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:51.747 16:35:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:51.748 16:35:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:51.748 16:35:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:51.748 16:35:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:51.748 16:35:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:51.748 16:35:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:51.748 16:35:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:52.010 16:35:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:52.010 16:35:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:52.010 16:35:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:52.010 16:35:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:52.010 16:35:18 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:52.010 16:35:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:52.010 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:52.010 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.697 ms 00:26:52.010 00:26:52.010 --- 10.0.0.2 ping statistics --- 00:26:52.010 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:52.010 rtt min/avg/max/mdev = 0.697/0.697/0.697/0.000 ms 00:26:52.010 16:35:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:52.010 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:52.010 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.343 ms 00:26:52.010 00:26:52.010 --- 10.0.0.1 ping statistics --- 00:26:52.010 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:52.010 rtt min/avg/max/mdev = 0.343/0.343/0.343/0.000 ms 00:26:52.011 16:35:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:52.011 16:35:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@422 -- # return 0 00:26:52.011 16:35:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:52.011 16:35:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:52.011 16:35:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:52.011 16:35:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:52.011 16:35:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:52.011 16:35:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:52.011 16:35:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:52.011 16:35:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@30 -- # 
nvmfappstart -m 0x2 00:26:52.011 16:35:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:52.011 16:35:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@723 -- # xtrace_disable 00:26:52.011 16:35:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:52.011 16:35:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@481 -- # nvmfpid=3234484 00:26:52.011 16:35:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@482 -- # waitforlisten 3234484 00:26:52.011 16:35:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@830 -- # '[' -z 3234484 ']' 00:26:52.011 16:35:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:26:52.011 16:35:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:52.011 16:35:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@835 -- # local max_retries=100 00:26:52.011 16:35:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:52.011 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:52.011 16:35:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@839 -- # xtrace_disable 00:26:52.011 16:35:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:52.011 [2024-06-07 16:35:18.855586] Starting SPDK v24.09-pre git sha1 5a57befde / DPDK 24.03.0 initialization... 
00:26:52.011 [2024-06-07 16:35:18.855647] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:52.272 EAL: No free 2048 kB hugepages reported on node 1 00:26:52.272 [2024-06-07 16:35:18.944498] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:52.272 [2024-06-07 16:35:19.035320] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:52.272 [2024-06-07 16:35:19.035378] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:52.272 [2024-06-07 16:35:19.035386] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:52.273 [2024-06-07 16:35:19.035393] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:52.273 [2024-06-07 16:35:19.035400] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:26:52.273 [2024-06-07 16:35:19.035432] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:26:52.846 16:35:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:26:52.846 16:35:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@863 -- # return 0 00:26:52.846 16:35:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:52.846 16:35:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@729 -- # xtrace_disable 00:26:52.846 16:35:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:52.846 16:35:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:52.846 16:35:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:52.846 16:35:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:52.846 16:35:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:52.846 [2024-06-07 16:35:19.686607] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:52.846 16:35:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:52.846 16:35:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:26:52.846 16:35:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:52.846 16:35:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:52.846 [2024-06-07 16:35:19.698820] tcp.c: 982:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:26:53.107 16:35:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:53.107 16:35:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@35 -- 
# rpc_cmd bdev_null_create null0 1000 512 00:26:53.107 16:35:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:53.107 16:35:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:53.107 null0 00:26:53.107 16:35:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:53.107 16:35:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:26:53.107 16:35:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:53.107 16:35:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:53.107 null1 00:26:53.107 16:35:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:53.107 16:35:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:26:53.107 16:35:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:53.107 16:35:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:53.107 16:35:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:53.107 16:35:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=3234633 00:26:53.107 16:35:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 3234633 /tmp/host.sock 00:26:53.107 16:35:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:26:53.107 16:35:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@830 -- # '[' -z 3234633 ']' 00:26:53.107 16:35:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local rpc_addr=/tmp/host.sock 00:26:53.107 16:35:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@835 -- # local max_retries=100 00:26:53.107 16:35:19 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:26:53.107 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:26:53.107 16:35:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@839 -- # xtrace_disable 00:26:53.107 16:35:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:53.107 [2024-06-07 16:35:19.804949] Starting SPDK v24.09-pre git sha1 5a57befde / DPDK 24.03.0 initialization... 00:26:53.107 [2024-06-07 16:35:19.805014] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3234633 ] 00:26:53.107 EAL: No free 2048 kB hugepages reported on node 1 00:26:53.107 [2024-06-07 16:35:19.868441] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:53.107 [2024-06-07 16:35:19.942958] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:26:54.048 16:35:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:26:54.048 16:35:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@863 -- # return 0 00:26:54.048 16:35:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:54.048 16:35:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:26:54.048 16:35:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:54.048 16:35:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:54.048 16:35:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:54.048 16:35:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock 
bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:26:54.048 16:35:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:54.048 16:35:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:54.048 16:35:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:54.048 16:35:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:26:54.048 16:35:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:26:54.048 16:35:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:54.048 16:35:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:54.048 16:35:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:54.048 16:35:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:54.048 16:35:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:54.048 16:35:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:54.048 16:35:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:54.048 16:35:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:26:54.048 16:35:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:26:54.048 16:35:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:54.048 16:35:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:54.048 16:35:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:54.048 16:35:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:54.048 16:35:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:54.048 
16:35:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:54.048 16:35:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:54.048 16:35:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:26:54.048 16:35:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:26:54.048 16:35:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:54.048 16:35:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:54.048 16:35:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:54.048 16:35:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:26:54.048 16:35:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:54.048 16:35:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:54.048 16:35:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:54.048 16:35:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:54.048 16:35:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:54.049 16:35:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:54.049 16:35:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:54.049 16:35:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:26:54.049 16:35:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:26:54.049 16:35:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:54.049 16:35:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:54.049 16:35:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 
-- # xtrace_disable 00:26:54.049 16:35:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:54.049 16:35:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:54.049 16:35:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:54.049 16:35:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:54.049 16:35:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:26:54.049 16:35:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:26:54.049 16:35:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:54.049 16:35:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:54.049 16:35:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:54.049 16:35:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:26:54.049 16:35:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:54.049 16:35:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:54.049 16:35:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:54.049 16:35:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:54.049 16:35:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:54.049 16:35:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:54.049 16:35:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:54.049 16:35:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:26:54.049 16:35:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:26:54.049 16:35:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # 
rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:54.049 16:35:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:54.049 16:35:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:54.049 16:35:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:54.049 16:35:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:54.049 16:35:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:54.049 16:35:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:54.310 16:35:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:26:54.310 16:35:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:54.310 16:35:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:54.310 16:35:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:54.310 [2024-06-07 16:35:20.934058] tcp.c: 982:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:54.310 16:35:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:54.310 16:35:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:26:54.310 16:35:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:54.310 16:35:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:54.310 16:35:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:54.310 16:35:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:54.310 16:35:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:54.310 16:35:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 
-- # set +x 00:26:54.310 16:35:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:54.310 16:35:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:26:54.310 16:35:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:26:54.310 16:35:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:54.310 16:35:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:54.310 16:35:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:54.310 16:35:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:54.310 16:35:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:54.310 16:35:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:54.310 16:35:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:54.310 16:35:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:26:54.310 16:35:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:26:54.310 16:35:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:26:54.310 16:35:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:26:54.310 16:35:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:26:54.310 16:35:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10 00:26:54.310 16:35:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:26:54.310 16:35:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 
00:26:54.310 16:35:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_notification_count 00:26:54.310 16:35:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:26:54.310 16:35:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:26:54.310 16:35:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:54.310 16:35:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:54.310 16:35:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:54.310 16:35:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:26:54.310 16:35:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:26:54.310 16:35:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( notification_count == expected_count )) 00:26:54.310 16:35:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0 00:26:54.310 16:35:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:26:54.310 16:35:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:54.310 16:35:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:54.310 16:35:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:54.310 16:35:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:26:54.310 16:35:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:26:54.310 16:35:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10 00:26:54.310 16:35:21 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@915 -- # (( max-- )) 00:26:54.310 16:35:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:26:54.310 16:35:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_subsystem_names 00:26:54.311 16:35:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:54.311 16:35:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:54.311 16:35:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:54.311 16:35:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:54.311 16:35:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:54.311 16:35:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:54.311 16:35:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:54.311 16:35:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # [[ '' == \n\v\m\e\0 ]] 00:26:54.311 16:35:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@919 -- # sleep 1 00:26:54.883 [2024-06-07 16:35:21.626627] bdev_nvme.c:6978:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:26:54.883 [2024-06-07 16:35:21.626648] bdev_nvme.c:7058:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:26:54.883 [2024-06-07 16:35:21.626664] bdev_nvme.c:6941:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:55.144 [2024-06-07 16:35:21.755073] bdev_nvme.c:6907:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:26:55.144 [2024-06-07 16:35:21.981084] bdev_nvme.c:6797:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:26:55.144 [2024-06-07 16:35:21.981107] 
bdev_nvme.c:6756:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:26:55.404 16:35:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:26:55.404 16:35:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:26:55.404 16:35:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_subsystem_names 00:26:55.404 16:35:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:55.404 16:35:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:55.404 16:35:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:55.404 16:35:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:55.404 16:35:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:55.404 16:35:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:55.404 16:35:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:55.404 16:35:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:55.404 16:35:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0 00:26:55.404 16:35:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:26:55.404 16:35:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:26:55.404 16:35:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10 00:26:55.404 16:35:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:26:55.404 16:35:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval '[[' 
'"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:26:55.404 16:35:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_bdev_list 00:26:55.404 16:35:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:55.404 16:35:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:55.404 16:35:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:55.404 16:35:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:55.404 16:35:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:55.404 16:35:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:55.404 16:35:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:55.404 16:35:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:26:55.404 16:35:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0 00:26:55.405 16:35:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:26:55.405 16:35:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:26:55.405 16:35:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10 00:26:55.405 16:35:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:26:55.405 16:35:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:26:55.405 16:35:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_subsystem_paths nvme0 00:26:55.665 16:35:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:26:55.665 16:35:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 
-- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:26:55.665 16:35:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:26:55.665 16:35:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:55.665 16:35:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:55.665 16:35:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:26:55.665 16:35:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:55.665 16:35:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # [[ 4420 == \4\4\2\0 ]] 00:26:55.665 16:35:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0 00:26:55.665 16:35:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:26:55.665 16:35:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:26:55.665 16:35:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:26:55.665 16:35:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:26:55.665 16:35:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10 00:26:55.665 16:35:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:26:55.665 16:35:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:26:55.666 16:35:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_notification_count 00:26:55.666 16:35:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:26:55.666 16:35:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:26:55.666 16:35:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:55.666 16:35:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:55.666 16:35:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:55.666 16:35:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:26:55.666 16:35:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:26:55.666 16:35:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( notification_count == expected_count )) 00:26:55.666 16:35:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0 00:26:55.666 16:35:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:26:55.666 16:35:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:55.666 16:35:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:55.666 16:35:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:55.666 16:35:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:26:55.666 16:35:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:26:55.666 16:35:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10 00:26:55.666 16:35:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:26:55.666 16:35:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:26:55.666 16:35:22 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_bdev_list 00:26:55.666 16:35:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:55.666 16:35:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:55.666 16:35:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:55.666 16:35:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:55.666 16:35:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:55.666 16:35:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:55.666 16:35:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:55.666 16:35:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:26:55.666 16:35:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0 00:26:55.666 16:35:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:26:55.666 16:35:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:26:55.666 16:35:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:26:55.666 16:35:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:26:55.666 16:35:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10 00:26:55.666 16:35:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:26:55.666 16:35:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:26:55.666 16:35:22 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@916 -- # get_notification_count 00:26:55.666 16:35:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:26:55.666 16:35:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:26:55.666 16:35:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:55.666 16:35:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:55.666 16:35:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:55.666 16:35:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:26:55.666 16:35:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:26:55.666 16:35:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( notification_count == expected_count )) 00:26:55.666 16:35:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0 00:26:55.666 16:35:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:26:55.666 16:35:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:55.666 16:35:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:55.666 [2024-06-07 16:35:22.486179] tcp.c: 982:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:26:55.666 [2024-06-07 16:35:22.487080] bdev_nvme.c:6960:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:26:55.666 [2024-06-07 16:35:22.487107] bdev_nvme.c:6941:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:55.666 16:35:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:55.666 16:35:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ 
"$(get_subsystem_names)" == "nvme0" ]]' 00:26:55.666 16:35:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:26:55.666 16:35:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10 00:26:55.666 16:35:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:26:55.666 16:35:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:26:55.666 16:35:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_subsystem_names 00:26:55.666 16:35:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:55.666 16:35:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:55.666 16:35:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:55.666 16:35:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:55.666 16:35:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:55.666 16:35:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:55.666 16:35:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:55.927 16:35:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:55.927 16:35:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0 00:26:55.927 16:35:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:26:55.927 16:35:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:26:55.927 16:35:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10 00:26:55.927 16:35:22 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:26:55.927 16:35:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:26:55.927 16:35:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_bdev_list 00:26:55.927 16:35:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:55.927 16:35:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:55.927 16:35:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:55.927 16:35:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:55.927 16:35:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:55.927 16:35:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:55.927 16:35:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:55.927 16:35:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:26:55.927 16:35:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0 00:26:55.927 16:35:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:26:55.927 16:35:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:26:55.927 16:35:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10 00:26:55.927 16:35:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:26:55.927 16:35:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 
00:26:55.927 16:35:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_subsystem_paths nvme0 00:26:55.927 16:35:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:26:55.927 16:35:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:26:55.927 16:35:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:26:55.927 16:35:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:55.927 16:35:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:26:55.927 16:35:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:55.927 [2024-06-07 16:35:22.616874] bdev_nvme.c:6902:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:26:55.927 16:35:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:55.927 16:35:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:26:55.927 16:35:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@919 -- # sleep 1 00:26:56.219 [2024-06-07 16:35:22.925277] bdev_nvme.c:6797:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:26:56.219 [2024-06-07 16:35:22.925295] bdev_nvme.c:6756:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:26:56.219 [2024-06-07 16:35:22.925301] bdev_nvme.c:6756:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:26:57.162 16:35:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:26:57.162 16:35:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' 
'$NVMF_SECOND_PORT"' ']]' 00:26:57.162 16:35:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_subsystem_paths nvme0 00:26:57.162 16:35:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:26:57.162 16:35:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:26:57.162 16:35:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:26:57.162 16:35:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:57.162 16:35:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:57.162 16:35:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:26:57.162 16:35:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:57.162 16:35:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:26:57.162 16:35:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0 00:26:57.162 16:35:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:26:57.162 16:35:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:26:57.162 16:35:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:26:57.162 16:35:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:26:57.162 16:35:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10 00:26:57.162 16:35:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:26:57.162 16:35:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval get_notification_count '&&' '((notification_count' == 
'expected_count))' 00:26:57.162 16:35:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_notification_count 00:26:57.162 16:35:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:26:57.162 16:35:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:26:57.162 16:35:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:57.162 16:35:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:57.162 16:35:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:57.162 16:35:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:26:57.162 16:35:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:26:57.162 16:35:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( notification_count == expected_count )) 00:26:57.162 16:35:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0 00:26:57.162 16:35:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:57.162 16:35:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:57.162 16:35:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:57.162 [2024-06-07 16:35:23.765751] bdev_nvme.c:6960:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:26:57.162 [2024-06-07 16:35:23.765773] bdev_nvme.c:6941:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:57.162 [2024-06-07 16:35:23.769083] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:57.162 [2024-06-07 16:35:23.769101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.162 [2024-06-07 16:35:23.769110] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:57.162 [2024-06-07 16:35:23.769118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.162 [2024-06-07 16:35:23.769126] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:57.162 [2024-06-07 16:35:23.769133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.162 [2024-06-07 16:35:23.769141] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:57.162 [2024-06-07 16:35:23.769148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.162 [2024-06-07 16:35:23.769159] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1527740 is same with the state(5) to be set 00:26:57.162 16:35:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:57.162 16:35:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:26:57.162 16:35:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:26:57.162 16:35:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10 00:26:57.162 16:35:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:26:57.162 16:35:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval '[[' '"$(get_subsystem_names)"' 
== '"nvme0"' ']]' 00:26:57.162 16:35:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_subsystem_names 00:26:57.162 16:35:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:57.162 16:35:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:57.162 16:35:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:57.162 16:35:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:57.162 16:35:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:57.162 16:35:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:57.162 [2024-06-07 16:35:23.779097] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1527740 (9): Bad file descriptor 00:26:57.162 [2024-06-07 16:35:23.789138] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:57.162 [2024-06-07 16:35:23.789630] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.162 [2024-06-07 16:35:23.789667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1527740 with addr=10.0.0.2, port=4420 00:26:57.162 [2024-06-07 16:35:23.789679] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1527740 is same with the state(5) to be set 00:26:57.162 [2024-06-07 16:35:23.789699] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1527740 (9): Bad file descriptor 00:26:57.162 [2024-06-07 16:35:23.789711] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:26:57.162 [2024-06-07 16:35:23.789718] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:26:57.162 [2024-06-07 
16:35:23.789726] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:26:57.162 [2024-06-07 16:35:23.789741] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:57.162 16:35:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:57.162 [2024-06-07 16:35:23.799194] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:57.162 [2024-06-07 16:35:23.799667] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.162 [2024-06-07 16:35:23.799704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1527740 with addr=10.0.0.2, port=4420 00:26:57.162 [2024-06-07 16:35:23.799714] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1527740 is same with the state(5) to be set 00:26:57.162 [2024-06-07 16:35:23.799733] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1527740 (9): Bad file descriptor 00:26:57.162 [2024-06-07 16:35:23.799745] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:26:57.162 [2024-06-07 16:35:23.799752] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:26:57.162 [2024-06-07 16:35:23.799759] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:26:57.162 [2024-06-07 16:35:23.799779] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
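The repeating block of records here (resetting controller, `connect() failed, errno = 111`, `Ctrlr is in error state`, `Resetting controller failed.`) is the host driver retrying the removed 4420 listener. A hedged shell sketch of that bounded-retry shape — `try_connect` is a hypothetical stand-in for the real TCP connect, not an SPDK function:

```shell
#!/usr/bin/env bash
# Sketch of the reconnect behaviour seen in the trace: each reset
# attempt fails with ECONNREFUSED (errno 111) while the listener is
# gone, and the driver schedules another attempt until it gives up.
# try_connect is an assumed stand-in for the real connect path.
reconnect_ctrlr() {
    local attempts=$1
    while ((attempts--)); do
        if try_connect; then
            echo "controller reattached"
            return 0
        fi
        echo "connect() failed, errno = 111; retrying" >&2
    done
    echo "controller left in failed state" >&2
    return 1
}
```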
00:26:57.162 [2024-06-07 16:35:23.809249] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:57.162 [2024-06-07 16:35:23.809711] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.162 [2024-06-07 16:35:23.809748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1527740 with addr=10.0.0.2, port=4420 00:26:57.162 [2024-06-07 16:35:23.809759] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1527740 is same with the state(5) to be set 00:26:57.162 [2024-06-07 16:35:23.809777] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1527740 (9): Bad file descriptor 00:26:57.162 [2024-06-07 16:35:23.809788] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:26:57.162 [2024-06-07 16:35:23.809795] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:26:57.162 [2024-06-07 16:35:23.809803] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:26:57.162 [2024-06-07 16:35:23.809818] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
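The `get_subsystem_paths` and `get_bdev_list` expansions in this trace all end with `sort` (or `sort -n`) piped into `xargs`. That pair normalizes the multi-line, unordered `jq` output into a single deterministic space-separated line, so it can be compared against a fixed string such as `"4420 4421"`. A minimal standalone sketch of just that normalization step:

```shell
#!/usr/bin/env bash
# Normalise a newline-separated, unordered list (as produced by
# rpc_cmd | jq -r) into one sorted, space-separated line, matching the
# `sort -n | xargs` tail of the get_subsystem_paths pipeline above.
normalize_list() {
    sort -n | xargs
}
```

With this, `printf '4421\n4420\n' | normalize_list` yields `4420 4421`, which is why the trace's comparisons are order-independent.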
00:26:57.162 [2024-06-07 16:35:23.819306] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:57.162 [2024-06-07 16:35:23.819753] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.162 [2024-06-07 16:35:23.819767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1527740 with addr=10.0.0.2, port=4420 00:26:57.162 [2024-06-07 16:35:23.819775] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1527740 is same with the state(5) to be set 00:26:57.162 [2024-06-07 16:35:23.819786] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1527740 (9): Bad file descriptor 00:26:57.162 [2024-06-07 16:35:23.819796] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:26:57.163 [2024-06-07 16:35:23.819803] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:26:57.163 [2024-06-07 16:35:23.819810] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:26:57.163 [2024-06-07 16:35:23.819821] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:57.163 16:35:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:57.163 16:35:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0 00:26:57.163 16:35:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:26:57.163 16:35:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:26:57.163 16:35:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10 00:26:57.163 16:35:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:26:57.163 16:35:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:26:57.163 16:35:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_bdev_list 00:26:57.163 [2024-06-07 16:35:23.829362] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:57.163 [2024-06-07 16:35:23.829515] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.163 [2024-06-07 16:35:23.829528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1527740 with addr=10.0.0.2, port=4420 00:26:57.163 [2024-06-07 16:35:23.829535] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1527740 is same with the state(5) to be set 00:26:57.163 [2024-06-07 16:35:23.829546] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1527740 (9): Bad file descriptor 00:26:57.163 [2024-06-07 16:35:23.829561] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:26:57.163 [2024-06-07 16:35:23.829567] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] 
controller reinitialization failed 00:26:57.163 [2024-06-07 16:35:23.829574] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:26:57.163 [2024-06-07 16:35:23.829585] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:57.163 16:35:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:57.163 16:35:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:57.163 16:35:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:57.163 16:35:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:57.163 16:35:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:57.163 16:35:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:57.163 [2024-06-07 16:35:23.839416] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:57.163 [2024-06-07 16:35:23.839817] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.163 [2024-06-07 16:35:23.839830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1527740 with addr=10.0.0.2, port=4420 00:26:57.163 [2024-06-07 16:35:23.839837] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1527740 is same with the state(5) to be set 00:26:57.163 [2024-06-07 16:35:23.839849] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1527740 (9): Bad file descriptor 00:26:57.163 [2024-06-07 16:35:23.839859] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:26:57.163 [2024-06-07 16:35:23.839866] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:26:57.163 [2024-06-07 16:35:23.839873] 
nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:26:57.163 [2024-06-07 16:35:23.839884] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:57.163 [2024-06-07 16:35:23.849471] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:57.163 [2024-06-07 16:35:23.849691] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.163 [2024-06-07 16:35:23.849703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1527740 with addr=10.0.0.2, port=4420 00:26:57.163 [2024-06-07 16:35:23.849710] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1527740 is same with the state(5) to be set 00:26:57.163 [2024-06-07 16:35:23.849721] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1527740 (9): Bad file descriptor 00:26:57.163 [2024-06-07 16:35:23.849731] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:26:57.163 [2024-06-07 16:35:23.849737] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:26:57.163 [2024-06-07 16:35:23.849744] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:26:57.163 [2024-06-07 16:35:23.849754] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
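The comparisons printed throughout this trace, e.g. `[[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]]`, look odd but are ordinary bash: the right-hand side of `[[ == ]]` is a glob pattern unless quoted, and xtrace renders a literal match by backslash-escaping every character. A small illustration of the distinction (the `ports` variable is illustrative, not from the test scripts):

```shell
#!/usr/bin/env bash
# In [[ lhs == rhs ]] the unquoted right-hand side is a glob pattern;
# quoting (or, in xtrace output, per-character escaping such as
# \4\4\2\0\ \4\4\2\1) forces a literal string comparison instead.
ports="4420 4421"
[[ $ports == "4420 4421" ]] && echo literal-match
[[ $ports == 44* ]] && echo pattern-match
```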
00:26:57.163 [2024-06-07 16:35:23.853034] bdev_nvme.c:6765:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:26:57.163 [2024-06-07 16:35:23.853053] bdev_nvme.c:6756:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:26:57.163 16:35:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:57.163 16:35:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:26:57.163 16:35:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0 00:26:57.163 16:35:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:26:57.163 16:35:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:26:57.163 16:35:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10 00:26:57.163 16:35:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:26:57.163 16:35:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:26:57.163 16:35:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_subsystem_paths nvme0 00:26:57.163 16:35:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:26:57.163 16:35:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:26:57.163 16:35:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:26:57.163 16:35:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:26:57.163 16:35:23 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@560 -- # xtrace_disable 00:26:57.163 16:35:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:57.163 16:35:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:57.163 16:35:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # [[ 4421 == \4\4\2\1 ]] 00:26:57.163 16:35:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0 00:26:57.163 16:35:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:26:57.163 16:35:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:26:57.163 16:35:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:26:57.163 16:35:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:26:57.163 16:35:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10 00:26:57.163 16:35:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:26:57.163 16:35:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:26:57.163 16:35:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_notification_count 00:26:57.163 16:35:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:26:57.163 16:35:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:57.163 16:35:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:57.163 16:35:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:26:57.163 16:35:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:57.163 16:35:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:26:57.163 16:35:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:26:57.163 16:35:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( notification_count == expected_count )) 00:26:57.163 16:35:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0 00:26:57.163 16:35:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:26:57.163 16:35:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:57.163 16:35:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:57.163 16:35:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:57.163 16:35:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:26:57.163 16:35:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:26:57.163 16:35:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10 00:26:57.163 16:35:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:26:57.163 16:35:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:26:57.163 16:35:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_subsystem_names 00:26:57.163 16:35:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:57.163 16:35:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:57.163 16:35:24 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@560 -- # xtrace_disable 00:26:57.163 16:35:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:57.163 16:35:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:57.163 16:35:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:57.163 16:35:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:57.424 16:35:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # [[ '' == '' ]] 00:26:57.424 16:35:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0 00:26:57.424 16:35:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:26:57.424 16:35:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:26:57.424 16:35:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10 00:26:57.424 16:35:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:26:57.424 16:35:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:26:57.424 16:35:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_bdev_list 00:26:57.424 16:35:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:57.424 16:35:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:57.424 16:35:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:57.424 16:35:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:57.424 16:35:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:57.424 16:35:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:57.424 16:35:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # 
[[ 0 == 0 ]] 00:26:57.424 16:35:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # [[ '' == '' ]] 00:26:57.424 16:35:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0 00:26:57.424 16:35:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:26:57.424 16:35:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:26:57.424 16:35:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:26:57.424 16:35:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:26:57.424 16:35:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10 00:26:57.424 16:35:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:26:57.424 16:35:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:26:57.424 16:35:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_notification_count 00:26:57.424 16:35:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:26:57.424 16:35:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:26:57.424 16:35:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:57.424 16:35:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:57.424 16:35:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:57.424 16:35:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:26:57.424 16:35:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:26:57.424 16:35:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( notification_count == expected_count )) 00:26:57.424 16:35:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0 00:26:57.424 16:35:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:57.424 16:35:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:57.424 16:35:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:58.365 [2024-06-07 16:35:25.174392] bdev_nvme.c:6978:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:26:58.365 [2024-06-07 16:35:25.174411] bdev_nvme.c:7058:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:26:58.365 [2024-06-07 16:35:25.174424] bdev_nvme.c:6941:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:58.626 [2024-06-07 16:35:25.264721] bdev_nvme.c:6907:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:26:58.626 [2024-06-07 16:35:25.327627] bdev_nvme.c:6797:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:26:58.626 [2024-06-07 16:35:25.327658] bdev_nvme.c:6756:discovery_remove_controllers: *INFO*: 
Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:26:58.626 16:35:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:58.626 16:35:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:58.626 16:35:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@649 -- # local es=0 00:26:58.626 16:35:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:58.626 16:35:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:26:58.626 16:35:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:26:58.626 16:35:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:26:58.626 16:35:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:26:58.626 16:35:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@652 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:58.626 16:35:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:58.626 16:35:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:58.626 request: 00:26:58.626 { 00:26:58.626 "name": "nvme", 00:26:58.626 "trtype": "tcp", 00:26:58.626 "traddr": "10.0.0.2", 00:26:58.626 "hostnqn": "nqn.2021-12.io.spdk:test", 00:26:58.626 "adrfam": "ipv4", 00:26:58.626 "trsvcid": "8009", 00:26:58.626 "wait_for_attach": true, 00:26:58.626 "method": "bdev_nvme_start_discovery", 00:26:58.626 "req_id": 1 00:26:58.626 } 00:26:58.626 Got JSON-RPC error 
response 00:26:58.626 response: 00:26:58.626 { 00:26:58.626 "code": -17, 00:26:58.626 "message": "File exists" 00:26:58.626 } 00:26:58.626 16:35:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:26:58.626 16:35:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@652 -- # es=1 00:26:58.626 16:35:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:26:58.626 16:35:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:26:58.626 16:35:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:26:58.626 16:35:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:26:58.626 16:35:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:26:58.626 16:35:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:26:58.626 16:35:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:58.626 16:35:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:26:58.626 16:35:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:58.626 16:35:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:26:58.626 16:35:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:58.626 16:35:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:26:58.626 16:35:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:26:58.626 16:35:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:58.626 16:35:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:58.626 16:35:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:58.626 16:35:25 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@55 -- # xargs 00:26:58.626 16:35:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:58.626 16:35:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:58.626 16:35:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:58.626 16:35:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:26:58.626 16:35:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:58.626 16:35:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@649 -- # local es=0 00:26:58.626 16:35:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:58.626 16:35:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:26:58.626 16:35:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:26:58.626 16:35:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:26:58.626 16:35:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:26:58.626 16:35:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@652 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:58.626 16:35:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:58.626 16:35:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:58.626 request: 00:26:58.626 { 00:26:58.626 "name": "nvme_second", 00:26:58.626 "trtype": "tcp", 
00:26:58.626 "traddr": "10.0.0.2", 00:26:58.626 "hostnqn": "nqn.2021-12.io.spdk:test", 00:26:58.626 "adrfam": "ipv4", 00:26:58.626 "trsvcid": "8009", 00:26:58.626 "wait_for_attach": true, 00:26:58.626 "method": "bdev_nvme_start_discovery", 00:26:58.626 "req_id": 1 00:26:58.626 } 00:26:58.626 Got JSON-RPC error response 00:26:58.626 response: 00:26:58.626 { 00:26:58.626 "code": -17, 00:26:58.626 "message": "File exists" 00:26:58.626 } 00:26:58.626 16:35:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:26:58.626 16:35:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@652 -- # es=1 00:26:58.626 16:35:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:26:58.626 16:35:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:26:58.626 16:35:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:26:58.626 16:35:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:26:58.887 16:35:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:26:58.887 16:35:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:26:58.887 16:35:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:58.887 16:35:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:26:58.887 16:35:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:58.887 16:35:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:26:58.887 16:35:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:58.887 16:35:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:26:58.887 16:35:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:26:58.887 16:35:25 nvmf_tcp.nvmf_host_discovery 
-- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:58.887 16:35:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:58.887 16:35:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:58.887 16:35:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:58.887 16:35:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:58.887 16:35:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:58.887 16:35:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:58.887 16:35:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:26:58.887 16:35:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:26:58.887 16:35:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@649 -- # local es=0 00:26:58.887 16:35:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:26:58.887 16:35:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:26:58.887 16:35:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:26:58.887 16:35:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:26:58.887 16:35:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:26:58.887 16:35:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@652 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q 
nqn.2021-12.io.spdk:test -T 3000 00:26:58.887 16:35:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:58.887 16:35:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:59.829 [2024-06-07 16:35:26.599152] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.829 [2024-06-07 16:35:26.599180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15237b0 with addr=10.0.0.2, port=8010 00:26:59.829 [2024-06-07 16:35:26.599192] nvme_tcp.c:2702:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:26:59.829 [2024-06-07 16:35:26.599199] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:26:59.829 [2024-06-07 16:35:26.599206] bdev_nvme.c:7040:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:27:00.770 [2024-06-07 16:35:27.601572] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.770 [2024-06-07 16:35:27.601594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15237b0 with addr=10.0.0.2, port=8010 00:27:00.770 [2024-06-07 16:35:27.601605] nvme_tcp.c:2702:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:27:00.770 [2024-06-07 16:35:27.601612] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:27:00.770 [2024-06-07 16:35:27.601618] bdev_nvme.c:7040:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:27:02.156 [2024-06-07 16:35:28.603506] bdev_nvme.c:7021:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:27:02.156 request: 00:27:02.156 { 00:27:02.156 "name": "nvme_second", 00:27:02.156 "trtype": "tcp", 00:27:02.156 "traddr": "10.0.0.2", 00:27:02.156 "hostnqn": "nqn.2021-12.io.spdk:test", 00:27:02.156 "adrfam": "ipv4", 00:27:02.156 "trsvcid": "8010", 00:27:02.156 "attach_timeout_ms": 3000, 
00:27:02.156 "method": "bdev_nvme_start_discovery", 00:27:02.156 "req_id": 1 00:27:02.156 } 00:27:02.156 Got JSON-RPC error response 00:27:02.156 response: 00:27:02.156 { 00:27:02.156 "code": -110, 00:27:02.156 "message": "Connection timed out" 00:27:02.156 } 00:27:02.156 16:35:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:27:02.156 16:35:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@652 -- # es=1 00:27:02.156 16:35:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:27:02.156 16:35:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:27:02.156 16:35:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:27:02.156 16:35:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:27:02.156 16:35:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:27:02.156 16:35:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:27:02.156 16:35:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:02.156 16:35:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:27:02.156 16:35:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:02.156 16:35:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:27:02.156 16:35:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:02.156 16:35:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:27:02.156 16:35:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:27:02.156 16:35:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 3234633 00:27:02.156 16:35:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:27:02.156 16:35:28 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:02.156 16:35:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@117 -- # sync 00:27:02.156 16:35:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:02.156 16:35:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@120 -- # set +e 00:27:02.156 16:35:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:02.156 16:35:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:02.156 rmmod nvme_tcp 00:27:02.156 rmmod nvme_fabrics 00:27:02.156 rmmod nvme_keyring 00:27:02.156 16:35:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:02.156 16:35:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@124 -- # set -e 00:27:02.156 16:35:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@125 -- # return 0 00:27:02.156 16:35:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@489 -- # '[' -n 3234484 ']' 00:27:02.156 16:35:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@490 -- # killprocess 3234484 00:27:02.156 16:35:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@949 -- # '[' -z 3234484 ']' 00:27:02.156 16:35:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # kill -0 3234484 00:27:02.156 16:35:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # uname 00:27:02.156 16:35:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:27:02.156 16:35:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3234484 00:27:02.156 16:35:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:27:02.156 16:35:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:27:02.156 16:35:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3234484' 
00:27:02.156 killing process with pid 3234484 00:27:02.156 16:35:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@968 -- # kill 3234484 00:27:02.156 16:35:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@973 -- # wait 3234484 00:27:02.156 16:35:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:02.156 16:35:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:02.156 16:35:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:02.156 16:35:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:02.156 16:35:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:02.156 16:35:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:02.156 16:35:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:02.156 16:35:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:04.700 16:35:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:04.700 00:27:04.700 real 0m19.248s 00:27:04.700 user 0m22.629s 00:27:04.700 sys 0m6.632s 00:27:04.700 16:35:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1125 -- # xtrace_disable 00:27:04.700 16:35:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:04.700 ************************************ 00:27:04.700 END TEST nvmf_host_discovery 00:27:04.700 ************************************ 00:27:04.700 16:35:31 nvmf_tcp -- nvmf/nvmf.sh@103 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:27:04.700 16:35:31 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:27:04.700 16:35:31 nvmf_tcp -- common/autotest_common.sh@1106 -- # 
xtrace_disable 00:27:04.700 16:35:31 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:04.700 ************************************ 00:27:04.700 START TEST nvmf_host_multipath_status 00:27:04.700 ************************************ 00:27:04.700 16:35:31 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:27:04.700 * Looking for test storage... 00:27:04.700 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:04.700 16:35:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:04.700 16:35:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:27:04.700 16:35:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:04.700 16:35:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:04.700 16:35:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:04.700 16:35:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:04.700 16:35:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:04.700 16:35:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:04.700 16:35:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:04.700 16:35:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:04.700 16:35:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:04.700 16:35:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:04.700 16:35:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:27:04.700 16:35:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:27:04.700 16:35:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:04.700 16:35:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:04.700 16:35:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:04.700 16:35:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:04.700 16:35:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:04.700 16:35:31 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:04.700 16:35:31 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:04.700 16:35:31 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:04.700 16:35:31 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:04.700 16:35:31 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:04.700 16:35:31 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:04.700 16:35:31 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:27:04.700 16:35:31 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:04.700 16:35:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:27:04.700 16:35:31 
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:04.700 16:35:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:04.700 16:35:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:04.700 16:35:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:04.700 16:35:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:04.700 16:35:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:04.700 16:35:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:04.700 16:35:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:04.700 16:35:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:27:04.700 16:35:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:27:04.700 16:35:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:27:04.700 16:35:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:27:04.700 16:35:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:27:04.700 16:35:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:27:04.700 16:35:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:27:04.700 16:35:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:04.700 16:35:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:04.700 
16:35:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:04.701 16:35:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:04.701 16:35:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:04.701 16:35:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:04.701 16:35:31 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:04.701 16:35:31 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:04.701 16:35:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:04.701 16:35:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:04.701 16:35:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@285 -- # xtrace_disable 00:27:04.701 16:35:31 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:27:11.283 16:35:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:11.283 16:35:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # pci_devs=() 00:27:11.283 16:35:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:11.283 16:35:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:11.283 16:35:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:11.283 16:35:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:11.283 16:35:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:11.283 16:35:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # net_devs=() 00:27:11.283 16:35:37 nvmf_tcp.nvmf_host_multipath_status -- 
nvmf/common.sh@295 -- # local -ga net_devs 00:27:11.284 16:35:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # e810=() 00:27:11.284 16:35:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # local -ga e810 00:27:11.284 16:35:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # x722=() 00:27:11.284 16:35:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # local -ga x722 00:27:11.284 16:35:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # mlx=() 00:27:11.284 16:35:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # local -ga mlx 00:27:11.284 16:35:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:11.284 16:35:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:11.284 16:35:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:11.284 16:35:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:11.284 16:35:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:11.284 16:35:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:11.284 16:35:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:11.284 16:35:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:11.284 16:35:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:11.284 16:35:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:11.284 16:35:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@318 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:11.284 16:35:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:11.284 16:35:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:11.284 16:35:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:11.284 16:35:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:11.284 16:35:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:11.284 16:35:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:11.284 16:35:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:11.284 16:35:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:27:11.284 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:27:11.284 16:35:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:11.284 16:35:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:11.284 16:35:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:11.284 16:35:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:11.284 16:35:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:11.284 16:35:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:11.284 16:35:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:27:11.284 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:27:11.284 16:35:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:11.284 16:35:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == 
unbound ]] 00:27:11.284 16:35:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:11.284 16:35:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:11.284 16:35:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:11.284 16:35:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:11.284 16:35:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:11.284 16:35:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:11.284 16:35:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:11.284 16:35:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:11.284 16:35:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:11.284 16:35:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:11.284 16:35:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:11.284 16:35:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:11.284 16:35:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:11.284 16:35:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:27:11.284 Found net devices under 0000:4b:00.0: cvl_0_0 00:27:11.284 16:35:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:11.284 16:35:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:11.284 16:35:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:11.284 16:35:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:11.284 16:35:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:11.284 16:35:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:11.284 16:35:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:11.284 16:35:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:11.284 16:35:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:27:11.284 Found net devices under 0000:4b:00.1: cvl_0_1 00:27:11.284 16:35:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:11.284 16:35:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:11.284 16:35:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # is_hw=yes 00:27:11.284 16:35:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:11.284 16:35:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:11.284 16:35:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:11.284 16:35:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:11.284 16:35:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:11.284 16:35:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:11.284 16:35:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:11.284 16:35:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:11.284 16:35:38 
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:11.284 16:35:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:11.284 16:35:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:11.284 16:35:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:11.284 16:35:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:11.284 16:35:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:11.284 16:35:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:11.284 16:35:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:11.544 16:35:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:11.544 16:35:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:11.544 16:35:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:11.544 16:35:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:11.544 16:35:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:11.544 16:35:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:11.544 16:35:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:11.544 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:27:11.544 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.515 ms
00:27:11.544
00:27:11.544 --- 10.0.0.2 ping statistics ---
00:27:11.544 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:27:11.544 rtt min/avg/max/mdev = 0.515/0.515/0.515/0.000 ms
00:27:11.544 16:35:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:27:11.544 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:27:11.544 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.247 ms
00:27:11.544
00:27:11.544 --- 10.0.0.1 ping statistics ---
00:27:11.544 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:27:11.544 rtt min/avg/max/mdev = 0.247/0.247/0.247/0.000 ms
00:27:11.544 16:35:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:27:11.544 16:35:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # return 0
00:27:11.544 16:35:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:27:11.544 16:35:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:27:11.544 16:35:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:27:11.544 16:35:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:27:11.544 16:35:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:27:11.544 16:35:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:27:11.544 16:35:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:27:11.544 16:35:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3
00:27:11.544 16:35:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:27:11.544 16:35:38 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@723 -- # xtrace_disable
00:27:11.544 16:35:38 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x
00:27:11.544 16:35:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=3240719
00:27:11.544 16:35:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 3240719
00:27:11.544 16:35:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3
00:27:11.544 16:35:38 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@830 -- # '[' -z 3240719 ']'
00:27:11.544 16:35:38 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock
00:27:11.544 16:35:38 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # local max_retries=100
00:27:11.544 16:35:38 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:27:11.544 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:27:11.544 16:35:38 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # xtrace_disable
00:27:11.544 16:35:38 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x
00:27:11.804 [2024-06-07 16:35:38.420230] Starting SPDK v24.09-pre git sha1 5a57befde / DPDK 24.03.0 initialization...
00:27:11.804 [2024-06-07 16:35:38.420298] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:27:11.804 EAL: No free 2048 kB hugepages reported on node 1
00:27:11.804 [2024-06-07 16:35:38.492002] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2
00:27:11.804 [2024-06-07 16:35:38.567949] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:27:11.804 [2024-06-07 16:35:38.567985] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:27:11.804 [2024-06-07 16:35:38.567993] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:27:11.804 [2024-06-07 16:35:38.568000] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running.
00:27:11.804 [2024-06-07 16:35:38.568005] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:27:11.804 [2024-06-07 16:35:38.568143] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1
00:27:11.804 [2024-06-07 16:35:38.568144] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0
00:27:12.374 16:35:39 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@859 -- # (( i == 0 ))
00:27:12.374 16:35:39 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@863 -- # return 0
00:27:12.374 16:35:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:27:12.374 16:35:39 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@729 -- # xtrace_disable
00:27:12.374 16:35:39 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x
00:27:12.635 16:35:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:27:12.635 16:35:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=3240719
00:27:12.635 16:35:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
00:27:12.635 [2024-06-07 16:35:39.372581] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:27:12.635 16:35:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
00:27:12.894 Malloc0
00:27:12.894 16:35:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
00:27:12.894 16:35:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:27:13.155 16:35:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:27:13.155 [2024-06-07 16:35:39.990064] tcp.c: 982:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:27:13.155 16:35:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:27:13.414 [2024-06-07 16:35:40.134424] tcp.c: 982:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 ***
00:27:13.414 16:35:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=3241095
00:27:13.414 16:35:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT
00:27:13.414 16:35:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90
00:27:13.414 16:35:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 3241095 /var/tmp/bdevperf.sock
00:27:13.414 16:35:40 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@830 -- # '[' -z 3241095 ']'
00:27:13.414 16:35:40 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:27:13.414 16:35:40 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # local max_retries=100
00:27:13.414 16:35:40 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:27:13.414 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:27:13.414 16:35:40 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # xtrace_disable
00:27:13.414 16:35:40 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x
00:27:14.353 16:35:40 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@859 -- # (( i == 0 ))
00:27:14.353 16:35:40 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@863 -- # return 0
00:27:14.353 16:35:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
00:27:14.353 16:35:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10
00:27:14.613 Nvme0n1
00:27:14.613 16:35:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10
00:27:15.184 Nvme0n1
00:27:15.184 16:35:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2
00:27:15.184 16:35:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests
00:27:17.089 16:35:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized
00:27:17.089 16:35:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized
00:27:17.350 16:35:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized
00:27:17.610 16:35:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1
00:27:18.548 16:35:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true
00:27:18.549 16:35:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true
00:27:18.549 16:35:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:27:18.549 16:35:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:27:18.549 16:35:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:27:18.549 16:35:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false
00:27:18.549 16:35:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:27:18.549 16:35:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:27:18.808 16:35:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:27:18.808 16:35:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:27:18.808 16:35:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:27:18.808 16:35:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:27:19.104 16:35:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:27:19.104 16:35:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:27:19.104 16:35:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:27:19.104 16:35:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:27:19.104 16:35:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:27:19.104 16:35:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:27:19.104 16:35:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:27:19.104 16:35:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:27:19.363 16:35:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:27:19.363 16:35:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true
00:27:19.363 16:35:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:27:19.363 16:35:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:27:19.623 16:35:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:27:19.623 16:35:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized
00:27:19.623 16:35:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
00:27:19.624 16:35:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized
00:27:19.884 16:35:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1
00:27:20.824 16:35:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true
00:27:20.824 16:35:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false
00:27:20.824 16:35:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:27:20.824 16:35:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:27:21.083 16:35:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:27:21.083 16:35:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true
00:27:21.083 16:35:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:27:21.084 16:35:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:27:21.084 16:35:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:27:21.084 16:35:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:27:21.084 16:35:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:27:21.084 16:35:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:27:21.344 16:35:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:27:21.344 16:35:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:27:21.344 16:35:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:27:21.344 16:35:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:27:21.604 16:35:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:27:21.604 16:35:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:27:21.604 16:35:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:27:21.604 16:35:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:27:21.604 16:35:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:27:21.604 16:35:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true
00:27:21.604 16:35:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:27:21.604 16:35:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:27:21.864 16:35:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:27:21.864 16:35:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized
00:27:21.864 16:35:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
00:27:22.124 16:35:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized
00:27:22.124 16:35:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1
00:27:23.064 16:35:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true
00:27:23.064 16:35:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true
00:27:23.064 16:35:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:27:23.064 16:35:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:27:23.324 16:35:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:27:23.324 16:35:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false
00:27:23.324 16:35:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:27:23.324 16:35:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:27:23.582 16:35:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:27:23.582 16:35:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:27:23.582 16:35:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:27:23.583 16:35:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:27:23.583 16:35:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:27:23.583 16:35:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:27:23.583 16:35:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:27:23.583 16:35:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:27:23.848 16:35:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:27:23.848 16:35:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:27:23.848 16:35:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:27:23.848 16:35:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:27:24.162 16:35:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:27:24.162 16:35:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true
00:27:24.162 16:35:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:27:24.162 16:35:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:27:24.162 16:35:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:27:24.162 16:35:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible
00:27:24.162 16:35:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
00:27:24.420 16:35:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible
00:27:24.420 16:35:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1
00:27:25.800 16:35:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false
00:27:25.800 16:35:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true
00:27:25.800 16:35:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:27:25.800 16:35:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:27:25.800 16:35:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:27:25.800 16:35:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false
00:27:25.800 16:35:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:27:25.800 16:35:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:27:25.800 16:35:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:27:25.800 16:35:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:27:25.800 16:35:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:27:25.800 16:35:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:27:26.060 16:35:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:27:26.060 16:35:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:27:26.060 16:35:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:27:26.060 16:35:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:27:26.320 16:35:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:27:26.320 16:35:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:27:26.320 16:35:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:27:26.320 16:35:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:27:26.320 16:35:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:27:26.320 16:35:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false
00:27:26.320 16:35:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:27:26.320 16:35:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:27:26.581 16:35:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:27:26.581 16:35:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible
00:27:26.581 16:35:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible
00:27:26.581 16:35:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible
00:27:26.842 16:35:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1
00:27:27.784 16:35:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false
00:27:27.784 16:35:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false
00:27:27.784 16:35:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:27:27.784 16:35:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:27:28.045 16:35:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:27:28.045 16:35:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false
00:27:28.045 16:35:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:27:28.045 16:35:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:27:28.313 16:35:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:27:28.313 16:35:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:27:28.313 16:35:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:27:28.313 16:35:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:27:28.313 16:35:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:27:28.313 16:35:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:27:28.313 16:35:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:27:28.313 16:35:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:27:28.579 16:35:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:27:28.579 16:35:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false
00:27:28.579 16:35:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:27:28.579 16:35:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:27:28.839 16:35:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:27:28.839 16:35:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false
00:27:28.839 16:35:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:27:28.839 16:35:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:27:28.839 16:35:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:27:28.839 16:35:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized
00:27:28.839 16:35:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible
00:27:29.100 16:35:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized
00:27:29.100 16:35:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1
00:27:30.484 16:35:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true
00:27:30.484 16:35:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false
00:27:30.484 16:35:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:27:30.484 16:35:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:27:30.484 16:35:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:27:30.484 16:35:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true
00:27:30.484 16:35:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:27:30.484 16:35:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:27:30.484 16:35:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:27:30.484 16:35:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:27:30.484 16:35:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:27:30.484 16:35:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:27:30.745 16:35:57 nvmf_tcp.nvmf_host_multipath_status
-- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:30.745 16:35:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:30.745 16:35:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:30.745 16:35:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:31.005 16:35:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:31.005 16:35:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:27:31.005 16:35:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:31.005 16:35:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:31.005 16:35:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:31.005 16:35:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:27:31.005 16:35:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:31.005 16:35:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:31.266 16:35:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:31.266 16:35:57 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:27:31.266 16:35:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:27:31.266 16:35:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:27:31.526 16:35:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:27:31.785 16:35:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:27:32.727 16:35:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:27:32.727 16:35:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:27:32.727 16:35:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:32.727 16:35:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:32.988 16:35:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:32.988 16:35:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:27:32.988 16:35:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:32.988 16:35:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:32.988 16:35:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:32.988 16:35:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:32.988 16:35:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:32.988 16:35:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:33.248 16:35:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:33.248 16:35:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:33.248 16:35:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:33.248 16:35:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:33.508 16:36:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:33.508 16:36:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:27:33.508 16:36:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 
00:27:33.508 16:36:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:33.508 16:36:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:33.508 16:36:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:27:33.508 16:36:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:33.508 16:36:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:33.768 16:36:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:33.768 16:36:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:27:33.768 16:36:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:27:34.029 16:36:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:27:34.029 16:36:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:27:34.970 16:36:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:27:34.970 16:36:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:27:34.970 16:36:01 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:34.970 16:36:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:35.232 16:36:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:35.232 16:36:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:27:35.232 16:36:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:35.232 16:36:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:35.492 16:36:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:35.493 16:36:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:35.493 16:36:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:35.493 16:36:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:35.493 16:36:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:35.493 16:36:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:35.493 16:36:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:35.493 16:36:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:35.753 16:36:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:35.753 16:36:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:27:35.753 16:36:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:35.753 16:36:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:36.013 16:36:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:36.013 16:36:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:27:36.013 16:36:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:36.013 16:36:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:36.013 16:36:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:36.013 16:36:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:27:36.014 16:36:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:27:36.274 16:36:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:27:36.547 16:36:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:27:37.569 16:36:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:27:37.569 16:36:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:27:37.569 16:36:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:37.569 16:36:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:37.569 16:36:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:37.569 16:36:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:27:37.569 16:36:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:37.569 16:36:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:37.830 16:36:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:37.830 16:36:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:37.830 16:36:04 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:37.830 16:36:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:37.830 16:36:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:37.830 16:36:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:37.830 16:36:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:37.830 16:36:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:38.090 16:36:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:38.090 16:36:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:27:38.090 16:36:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:38.090 16:36:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:38.350 16:36:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:38.350 16:36:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:27:38.350 16:36:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:38.350 16:36:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:38.350 16:36:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:38.350 16:36:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:27:38.350 16:36:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:27:38.610 16:36:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:27:38.870 16:36:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:27:39.813 16:36:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:27:39.813 16:36:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:27:39.813 16:36:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:39.813 16:36:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:40.074 16:36:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:40.074 16:36:06 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:27:40.074 16:36:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:40.074 16:36:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:40.074 16:36:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:40.074 16:36:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:40.074 16:36:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:40.074 16:36:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:40.334 16:36:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:40.334 16:36:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:40.334 16:36:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:40.334 16:36:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:40.334 16:36:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:40.594 16:36:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:27:40.594 
16:36:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:40.594 16:36:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:40.594 16:36:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:40.595 16:36:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:27:40.595 16:36:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:40.595 16:36:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:40.855 16:36:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:40.855 16:36:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 3241095 00:27:40.855 16:36:07 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@949 -- # '[' -z 3241095 ']' 00:27:40.855 16:36:07 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # kill -0 3241095 00:27:40.855 16:36:07 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # uname 00:27:40.855 16:36:07 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:27:40.855 16:36:07 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3241095 00:27:40.855 16:36:07 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # process_name=reactor_2 00:27:40.855 16:36:07 nvmf_tcp.nvmf_host_multipath_status -- 
common/autotest_common.sh@959 -- # '[' reactor_2 = sudo ']' 00:27:40.855 16:36:07 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3241095' 00:27:40.855 killing process with pid 3241095 00:27:40.855 16:36:07 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # kill 3241095 00:27:40.855 16:36:07 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # wait 3241095 00:27:40.855 Connection closed with partial response: 00:27:40.855 00:27:40.855 00:27:41.118 16:36:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 3241095 00:27:41.118 16:36:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:27:41.118 [2024-06-07 16:35:40.198550] Starting SPDK v24.09-pre git sha1 5a57befde / DPDK 24.03.0 initialization... 00:27:41.118 [2024-06-07 16:35:40.198606] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3241095 ] 00:27:41.118 EAL: No free 2048 kB hugepages reported on node 1 00:27:41.118 [2024-06-07 16:35:40.248273] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:41.118 [2024-06-07 16:35:40.300630] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 2 00:27:41.118 Running I/O for 90 seconds... 
00:27:41.118 [2024-06-07 16:35:53.391604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:59352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.118 [2024-06-07 16:35:53.391635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:27:41.118 [2024-06-07 16:35:53.391667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:59360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.118 [2024-06-07 16:35:53.391674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:27:41.118 [2024-06-07 16:35:53.391685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:59368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.118 [2024-06-07 16:35:53.391690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:27:41.118 [2024-06-07 16:35:53.391701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:59376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.118 [2024-06-07 16:35:53.391706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:27:41.118 [2024-06-07 16:35:53.391716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:59384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.118 [2024-06-07 16:35:53.391721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:27:41.118 [2024-06-07 16:35:53.391731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:59392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.118 
[2024-06-07 16:35:53.391737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:005d p:0 m:0 dnr:0
00:27:41.118 [2024-06-07 16:35:53.391747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:59400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:41.118 [2024-06-07 16:35:53.391752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:005e p:0 m:0 dnr:0
00:27:41.118 [2024-06-07 16:35:53.391762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:59408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:41.118 [2024-06-07 16:35:53.391767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:27:41.119 [... several hundred further command/completion NOTICE pairs elided: every READ and WRITE on qid:1 (lba 59400-60112 at 16:35:53, lba 124648-125368 at 16:36:05) completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02) ...]
00:27:41.121 [2024-06-07 16:36:05.477548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:124880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:41.121 [2024-06-07 16:36:05.477553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:002e p:0 m:0 dnr:0
00:27:41.121 Received shutdown signal, test time was about 25.593209 seconds
00:27:41.121
00:27:41.121                                                     Latency(us)
00:27:41.121 Device Information                    : runtime(s)       IOPS      MiB/s     Fail/s      TO/s    Average        min        max
00:27:41.121 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:27:41.121 	 Verification LBA range: start 0x0 length 0x4000
00:27:41.121 	 Nvme0n1                             :      25.59   11023.58      43.06       0.00      0.00   11592.86     413.01 3019898.88
00:27:41.121 ===================================================================================================================
00:27:41.121 	 Total                               :            11023.58      43.06       0.00      0.00   11592.86     413.01 3019898.88
00:27:41.121 16:36:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- #
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:41.121 16:36:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:27:41.121 16:36:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:27:41.121 16:36:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:27:41.122 16:36:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:41.122 16:36:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync 00:27:41.122 16:36:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:41.122 16:36:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e 00:27:41.122 16:36:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:41.122 16:36:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:41.122 rmmod nvme_tcp 00:27:41.122 rmmod nvme_fabrics 00:27:41.122 rmmod nvme_keyring 00:27:41.122 16:36:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:41.122 16:36:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e 00:27:41.122 16:36:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0 00:27:41.122 16:36:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 3240719 ']' 00:27:41.122 16:36:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 3240719 00:27:41.122 16:36:07 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@949 -- # '[' -z 3240719 ']' 00:27:41.122 16:36:07 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # kill -0 3240719 00:27:41.122 16:36:07 nvmf_tcp.nvmf_host_multipath_status -- 
common/autotest_common.sh@954 -- # uname 00:27:41.122 16:36:07 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:27:41.122 16:36:07 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3240719 00:27:41.383 16:36:07 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:27:41.383 16:36:07 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:27:41.383 16:36:08 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3240719' 00:27:41.383 killing process with pid 3240719 00:27:41.383 16:36:08 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # kill 3240719 00:27:41.383 16:36:08 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # wait 3240719 00:27:41.383 16:36:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:41.383 16:36:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:41.383 16:36:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:41.383 16:36:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:41.383 16:36:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:41.383 16:36:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:41.383 16:36:08 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:41.383 16:36:08 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:43.975 16:36:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:43.975 00:27:43.975 real 0m39.149s 00:27:43.975 
user 1m41.015s 00:27:43.975 sys 0m10.633s 00:27:43.975 16:36:10 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1125 -- # xtrace_disable 00:27:43.975 16:36:10 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:27:43.975 ************************************ 00:27:43.975 END TEST nvmf_host_multipath_status 00:27:43.975 ************************************ 00:27:43.975 16:36:10 nvmf_tcp -- nvmf/nvmf.sh@104 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:27:43.975 16:36:10 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:27:43.975 16:36:10 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:27:43.975 16:36:10 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:43.975 ************************************ 00:27:43.975 START TEST nvmf_discovery_remove_ifc 00:27:43.975 ************************************ 00:27:43.975 16:36:10 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:27:43.975 * Looking for test storage... 
00:27:43.975 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:43.975 16:36:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:43.975 16:36:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:27:43.975 16:36:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:43.975 16:36:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:43.975 16:36:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:43.975 16:36:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:43.975 16:36:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:43.975 16:36:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:43.975 16:36:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:43.975 16:36:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:43.975 16:36:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:43.975 16:36:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:43.975 16:36:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:27:43.975 16:36:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:27:43.975 16:36:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:43.975 16:36:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:43.975 16:36:10 
nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:43.975 16:36:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:43.975 16:36:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:43.975 16:36:10 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:43.975 16:36:10 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:43.975 16:36:10 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:43.975 16:36:10 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:43.975 16:36:10 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:43.975 16:36:10 nvmf_tcp.nvmf_discovery_remove_ifc -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:43.975 16:36:10 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:27:43.975 16:36:10 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:43.975 16:36:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0 00:27:43.975 16:36:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:43.975 16:36:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:43.975 16:36:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:43.975 16:36:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:43.975 16:36:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:43.975 16:36:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- 
# '[' -n '' ']' 00:27:43.975 16:36:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:43.975 16:36:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:43.975 16:36:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:27:43.975 16:36:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:27:43.975 16:36:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:27:43.975 16:36:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:27:43.975 16:36:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:27:43.975 16:36:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:27:43.975 16:36:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:27:43.975 16:36:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:43.975 16:36:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:43.975 16:36:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:43.975 16:36:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:43.975 16:36:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:43.975 16:36:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:43.975 16:36:10 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:43.975 16:36:10 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:43.975 16:36:10 
nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:43.975 16:36:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:43.975 16:36:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@285 -- # xtrace_disable 00:27:43.975 16:36:10 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:50.562 16:36:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:50.562 16:36:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # pci_devs=() 00:27:50.562 16:36:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:50.562 16:36:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:50.562 16:36:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:50.562 16:36:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:50.562 16:36:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:50.562 16:36:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # net_devs=() 00:27:50.562 16:36:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:50.562 16:36:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # e810=() 00:27:50.562 16:36:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # local -ga e810 00:27:50.562 16:36:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # x722=() 00:27:50.562 16:36:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # local -ga x722 00:27:50.562 16:36:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # mlx=() 00:27:50.562 16:36:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # local -ga mlx 00:27:50.562 16:36:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@301 -- # 
e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:50.562 16:36:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:50.562 16:36:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:50.562 16:36:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:50.562 16:36:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:50.562 16:36:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:50.562 16:36:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:50.562 16:36:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:50.562 16:36:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:50.562 16:36:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:50.562 16:36:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:50.562 16:36:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:50.562 16:36:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:50.562 16:36:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:50.562 16:36:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:50.562 16:36:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:50.562 16:36:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:50.562 16:36:17 nvmf_tcp.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:50.562 16:36:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:27:50.562 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:27:50.562 16:36:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:50.562 16:36:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:50.562 16:36:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:50.562 16:36:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:50.562 16:36:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:50.562 16:36:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:50.562 16:36:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:27:50.562 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:27:50.562 16:36:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:50.562 16:36:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:50.562 16:36:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:50.562 16:36:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:50.562 16:36:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:50.562 16:36:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:50.562 16:36:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:50.562 16:36:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:50.562 16:36:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in 
"${pci_devs[@]}" 00:27:50.562 16:36:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:50.562 16:36:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:50.562 16:36:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:50.562 16:36:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:50.562 16:36:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:50.562 16:36:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:50.562 16:36:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:27:50.562 Found net devices under 0000:4b:00.0: cvl_0_0 00:27:50.562 16:36:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:50.562 16:36:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:50.562 16:36:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:50.562 16:36:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:50.562 16:36:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:50.562 16:36:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:50.562 16:36:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:50.562 16:36:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:50.562 16:36:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:27:50.562 Found net devices under 0000:4b:00.1: cvl_0_1 
00:27:50.562 16:36:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:50.562 16:36:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:50.562 16:36:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # is_hw=yes 00:27:50.562 16:36:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:50.562 16:36:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:50.562 16:36:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:50.562 16:36:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:50.562 16:36:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:50.562 16:36:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:50.562 16:36:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:50.562 16:36:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:50.562 16:36:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:50.562 16:36:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:50.562 16:36:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:50.562 16:36:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:50.562 16:36:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:50.562 16:36:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:50.562 16:36:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:50.562 
16:36:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:50.823 16:36:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:50.823 16:36:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:50.823 16:36:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:50.823 16:36:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:50.823 16:36:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:50.823 16:36:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:50.823 16:36:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:50.823 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:50.823 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.613 ms 00:27:50.823 00:27:50.823 --- 10.0.0.2 ping statistics --- 00:27:50.823 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:50.823 rtt min/avg/max/mdev = 0.613/0.613/0.613/0.000 ms 00:27:50.823 16:36:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:50.823 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:50.823 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.276 ms 00:27:50.823 00:27:50.823 --- 10.0.0.1 ping statistics --- 00:27:50.823 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:50.823 rtt min/avg/max/mdev = 0.276/0.276/0.276/0.000 ms 00:27:50.823 16:36:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:50.823 16:36:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # return 0 00:27:50.823 16:36:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:50.823 16:36:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:50.823 16:36:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:50.823 16:36:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:50.823 16:36:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:50.823 16:36:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:50.823 16:36:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:50.823 16:36:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:27:50.823 16:36:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:50.823 16:36:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@723 -- # xtrace_disable 00:27:50.823 16:36:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:50.823 16:36:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # nvmfpid=3251278 00:27:50.823 16:36:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # waitforlisten 3251278 00:27:50.823 16:36:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:27:50.823 16:36:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@830 -- # '[' -z 3251278 ']' 00:27:50.823 16:36:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:50.823 16:36:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # local max_retries=100 00:27:50.823 16:36:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:50.823 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:50.823 16:36:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # xtrace_disable 00:27:50.823 16:36:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:51.083 [2024-06-07 16:36:17.707201] Starting SPDK v24.09-pre git sha1 5a57befde / DPDK 24.03.0 initialization... 00:27:51.083 [2024-06-07 16:36:17.707248] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:51.083 EAL: No free 2048 kB hugepages reported on node 1 00:27:51.083 [2024-06-07 16:36:17.787849] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:51.083 [2024-06-07 16:36:17.851018] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:51.083 [2024-06-07 16:36:17.851056] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:27:51.083 [2024-06-07 16:36:17.851064] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:51.083 [2024-06-07 16:36:17.851070] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:51.083 [2024-06-07 16:36:17.851075] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:51.083 [2024-06-07 16:36:17.851099] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:27:52.026 16:36:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:27:52.026 16:36:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@863 -- # return 0 00:27:52.026 16:36:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:52.026 16:36:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@729 -- # xtrace_disable 00:27:52.026 16:36:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:52.026 16:36:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:52.026 16:36:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:27:52.026 16:36:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:52.026 16:36:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:52.026 [2024-06-07 16:36:18.586168] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:52.026 [2024-06-07 16:36:18.594448] tcp.c: 982:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:27:52.026 null0 00:27:52.026 [2024-06-07 16:36:18.626370] tcp.c: 982:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:52.026 16:36:18 nvmf_tcp.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:52.026 16:36:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=3251415 00:27:52.026 16:36:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 3251415 /tmp/host.sock 00:27:52.026 16:36:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:27:52.026 16:36:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@830 -- # '[' -z 3251415 ']' 00:27:52.026 16:36:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local rpc_addr=/tmp/host.sock 00:27:52.026 16:36:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # local max_retries=100 00:27:52.026 16:36:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:27:52.026 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:27:52.026 16:36:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # xtrace_disable 00:27:52.026 16:36:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:52.026 [2024-06-07 16:36:18.701488] Starting SPDK v24.09-pre git sha1 5a57befde / DPDK 24.03.0 initialization... 
00:27:52.026 [2024-06-07 16:36:18.701551] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3251415 ] 00:27:52.026 EAL: No free 2048 kB hugepages reported on node 1 00:27:52.026 [2024-06-07 16:36:18.764640] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:52.026 [2024-06-07 16:36:18.839482] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:27:52.967 16:36:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:27:52.967 16:36:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@863 -- # return 0 00:27:52.967 16:36:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:52.967 16:36:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:27:52.967 16:36:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:52.967 16:36:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:52.967 16:36:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:52.967 16:36:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:27:52.967 16:36:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:52.967 16:36:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:52.967 16:36:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:52.967 16:36:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s 
/tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:27:52.967 16:36:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:52.967 16:36:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:53.906 [2024-06-07 16:36:20.595564] bdev_nvme.c:6978:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:27:53.906 [2024-06-07 16:36:20.595585] bdev_nvme.c:7058:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:27:53.906 [2024-06-07 16:36:20.595599] bdev_nvme.c:6941:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:27:53.906 [2024-06-07 16:36:20.725999] bdev_nvme.c:6907:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:27:54.165 [2024-06-07 16:36:20.827621] bdev_nvme.c:7768:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:27:54.165 [2024-06-07 16:36:20.827672] bdev_nvme.c:7768:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:27:54.165 [2024-06-07 16:36:20.827693] bdev_nvme.c:7768:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:27:54.165 [2024-06-07 16:36:20.827707] bdev_nvme.c:6797:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:27:54.165 [2024-06-07 16:36:20.827729] bdev_nvme.c:6756:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:27:54.165 16:36:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:54.165 16:36:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:27:54.165 16:36:20 nvmf_tcp.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:54.165 [2024-06-07 16:36:20.834214] bdev_nvme.c:1614:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0xb27ea0 was disconnected and freed. delete nvme_qpair. 00:27:54.165 16:36:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:54.165 16:36:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:54.165 16:36:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:54.165 16:36:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:54.165 16:36:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:54.165 16:36:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:54.165 16:36:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:54.165 16:36:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:27:54.165 16:36:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:27:54.165 16:36:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:27:54.165 16:36:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:27:54.165 16:36:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:54.165 16:36:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:54.165 16:36:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:54.165 16:36:21 nvmf_tcp.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@560 -- # xtrace_disable 00:27:54.165 16:36:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:54.165 16:36:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:54.165 16:36:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:54.425 16:36:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:54.425 16:36:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:54.425 16:36:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:55.366 16:36:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:55.366 16:36:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:55.366 16:36:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:55.366 16:36:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:55.366 16:36:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:55.366 16:36:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:55.366 16:36:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:55.366 16:36:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:55.366 16:36:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:55.366 16:36:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:56.311 16:36:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:56.311 16:36:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 
-- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:56.311 16:36:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:56.311 16:36:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:56.311 16:36:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:56.311 16:36:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:56.311 16:36:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:56.311 16:36:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:56.571 16:36:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:56.571 16:36:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:57.512 16:36:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:57.512 16:36:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:57.512 16:36:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:57.512 16:36:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:57.512 16:36:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:57.512 16:36:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:57.512 16:36:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:57.512 16:36:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:57.512 16:36:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:57.512 16:36:24 nvmf_tcp.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:58.453 16:36:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:58.453 16:36:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:58.453 16:36:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:58.453 16:36:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:58.454 16:36:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:58.454 16:36:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:58.454 16:36:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:58.454 16:36:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:58.454 16:36:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:58.454 16:36:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:59.839 [2024-06-07 16:36:26.268125] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 429:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:27:59.839 [2024-06-07 16:36:26.268168] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:59.839 [2024-06-07 16:36:26.268180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.839 [2024-06-07 16:36:26.268190] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:59.839 [2024-06-07 16:36:26.268197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.839 [2024-06-07 16:36:26.268206] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:59.839 [2024-06-07 16:36:26.268213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.839 [2024-06-07 16:36:26.268220] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:59.839 [2024-06-07 16:36:26.268227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.840 [2024-06-07 16:36:26.268235] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:27:59.840 [2024-06-07 16:36:26.268242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.840 [2024-06-07 16:36:26.268249] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaef220 is same with the state(5) to be set 00:27:59.840 [2024-06-07 16:36:26.278145] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xaef220 (9): Bad file descriptor 00:27:59.840 [2024-06-07 16:36:26.288188] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:27:59.840 16:36:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:59.840 16:36:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:59.840 16:36:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:59.840 16:36:26 nvmf_tcp.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:59.840 16:36:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:59.840 16:36:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:59.840 16:36:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:00.822 [2024-06-07 16:36:27.314434] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:28:00.822 [2024-06-07 16:36:27.314480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaef220 with addr=10.0.0.2, port=4420 00:28:00.822 [2024-06-07 16:36:27.314492] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaef220 is same with the state(5) to be set 00:28:00.822 [2024-06-07 16:36:27.314516] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xaef220 (9): Bad file descriptor 00:28:00.822 [2024-06-07 16:36:27.314851] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:00.822 [2024-06-07 16:36:27.314869] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:28:00.822 [2024-06-07 16:36:27.314876] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:28:00.822 [2024-06-07 16:36:27.314884] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:28:00.822 [2024-06-07 16:36:27.314899] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:00.822 [2024-06-07 16:36:27.314907] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:28:00.822 16:36:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:00.822 16:36:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:28:00.822 16:36:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:01.763 [2024-06-07 16:36:28.317289] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:01.763 [2024-06-07 16:36:28.317322] bdev_nvme.c:6729:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:28:01.764 [2024-06-07 16:36:28.317344] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:01.764 [2024-06-07 16:36:28.317353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.764 [2024-06-07 16:36:28.317363] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:01.764 [2024-06-07 16:36:28.317370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.764 [2024-06-07 16:36:28.317378] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:01.764 [2024-06-07 16:36:28.317385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.764 [2024-06-07 16:36:28.317392] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 
cdw11:00000000 00:28:01.764 [2024-06-07 16:36:28.317399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.764 [2024-06-07 16:36:28.317411] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:28:01.764 [2024-06-07 16:36:28.317418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.764 [2024-06-07 16:36:28.317425] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 00:28:01.764 [2024-06-07 16:36:28.317974] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xaee6b0 (9): Bad file descriptor 00:28:01.764 [2024-06-07 16:36:28.318985] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:28:01.764 [2024-06-07 16:36:28.318995] nvme_ctrlr.c:1149:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:28:01.764 16:36:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:01.764 16:36:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:01.764 16:36:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:01.764 16:36:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:01.764 16:36:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:01.764 16:36:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:01.764 16:36:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:01.764 16:36:28 nvmf_tcp.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:01.764 16:36:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:28:01.764 16:36:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:01.764 16:36:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:01.764 16:36:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:28:01.764 16:36:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:01.764 16:36:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:01.764 16:36:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:01.764 16:36:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:01.764 16:36:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:01.764 16:36:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:01.764 16:36:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:01.764 16:36:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:01.764 16:36:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:28:01.764 16:36:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:02.705 16:36:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:02.705 16:36:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:02.705 16:36:29 
nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:02.705 16:36:29 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:02.705 16:36:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:02.705 16:36:29 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:02.965 16:36:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:02.965 16:36:29 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:02.965 16:36:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:28:02.965 16:36:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:03.536 [2024-06-07 16:36:30.329640] bdev_nvme.c:6978:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:28:03.536 [2024-06-07 16:36:30.329657] bdev_nvme.c:7058:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:28:03.536 [2024-06-07 16:36:30.329670] bdev_nvme.c:6941:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:28:03.797 [2024-06-07 16:36:30.417970] bdev_nvme.c:6907:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:28:03.797 16:36:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:03.797 16:36:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:03.797 16:36:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:03.797 16:36:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:03.797 16:36:30 nvmf_tcp.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@10 -- # set +x 00:28:03.797 16:36:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:03.797 16:36:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:03.797 16:36:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:03.797 [2024-06-07 16:36:30.642303] bdev_nvme.c:7768:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:28:03.797 [2024-06-07 16:36:30.642345] bdev_nvme.c:7768:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:28:03.797 [2024-06-07 16:36:30.642365] bdev_nvme.c:7768:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:28:03.797 [2024-06-07 16:36:30.642379] bdev_nvme.c:6797:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:28:03.797 [2024-06-07 16:36:30.642387] bdev_nvme.c:6756:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:28:03.797 [2024-06-07 16:36:30.648118] bdev_nvme.c:1614:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0xafecc0 was disconnected and freed. delete nvme_qpair. 
00:28:04.058 16:36:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:28:04.058 16:36:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:04.999 16:36:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:04.999 16:36:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:04.999 16:36:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:04.999 16:36:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:04.999 16:36:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:05.000 16:36:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:05.000 16:36:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:05.000 16:36:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:05.000 16:36:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:28:05.000 16:36:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:28:05.000 16:36:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 3251415 00:28:05.000 16:36:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@949 -- # '[' -z 3251415 ']' 00:28:05.000 16:36:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # kill -0 3251415 00:28:05.000 16:36:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # uname 00:28:05.000 16:36:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:28:05.000 16:36:31 nvmf_tcp.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3251415 00:28:05.000 16:36:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:28:05.000 16:36:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:28:05.000 16:36:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3251415' 00:28:05.000 killing process with pid 3251415 00:28:05.000 16:36:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@968 -- # kill 3251415 00:28:05.000 16:36:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # wait 3251415 00:28:05.261 16:36:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:28:05.261 16:36:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:05.261 16:36:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@117 -- # sync 00:28:05.261 16:36:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:05.261 16:36:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@120 -- # set +e 00:28:05.261 16:36:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:05.261 16:36:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:05.261 rmmod nvme_tcp 00:28:05.261 rmmod nvme_fabrics 00:28:05.261 rmmod nvme_keyring 00:28:05.261 16:36:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:05.261 16:36:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set -e 00:28:05.261 16:36:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # return 0 00:28:05.261 16:36:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # '[' -n 3251278 ']' 00:28:05.261 16:36:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # killprocess 3251278 
00:28:05.261 16:36:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@949 -- # '[' -z 3251278 ']' 00:28:05.261 16:36:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # kill -0 3251278 00:28:05.261 16:36:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # uname 00:28:05.261 16:36:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:28:05.261 16:36:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3251278 00:28:05.261 16:36:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:28:05.261 16:36:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:28:05.261 16:36:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3251278' 00:28:05.261 killing process with pid 3251278 00:28:05.261 16:36:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@968 -- # kill 3251278 00:28:05.261 16:36:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # wait 3251278 00:28:05.521 16:36:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:05.521 16:36:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:05.521 16:36:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:05.521 16:36:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:05.521 16:36:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:05.521 16:36:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:05.521 16:36:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 
00:28:05.521 16:36:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:07.432 16:36:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:07.432 00:28:07.432 real 0m23.877s 00:28:07.432 user 0m29.287s 00:28:07.432 sys 0m6.566s 00:28:07.432 16:36:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:28:07.432 16:36:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:07.432 ************************************ 00:28:07.432 END TEST nvmf_discovery_remove_ifc 00:28:07.432 ************************************ 00:28:07.432 16:36:34 nvmf_tcp -- nvmf/nvmf.sh@105 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:28:07.432 16:36:34 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:28:07.432 16:36:34 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:28:07.432 16:36:34 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:07.432 ************************************ 00:28:07.432 START TEST nvmf_identify_kernel_target 00:28:07.432 ************************************ 00:28:07.432 16:36:34 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:28:07.694 * Looking for test storage... 
00:28:07.694 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:07.694 16:36:34 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:07.694 16:36:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:28:07.694 16:36:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:07.694 16:36:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:07.694 16:36:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:07.694 16:36:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:07.694 16:36:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:07.694 16:36:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:07.694 16:36:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:07.694 16:36:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:07.694 16:36:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:07.694 16:36:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:07.694 16:36:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:28:07.694 16:36:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:28:07.694 16:36:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:07.694 16:36:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme 
connect' 00:28:07.694 16:36:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:07.694 16:36:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:07.694 16:36:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:07.694 16:36:34 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:07.694 16:36:34 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:07.694 16:36:34 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:07.694 16:36:34 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:07.695 16:36:34 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:07.695 16:36:34 
nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:07.695 16:36:34 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:28:07.695 16:36:34 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:07.695 16:36:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:28:07.695 16:36:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:07.695 16:36:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:07.695 16:36:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:07.695 16:36:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:07.695 16:36:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:07.695 16:36:34 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:07.695 16:36:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:07.695 16:36:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:07.695 16:36:34 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:28:07.695 16:36:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:07.695 16:36:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:07.695 16:36:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:07.695 16:36:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:07.695 16:36:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:07.695 16:36:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:07.695 16:36:34 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:07.695 16:36:34 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:07.695 16:36:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:07.695 16:36:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:07.695 16:36:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@285 -- # xtrace_disable 00:28:07.695 16:36:34 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:28:14.286 16:36:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:14.286 16:36:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # pci_devs=() 00:28:14.286 16:36:41 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:14.286 16:36:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:14.286 16:36:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:14.286 16:36:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:14.286 16:36:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:14.286 16:36:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # net_devs=() 00:28:14.286 16:36:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:14.286 16:36:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # e810=() 00:28:14.286 16:36:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # local -ga e810 00:28:14.286 16:36:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # x722=() 00:28:14.286 16:36:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # local -ga x722 00:28:14.286 16:36:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # mlx=() 00:28:14.286 16:36:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # local -ga mlx 00:28:14.286 16:36:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:14.286 16:36:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:14.286 16:36:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:14.286 16:36:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:14.286 16:36:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:14.286 16:36:41 nvmf_tcp.nvmf_identify_kernel_target -- 
nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:14.286 16:36:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:14.286 16:36:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:14.286 16:36:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:14.286 16:36:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:14.286 16:36:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:14.286 16:36:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:14.286 16:36:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:14.286 16:36:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:14.286 16:36:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:14.286 16:36:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:14.286 16:36:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:14.286 16:36:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:14.286 16:36:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:28:14.286 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:28:14.286 16:36:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:14.286 16:36:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:14.286 16:36:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:14.286 16:36:41 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:14.286 16:36:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:14.286 16:36:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:14.286 16:36:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:28:14.286 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:28:14.286 16:36:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:14.286 16:36:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:14.286 16:36:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:14.286 16:36:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:14.286 16:36:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:14.286 16:36:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:14.286 16:36:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:14.286 16:36:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:14.286 16:36:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:14.286 16:36:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:14.286 16:36:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:14.286 16:36:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:14.286 16:36:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:14.286 16:36:41 nvmf_tcp.nvmf_identify_kernel_target -- 
nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:14.286 16:36:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:14.286 16:36:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:28:14.286 Found net devices under 0000:4b:00.0: cvl_0_0 00:28:14.286 16:36:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:14.286 16:36:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:14.286 16:36:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:14.286 16:36:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:14.286 16:36:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:14.286 16:36:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:14.286 16:36:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:14.286 16:36:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:14.286 16:36:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:28:14.286 Found net devices under 0000:4b:00.1: cvl_0_1 00:28:14.286 16:36:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:14.286 16:36:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:14.286 16:36:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # is_hw=yes 00:28:14.286 16:36:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:14.286 16:36:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 
00:28:14.286 16:36:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:14.286 16:36:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:14.286 16:36:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:14.286 16:36:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:14.286 16:36:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:14.286 16:36:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:14.286 16:36:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:14.286 16:36:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:14.286 16:36:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:14.286 16:36:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:14.286 16:36:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:14.286 16:36:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:14.286 16:36:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:14.286 16:36:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:14.547 16:36:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:14.547 16:36:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:14.547 16:36:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # ip link 
set cvl_0_1 up 00:28:14.547 16:36:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:14.547 16:36:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:14.547 16:36:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:14.547 16:36:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:14.547 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:14.547 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.743 ms 00:28:14.547 00:28:14.547 --- 10.0.0.2 ping statistics --- 00:28:14.548 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:14.548 rtt min/avg/max/mdev = 0.743/0.743/0.743/0.000 ms 00:28:14.548 16:36:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:14.548 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:14.548 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.312 ms 00:28:14.548 00:28:14.548 --- 10.0.0.1 ping statistics --- 00:28:14.548 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:14.548 rtt min/avg/max/mdev = 0.312/0.312/0.312/0.000 ms 00:28:14.548 16:36:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:14.548 16:36:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # return 0 00:28:14.548 16:36:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:14.548 16:36:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:14.548 16:36:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:14.548 16:36:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:14.548 16:36:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:14.548 16:36:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:14.548 16:36:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:14.808 16:36:41 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:28:14.808 16:36:41 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:28:14.808 16:36:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # local ip 00:28:14.808 16:36:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip_candidates=() 00:28:14.808 16:36:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # local -A ip_candidates 00:28:14.808 16:36:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:14.808 16:36:41 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@751 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:14.808 16:36:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@753 -- # [[ -z tcp ]] 00:28:14.808 16:36:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@753 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:14.808 16:36:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@754 -- # ip=NVMF_INITIATOR_IP 00:28:14.808 16:36:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@756 -- # [[ -z 10.0.0.1 ]] 00:28:14.808 16:36:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@761 -- # echo 10.0.0.1 00:28:14.808 16:36:41 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:28:14.808 16:36:41 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:28:14.808 16:36:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 nvmf_port=4420 00:28:14.808 16:36:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:28:14.808 16:36:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:28:14.808 16:36:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:28:14.808 16:36:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:28:14.808 16:36:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # local block nvme 00:28:14.808 16:36:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:28:14.808 16:36:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:28:14.808 16:36:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:28:14.808 16:36:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:28:17.355 Waiting for block devices as requested 00:28:17.615 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:28:17.615 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:28:17.615 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:28:17.875 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:28:17.875 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:28:17.875 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:28:18.135 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:28:18.135 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:28:18.135 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:28:18.394 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:28:18.394 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:28:18.394 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:28:18.655 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:28:18.655 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:28:18.655 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:28:18.655 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:28:18.915 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:28:19.177 16:36:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:28:19.177 16:36:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:28:19.177 16:36:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:28:19.177 16:36:45 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1661 -- # local device=nvme0n1 00:28:19.177 16:36:45 nvmf_tcp.nvmf_identify_kernel_target -- 
common/autotest_common.sh@1663 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:28:19.177 16:36:45 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ none != none ]] 00:28:19.177 16:36:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:28:19.177 16:36:45 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:28:19.177 16:36:45 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:28:19.177 No valid GPT data, bailing 00:28:19.177 16:36:45 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:28:19.177 16:36:45 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:28:19.177 16:36:45 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:28:19.177 16:36:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:28:19.177 16:36:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:28:19.177 16:36:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@657 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:28:19.177 16:36:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:28:19.177 16:36:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # echo SPDK-test 00:28:19.177 16:36:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # echo 1 00:28:19.177 16:36:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # [[ -b /dev/nvme0n1 ]] 00:28:19.177 16:36:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo /dev/nvme0n1 00:28:19.177 16:36:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo 1 
00:28:19.177 16:36:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@676 -- # echo 10.0.0.1 00:28:19.177 16:36:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # echo tcp 00:28:19.177 16:36:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # echo 4420 00:28:19.177 16:36:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # echo ipv4 00:28:19.177 16:36:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@682 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:28:19.177 16:36:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@685 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.1 -t tcp -s 4420 00:28:19.177 00:28:19.177 Discovery Log Number of Records 2, Generation counter 2 00:28:19.177 =====Discovery Log Entry 0====== 00:28:19.177 trtype: tcp 00:28:19.177 adrfam: ipv4 00:28:19.177 subtype: current discovery subsystem 00:28:19.177 treq: not specified, sq flow control disable supported 00:28:19.177 portid: 1 00:28:19.177 trsvcid: 4420 00:28:19.177 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:28:19.177 traddr: 10.0.0.1 00:28:19.177 eflags: none 00:28:19.177 sectype: none 00:28:19.177 =====Discovery Log Entry 1====== 00:28:19.177 trtype: tcp 00:28:19.177 adrfam: ipv4 00:28:19.177 subtype: nvme subsystem 00:28:19.177 treq: not specified, sq flow control disable supported 00:28:19.177 portid: 1 00:28:19.177 trsvcid: 4420 00:28:19.177 subnqn: nqn.2016-06.io.spdk:testnqn 00:28:19.177 traddr: 10.0.0.1 00:28:19.177 eflags: none 00:28:19.177 sectype: none 00:28:19.177 16:36:46 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:28:19.177 trsvcid:4420 
subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:28:19.440 EAL: No free 2048 kB hugepages reported on node 1 00:28:19.440 ===================================================== 00:28:19.440 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:28:19.440 ===================================================== 00:28:19.440 Controller Capabilities/Features 00:28:19.440 ================================ 00:28:19.440 Vendor ID: 0000 00:28:19.440 Subsystem Vendor ID: 0000 00:28:19.440 Serial Number: f4abc055d903b0a74267 00:28:19.440 Model Number: Linux 00:28:19.440 Firmware Version: 6.7.0-68 00:28:19.440 Recommended Arb Burst: 0 00:28:19.440 IEEE OUI Identifier: 00 00 00 00:28:19.440 Multi-path I/O 00:28:19.440 May have multiple subsystem ports: No 00:28:19.440 May have multiple controllers: No 00:28:19.440 Associated with SR-IOV VF: No 00:28:19.440 Max Data Transfer Size: Unlimited 00:28:19.440 Max Number of Namespaces: 0 00:28:19.440 Max Number of I/O Queues: 1024 00:28:19.440 NVMe Specification Version (VS): 1.3 00:28:19.440 NVMe Specification Version (Identify): 1.3 00:28:19.440 Maximum Queue Entries: 1024 00:28:19.440 Contiguous Queues Required: No 00:28:19.440 Arbitration Mechanisms Supported 00:28:19.440 Weighted Round Robin: Not Supported 00:28:19.440 Vendor Specific: Not Supported 00:28:19.440 Reset Timeout: 7500 ms 00:28:19.440 Doorbell Stride: 4 bytes 00:28:19.440 NVM Subsystem Reset: Not Supported 00:28:19.440 Command Sets Supported 00:28:19.440 NVM Command Set: Supported 00:28:19.440 Boot Partition: Not Supported 00:28:19.440 Memory Page Size Minimum: 4096 bytes 00:28:19.440 Memory Page Size Maximum: 4096 bytes 00:28:19.440 Persistent Memory Region: Not Supported 00:28:19.440 Optional Asynchronous Events Supported 00:28:19.440 Namespace Attribute Notices: Not Supported 00:28:19.440 Firmware Activation Notices: Not Supported 00:28:19.440 ANA Change Notices: Not Supported 00:28:19.440 PLE Aggregate Log Change Notices: Not Supported 
00:28:19.440 LBA Status Info Alert Notices: Not Supported 00:28:19.440 EGE Aggregate Log Change Notices: Not Supported 00:28:19.440 Normal NVM Subsystem Shutdown event: Not Supported 00:28:19.440 Zone Descriptor Change Notices: Not Supported 00:28:19.440 Discovery Log Change Notices: Supported 00:28:19.440 Controller Attributes 00:28:19.440 128-bit Host Identifier: Not Supported 00:28:19.440 Non-Operational Permissive Mode: Not Supported 00:28:19.440 NVM Sets: Not Supported 00:28:19.440 Read Recovery Levels: Not Supported 00:28:19.440 Endurance Groups: Not Supported 00:28:19.440 Predictable Latency Mode: Not Supported 00:28:19.440 Traffic Based Keep ALive: Not Supported 00:28:19.440 Namespace Granularity: Not Supported 00:28:19.440 SQ Associations: Not Supported 00:28:19.440 UUID List: Not Supported 00:28:19.440 Multi-Domain Subsystem: Not Supported 00:28:19.440 Fixed Capacity Management: Not Supported 00:28:19.440 Variable Capacity Management: Not Supported 00:28:19.440 Delete Endurance Group: Not Supported 00:28:19.440 Delete NVM Set: Not Supported 00:28:19.440 Extended LBA Formats Supported: Not Supported 00:28:19.440 Flexible Data Placement Supported: Not Supported 00:28:19.440 00:28:19.440 Controller Memory Buffer Support 00:28:19.440 ================================ 00:28:19.440 Supported: No 00:28:19.440 00:28:19.440 Persistent Memory Region Support 00:28:19.440 ================================ 00:28:19.440 Supported: No 00:28:19.440 00:28:19.440 Admin Command Set Attributes 00:28:19.440 ============================ 00:28:19.440 Security Send/Receive: Not Supported 00:28:19.440 Format NVM: Not Supported 00:28:19.440 Firmware Activate/Download: Not Supported 00:28:19.440 Namespace Management: Not Supported 00:28:19.440 Device Self-Test: Not Supported 00:28:19.440 Directives: Not Supported 00:28:19.440 NVMe-MI: Not Supported 00:28:19.440 Virtualization Management: Not Supported 00:28:19.440 Doorbell Buffer Config: Not Supported 00:28:19.440 Get LBA Status 
Capability: Not Supported 00:28:19.440 Command & Feature Lockdown Capability: Not Supported 00:28:19.440 Abort Command Limit: 1 00:28:19.440 Async Event Request Limit: 1 00:28:19.440 Number of Firmware Slots: N/A 00:28:19.440 Firmware Slot 1 Read-Only: N/A 00:28:19.440 Firmware Activation Without Reset: N/A 00:28:19.440 Multiple Update Detection Support: N/A 00:28:19.440 Firmware Update Granularity: No Information Provided 00:28:19.440 Per-Namespace SMART Log: No 00:28:19.440 Asymmetric Namespace Access Log Page: Not Supported 00:28:19.440 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:28:19.440 Command Effects Log Page: Not Supported 00:28:19.440 Get Log Page Extended Data: Supported 00:28:19.440 Telemetry Log Pages: Not Supported 00:28:19.440 Persistent Event Log Pages: Not Supported 00:28:19.440 Supported Log Pages Log Page: May Support 00:28:19.440 Commands Supported & Effects Log Page: Not Supported 00:28:19.440 Feature Identifiers & Effects Log Page:May Support 00:28:19.440 NVMe-MI Commands & Effects Log Page: May Support 00:28:19.440 Data Area 4 for Telemetry Log: Not Supported 00:28:19.441 Error Log Page Entries Supported: 1 00:28:19.441 Keep Alive: Not Supported 00:28:19.441 00:28:19.441 NVM Command Set Attributes 00:28:19.441 ========================== 00:28:19.441 Submission Queue Entry Size 00:28:19.441 Max: 1 00:28:19.441 Min: 1 00:28:19.441 Completion Queue Entry Size 00:28:19.441 Max: 1 00:28:19.441 Min: 1 00:28:19.441 Number of Namespaces: 0 00:28:19.441 Compare Command: Not Supported 00:28:19.441 Write Uncorrectable Command: Not Supported 00:28:19.441 Dataset Management Command: Not Supported 00:28:19.441 Write Zeroes Command: Not Supported 00:28:19.441 Set Features Save Field: Not Supported 00:28:19.441 Reservations: Not Supported 00:28:19.441 Timestamp: Not Supported 00:28:19.441 Copy: Not Supported 00:28:19.441 Volatile Write Cache: Not Present 00:28:19.441 Atomic Write Unit (Normal): 1 00:28:19.441 Atomic Write Unit (PFail): 1 
00:28:19.441 Atomic Compare & Write Unit: 1 00:28:19.441 Fused Compare & Write: Not Supported 00:28:19.441 Scatter-Gather List 00:28:19.441 SGL Command Set: Supported 00:28:19.441 SGL Keyed: Not Supported 00:28:19.441 SGL Bit Bucket Descriptor: Not Supported 00:28:19.441 SGL Metadata Pointer: Not Supported 00:28:19.441 Oversized SGL: Not Supported 00:28:19.441 SGL Metadata Address: Not Supported 00:28:19.441 SGL Offset: Supported 00:28:19.441 Transport SGL Data Block: Not Supported 00:28:19.441 Replay Protected Memory Block: Not Supported 00:28:19.441 00:28:19.441 Firmware Slot Information 00:28:19.441 ========================= 00:28:19.441 Active slot: 0 00:28:19.441 00:28:19.441 00:28:19.441 Error Log 00:28:19.441 ========= 00:28:19.441 00:28:19.441 Active Namespaces 00:28:19.441 ================= 00:28:19.441 Discovery Log Page 00:28:19.441 ================== 00:28:19.441 Generation Counter: 2 00:28:19.441 Number of Records: 2 00:28:19.441 Record Format: 0 00:28:19.441 00:28:19.441 Discovery Log Entry 0 00:28:19.441 ---------------------- 00:28:19.441 Transport Type: 3 (TCP) 00:28:19.441 Address Family: 1 (IPv4) 00:28:19.441 Subsystem Type: 3 (Current Discovery Subsystem) 00:28:19.441 Entry Flags: 00:28:19.441 Duplicate Returned Information: 0 00:28:19.441 Explicit Persistent Connection Support for Discovery: 0 00:28:19.441 Transport Requirements: 00:28:19.441 Secure Channel: Not Specified 00:28:19.441 Port ID: 1 (0x0001) 00:28:19.441 Controller ID: 65535 (0xffff) 00:28:19.441 Admin Max SQ Size: 32 00:28:19.441 Transport Service Identifier: 4420 00:28:19.441 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:28:19.441 Transport Address: 10.0.0.1 00:28:19.441 Discovery Log Entry 1 00:28:19.441 ---------------------- 00:28:19.441 Transport Type: 3 (TCP) 00:28:19.441 Address Family: 1 (IPv4) 00:28:19.441 Subsystem Type: 2 (NVM Subsystem) 00:28:19.441 Entry Flags: 00:28:19.441 Duplicate Returned Information: 0 00:28:19.441 Explicit Persistent 
Connection Support for Discovery: 0 00:28:19.441 Transport Requirements: 00:28:19.441 Secure Channel: Not Specified 00:28:19.441 Port ID: 1 (0x0001) 00:28:19.441 Controller ID: 65535 (0xffff) 00:28:19.441 Admin Max SQ Size: 32 00:28:19.441 Transport Service Identifier: 4420 00:28:19.441 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:28:19.441 Transport Address: 10.0.0.1 00:28:19.441 16:36:46 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:28:19.441 EAL: No free 2048 kB hugepages reported on node 1 00:28:19.441 get_feature(0x01) failed 00:28:19.441 get_feature(0x02) failed 00:28:19.441 get_feature(0x04) failed 00:28:19.441 ===================================================== 00:28:19.441 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:28:19.441 ===================================================== 00:28:19.441 Controller Capabilities/Features 00:28:19.441 ================================ 00:28:19.441 Vendor ID: 0000 00:28:19.441 Subsystem Vendor ID: 0000 00:28:19.441 Serial Number: 72ba1b5d2823caebc552 00:28:19.441 Model Number: SPDK-test 00:28:19.441 Firmware Version: 6.7.0-68 00:28:19.441 Recommended Arb Burst: 6 00:28:19.441 IEEE OUI Identifier: 00 00 00 00:28:19.441 Multi-path I/O 00:28:19.441 May have multiple subsystem ports: Yes 00:28:19.441 May have multiple controllers: Yes 00:28:19.441 Associated with SR-IOV VF: No 00:28:19.441 Max Data Transfer Size: Unlimited 00:28:19.441 Max Number of Namespaces: 1024 00:28:19.441 Max Number of I/O Queues: 128 00:28:19.441 NVMe Specification Version (VS): 1.3 00:28:19.441 NVMe Specification Version (Identify): 1.3 00:28:19.441 Maximum Queue Entries: 1024 00:28:19.441 Contiguous Queues Required: No 00:28:19.441 Arbitration Mechanisms Supported 00:28:19.441 Weighted Round 
Robin: Not Supported 00:28:19.441 Vendor Specific: Not Supported 00:28:19.441 Reset Timeout: 7500 ms 00:28:19.441 Doorbell Stride: 4 bytes 00:28:19.441 NVM Subsystem Reset: Not Supported 00:28:19.441 Command Sets Supported 00:28:19.441 NVM Command Set: Supported 00:28:19.441 Boot Partition: Not Supported 00:28:19.441 Memory Page Size Minimum: 4096 bytes 00:28:19.441 Memory Page Size Maximum: 4096 bytes 00:28:19.441 Persistent Memory Region: Not Supported 00:28:19.441 Optional Asynchronous Events Supported 00:28:19.441 Namespace Attribute Notices: Supported 00:28:19.441 Firmware Activation Notices: Not Supported 00:28:19.441 ANA Change Notices: Supported 00:28:19.441 PLE Aggregate Log Change Notices: Not Supported 00:28:19.441 LBA Status Info Alert Notices: Not Supported 00:28:19.441 EGE Aggregate Log Change Notices: Not Supported 00:28:19.441 Normal NVM Subsystem Shutdown event: Not Supported 00:28:19.441 Zone Descriptor Change Notices: Not Supported 00:28:19.441 Discovery Log Change Notices: Not Supported 00:28:19.441 Controller Attributes 00:28:19.441 128-bit Host Identifier: Supported 00:28:19.441 Non-Operational Permissive Mode: Not Supported 00:28:19.441 NVM Sets: Not Supported 00:28:19.441 Read Recovery Levels: Not Supported 00:28:19.441 Endurance Groups: Not Supported 00:28:19.441 Predictable Latency Mode: Not Supported 00:28:19.441 Traffic Based Keep ALive: Supported 00:28:19.441 Namespace Granularity: Not Supported 00:28:19.441 SQ Associations: Not Supported 00:28:19.441 UUID List: Not Supported 00:28:19.441 Multi-Domain Subsystem: Not Supported 00:28:19.441 Fixed Capacity Management: Not Supported 00:28:19.441 Variable Capacity Management: Not Supported 00:28:19.441 Delete Endurance Group: Not Supported 00:28:19.441 Delete NVM Set: Not Supported 00:28:19.441 Extended LBA Formats Supported: Not Supported 00:28:19.441 Flexible Data Placement Supported: Not Supported 00:28:19.441 00:28:19.441 Controller Memory Buffer Support 00:28:19.441 
================================ 00:28:19.441 Supported: No 00:28:19.441 00:28:19.441 Persistent Memory Region Support 00:28:19.441 ================================ 00:28:19.441 Supported: No 00:28:19.441 00:28:19.441 Admin Command Set Attributes 00:28:19.441 ============================ 00:28:19.441 Security Send/Receive: Not Supported 00:28:19.441 Format NVM: Not Supported 00:28:19.441 Firmware Activate/Download: Not Supported 00:28:19.442 Namespace Management: Not Supported 00:28:19.442 Device Self-Test: Not Supported 00:28:19.442 Directives: Not Supported 00:28:19.442 NVMe-MI: Not Supported 00:28:19.442 Virtualization Management: Not Supported 00:28:19.442 Doorbell Buffer Config: Not Supported 00:28:19.442 Get LBA Status Capability: Not Supported 00:28:19.442 Command & Feature Lockdown Capability: Not Supported 00:28:19.442 Abort Command Limit: 4 00:28:19.442 Async Event Request Limit: 4 00:28:19.442 Number of Firmware Slots: N/A 00:28:19.442 Firmware Slot 1 Read-Only: N/A 00:28:19.442 Firmware Activation Without Reset: N/A 00:28:19.442 Multiple Update Detection Support: N/A 00:28:19.442 Firmware Update Granularity: No Information Provided 00:28:19.442 Per-Namespace SMART Log: Yes 00:28:19.442 Asymmetric Namespace Access Log Page: Supported 00:28:19.442 ANA Transition Time : 10 sec 00:28:19.442 00:28:19.442 Asymmetric Namespace Access Capabilities 00:28:19.442 ANA Optimized State : Supported 00:28:19.442 ANA Non-Optimized State : Supported 00:28:19.442 ANA Inaccessible State : Supported 00:28:19.442 ANA Persistent Loss State : Supported 00:28:19.442 ANA Change State : Supported 00:28:19.442 ANAGRPID is not changed : No 00:28:19.442 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:28:19.442 00:28:19.442 ANA Group Identifier Maximum : 128 00:28:19.442 Number of ANA Group Identifiers : 128 00:28:19.442 Max Number of Allowed Namespaces : 1024 00:28:19.442 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:28:19.442 Command Effects Log Page: Supported 00:28:19.442 
Get Log Page Extended Data: Supported 00:28:19.442 Telemetry Log Pages: Not Supported 00:28:19.442 Persistent Event Log Pages: Not Supported 00:28:19.442 Supported Log Pages Log Page: May Support 00:28:19.442 Commands Supported & Effects Log Page: Not Supported 00:28:19.442 Feature Identifiers & Effects Log Page:May Support 00:28:19.442 NVMe-MI Commands & Effects Log Page: May Support 00:28:19.442 Data Area 4 for Telemetry Log: Not Supported 00:28:19.442 Error Log Page Entries Supported: 128 00:28:19.442 Keep Alive: Supported 00:28:19.442 Keep Alive Granularity: 1000 ms 00:28:19.442 00:28:19.442 NVM Command Set Attributes 00:28:19.442 ========================== 00:28:19.442 Submission Queue Entry Size 00:28:19.442 Max: 64 00:28:19.442 Min: 64 00:28:19.442 Completion Queue Entry Size 00:28:19.442 Max: 16 00:28:19.442 Min: 16 00:28:19.442 Number of Namespaces: 1024 00:28:19.442 Compare Command: Not Supported 00:28:19.442 Write Uncorrectable Command: Not Supported 00:28:19.442 Dataset Management Command: Supported 00:28:19.442 Write Zeroes Command: Supported 00:28:19.442 Set Features Save Field: Not Supported 00:28:19.442 Reservations: Not Supported 00:28:19.442 Timestamp: Not Supported 00:28:19.442 Copy: Not Supported 00:28:19.442 Volatile Write Cache: Present 00:28:19.442 Atomic Write Unit (Normal): 1 00:28:19.442 Atomic Write Unit (PFail): 1 00:28:19.442 Atomic Compare & Write Unit: 1 00:28:19.442 Fused Compare & Write: Not Supported 00:28:19.442 Scatter-Gather List 00:28:19.442 SGL Command Set: Supported 00:28:19.442 SGL Keyed: Not Supported 00:28:19.442 SGL Bit Bucket Descriptor: Not Supported 00:28:19.442 SGL Metadata Pointer: Not Supported 00:28:19.442 Oversized SGL: Not Supported 00:28:19.442 SGL Metadata Address: Not Supported 00:28:19.442 SGL Offset: Supported 00:28:19.442 Transport SGL Data Block: Not Supported 00:28:19.442 Replay Protected Memory Block: Not Supported 00:28:19.442 00:28:19.442 Firmware Slot Information 00:28:19.442 ========================= 
00:28:19.442 Active slot: 0 00:28:19.442 00:28:19.442 Asymmetric Namespace Access 00:28:19.442 =========================== 00:28:19.442 Change Count : 0 00:28:19.442 Number of ANA Group Descriptors : 1 00:28:19.442 ANA Group Descriptor : 0 00:28:19.442 ANA Group ID : 1 00:28:19.442 Number of NSID Values : 1 00:28:19.442 Change Count : 0 00:28:19.442 ANA State : 1 00:28:19.442 Namespace Identifier : 1 00:28:19.442 00:28:19.442 Commands Supported and Effects 00:28:19.442 ============================== 00:28:19.442 Admin Commands 00:28:19.442 -------------- 00:28:19.442 Get Log Page (02h): Supported 00:28:19.442 Identify (06h): Supported 00:28:19.442 Abort (08h): Supported 00:28:19.442 Set Features (09h): Supported 00:28:19.442 Get Features (0Ah): Supported 00:28:19.442 Asynchronous Event Request (0Ch): Supported 00:28:19.442 Keep Alive (18h): Supported 00:28:19.442 I/O Commands 00:28:19.442 ------------ 00:28:19.442 Flush (00h): Supported 00:28:19.442 Write (01h): Supported LBA-Change 00:28:19.442 Read (02h): Supported 00:28:19.442 Write Zeroes (08h): Supported LBA-Change 00:28:19.442 Dataset Management (09h): Supported 00:28:19.442 00:28:19.442 Error Log 00:28:19.442 ========= 00:28:19.442 Entry: 0 00:28:19.442 Error Count: 0x3 00:28:19.442 Submission Queue Id: 0x0 00:28:19.442 Command Id: 0x5 00:28:19.442 Phase Bit: 0 00:28:19.442 Status Code: 0x2 00:28:19.442 Status Code Type: 0x0 00:28:19.442 Do Not Retry: 1 00:28:19.442 Error Location: 0x28 00:28:19.442 LBA: 0x0 00:28:19.442 Namespace: 0x0 00:28:19.442 Vendor Log Page: 0x0 00:28:19.442 ----------- 00:28:19.442 Entry: 1 00:28:19.442 Error Count: 0x2 00:28:19.442 Submission Queue Id: 0x0 00:28:19.442 Command Id: 0x5 00:28:19.442 Phase Bit: 0 00:28:19.442 Status Code: 0x2 00:28:19.442 Status Code Type: 0x0 00:28:19.442 Do Not Retry: 1 00:28:19.442 Error Location: 0x28 00:28:19.442 LBA: 0x0 00:28:19.442 Namespace: 0x0 00:28:19.442 Vendor Log Page: 0x0 00:28:19.442 ----------- 00:28:19.442 Entry: 2 00:28:19.442 Error 
Count: 0x1 00:28:19.442 Submission Queue Id: 0x0 00:28:19.442 Command Id: 0x4 00:28:19.442 Phase Bit: 0 00:28:19.442 Status Code: 0x2 00:28:19.442 Status Code Type: 0x0 00:28:19.442 Do Not Retry: 1 00:28:19.442 Error Location: 0x28 00:28:19.442 LBA: 0x0 00:28:19.442 Namespace: 0x0 00:28:19.442 Vendor Log Page: 0x0 00:28:19.442 00:28:19.442 Number of Queues 00:28:19.442 ================ 00:28:19.442 Number of I/O Submission Queues: 128 00:28:19.442 Number of I/O Completion Queues: 128 00:28:19.442 00:28:19.442 ZNS Specific Controller Data 00:28:19.442 ============================ 00:28:19.442 Zone Append Size Limit: 0 00:28:19.442 00:28:19.442 00:28:19.442 Active Namespaces 00:28:19.442 ================= 00:28:19.442 get_feature(0x05) failed 00:28:19.442 Namespace ID:1 00:28:19.442 Command Set Identifier: NVM (00h) 00:28:19.442 Deallocate: Supported 00:28:19.442 Deallocated/Unwritten Error: Not Supported 00:28:19.442 Deallocated Read Value: Unknown 00:28:19.442 Deallocate in Write Zeroes: Not Supported 00:28:19.442 Deallocated Guard Field: 0xFFFF 00:28:19.442 Flush: Supported 00:28:19.442 Reservation: Not Supported 00:28:19.442 Namespace Sharing Capabilities: Multiple Controllers 00:28:19.442 Size (in LBAs): 3750748848 (1788GiB) 00:28:19.442 Capacity (in LBAs): 3750748848 (1788GiB) 00:28:19.442 Utilization (in LBAs): 3750748848 (1788GiB) 00:28:19.442 UUID: 247b6aec-bec5-46f4-ae06-94fa54f8b532 00:28:19.442 Thin Provisioning: Not Supported 00:28:19.442 Per-NS Atomic Units: Yes 00:28:19.442 Atomic Write Unit (Normal): 8 00:28:19.442 Atomic Write Unit (PFail): 8 00:28:19.442 Preferred Write Granularity: 8 00:28:19.442 Atomic Compare & Write Unit: 8 00:28:19.442 Atomic Boundary Size (Normal): 0 00:28:19.442 Atomic Boundary Size (PFail): 0 00:28:19.442 Atomic Boundary Offset: 0 00:28:19.442 NGUID/EUI64 Never Reused: No 00:28:19.442 ANA group ID: 1 00:28:19.442 Namespace Write Protected: No 00:28:19.442 Number of LBA Formats: 1 00:28:19.442 Current LBA Format: LBA Format 
#00 00:28:19.442 LBA Format #00: Data Size: 512 Metadata Size: 0 00:28:19.442 00:28:19.442 16:36:46 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:28:19.442 16:36:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:19.442 16:36:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:28:19.442 16:36:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:19.442 16:36:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:28:19.442 16:36:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:19.442 16:36:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:19.442 rmmod nvme_tcp 00:28:19.442 rmmod nvme_fabrics 00:28:19.442 16:36:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:19.442 16:36:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:28:19.442 16:36:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:28:19.442 16:36:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:28:19.443 16:36:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:19.443 16:36:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:19.443 16:36:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:19.443 16:36:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:19.443 16:36:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:19.443 16:36:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:19.443 16:36:46 nvmf_tcp.nvmf_identify_kernel_target -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:19.443 16:36:46 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:21.984 16:36:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:21.984 16:36:48 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:28:21.984 16:36:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:28:21.984 16:36:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # echo 0 00:28:21.984 16:36:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:28:21.984 16:36:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@694 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:28:21.984 16:36:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:28:21.984 16:36:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:28:21.984 16:36:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # modules=(/sys/module/nvmet/holders/*) 00:28:21.984 16:36:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # modprobe -r nvmet_tcp nvmet 00:28:21.984 16:36:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # modprobe -r null_blk 00:28:21.984 16:36:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@704 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:28:24.574 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:28:24.574 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:28:24.574 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:28:24.574 0000:80:01.5 (8086 0b00): ioatdma 
-> vfio-pci 00:28:24.574 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:28:24.574 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:28:24.574 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:28:24.574 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:28:24.574 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:28:24.574 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:28:24.574 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:28:24.574 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:28:24.574 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:28:24.574 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:28:24.574 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:28:24.574 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:28:24.574 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:28:24.859 00:28:24.859 real 0m17.418s 00:28:24.859 user 0m4.235s 00:28:24.859 sys 0m10.002s 00:28:24.859 16:36:51 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1125 -- # xtrace_disable 00:28:24.859 16:36:51 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:28:24.859 ************************************ 00:28:24.859 END TEST nvmf_identify_kernel_target 00:28:24.859 ************************************ 00:28:25.121 16:36:51 nvmf_tcp -- nvmf/nvmf.sh@106 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:28:25.121 16:36:51 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:28:25.121 16:36:51 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:28:25.121 16:36:51 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:25.121 ************************************ 00:28:25.121 START TEST nvmf_auth_host 00:28:25.121 ************************************ 00:28:25.121 16:36:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:28:25.121 * Looking for test 
storage... 00:28:25.121 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:25.121 16:36:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:25.121 16:36:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:28:25.121 16:36:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:25.121 16:36:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:25.121 16:36:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:25.121 16:36:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:25.122 16:36:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:25.122 16:36:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:25.122 16:36:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:25.122 16:36:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:25.122 16:36:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:25.122 16:36:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:25.122 16:36:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:28:25.122 16:36:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:28:25.122 16:36:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:25.122 16:36:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:25.122 16:36:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:25.122 16:36:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
00:28:25.122 16:36:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:25.122 16:36:51 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:25.122 16:36:51 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:25.122 16:36:51 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:25.122 16:36:51 nvmf_tcp.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:25.122 16:36:51 nvmf_tcp.nvmf_auth_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:25.122 16:36:51 nvmf_tcp.nvmf_auth_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:25.122 16:36:51 nvmf_tcp.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:28:25.122 16:36:51 nvmf_tcp.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:25.122 16:36:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@47 -- # : 0 00:28:25.122 16:36:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:25.122 16:36:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:25.122 16:36:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:25.122 16:36:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:25.122 16:36:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:25.122 16:36:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:25.122 16:36:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:25.122 
16:36:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:25.122 16:36:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:28:25.122 16:36:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:28:25.122 16:36:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:28:25.122 16:36:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:28:25.122 16:36:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:28:25.122 16:36:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:28:25.122 16:36:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:28:25.122 16:36:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:28:25.122 16:36:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:28:25.122 16:36:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:25.122 16:36:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:25.122 16:36:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:25.122 16:36:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:25.122 16:36:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:25.122 16:36:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:25.122 16:36:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:25.122 16:36:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:25.122 16:36:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:25.122 16:36:51 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:25.122 16:36:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@285 -- # xtrace_disable 00:28:25.122 16:36:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:33.260 16:36:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:33.260 16:36:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # pci_devs=() 00:28:33.260 16:36:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:33.260 16:36:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:33.260 16:36:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:33.260 16:36:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:33.260 16:36:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:33.260 16:36:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # net_devs=() 00:28:33.260 16:36:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:33.260 16:36:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # e810=() 00:28:33.260 16:36:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # local -ga e810 00:28:33.260 16:36:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # x722=() 00:28:33.260 16:36:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # local -ga x722 00:28:33.260 16:36:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@298 -- # mlx=() 00:28:33.260 16:36:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@298 -- # local -ga mlx 00:28:33.261 16:36:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:33.261 16:36:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:33.261 16:36:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:33.261 16:36:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:33.261 16:36:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:33.261 16:36:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:33.261 16:36:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:33.261 16:36:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:33.261 16:36:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:33.261 16:36:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:33.261 16:36:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:33.261 16:36:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:33.261 16:36:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:33.261 16:36:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:33.261 16:36:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:33.261 16:36:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:33.261 16:36:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:33.261 16:36:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:33.261 16:36:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:28:33.261 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:28:33.261 16:36:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:33.261 16:36:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:33.261 16:36:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:33.261 16:36:58 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:33.261 16:36:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:33.261 16:36:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:33.261 16:36:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:28:33.261 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:28:33.261 16:36:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:33.261 16:36:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:33.261 16:36:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:33.261 16:36:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:33.261 16:36:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:33.261 16:36:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:33.261 16:36:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:33.261 16:36:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:33.261 16:36:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:33.261 16:36:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:33.261 16:36:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:33.261 16:36:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:33.261 16:36:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:33.261 16:36:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:33.261 16:36:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:33.261 16:36:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices 
under 0000:4b:00.0: cvl_0_0' 00:28:33.261 Found net devices under 0000:4b:00.0: cvl_0_0 00:28:33.261 16:36:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:33.261 16:36:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:33.261 16:36:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:33.261 16:36:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:33.261 16:36:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:33.261 16:36:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:33.261 16:36:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:33.261 16:36:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:33.261 16:36:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:28:33.261 Found net devices under 0000:4b:00.1: cvl_0_1 00:28:33.261 16:36:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:33.261 16:36:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:33.261 16:36:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # is_hw=yes 00:28:33.261 16:36:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:33.261 16:36:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:33.261 16:36:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:33.261 16:36:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:33.261 16:36:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:33.261 16:36:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:33.261 16:36:58 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:33.261 16:36:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:33.261 16:36:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:33.261 16:36:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:33.261 16:36:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:33.261 16:36:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:33.261 16:36:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:33.261 16:36:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:33.261 16:36:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:33.261 16:36:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:33.261 16:36:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:33.261 16:36:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:33.261 16:36:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:33.261 16:36:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:33.261 16:36:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:33.261 16:36:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:33.261 16:36:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:33.261 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:28:33.261 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.738 ms 00:28:33.261 00:28:33.261 --- 10.0.0.2 ping statistics --- 00:28:33.261 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:33.261 rtt min/avg/max/mdev = 0.738/0.738/0.738/0.000 ms 00:28:33.261 16:36:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:33.261 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:33.261 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.319 ms 00:28:33.261 00:28:33.261 --- 10.0.0.1 ping statistics --- 00:28:33.261 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:33.261 rtt min/avg/max/mdev = 0.319/0.319/0.319/0.000 ms 00:28:33.261 16:36:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:33.261 16:36:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@422 -- # return 0 00:28:33.261 16:36:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:33.261 16:36:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:33.261 16:36:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:33.261 16:36:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:33.261 16:36:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:33.261 16:36:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:33.261 16:36:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:33.261 16:36:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:28:33.261 16:36:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:33.261 16:36:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@723 -- # xtrace_disable 00:28:33.261 16:36:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:33.261 16:36:59 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=3265445 00:28:33.261 16:36:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@482 -- # waitforlisten 3265445 00:28:33.261 16:36:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@830 -- # '[' -z 3265445 ']' 00:28:33.261 16:36:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:33.261 16:36:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@835 -- # local max_retries=100 00:28:33.261 16:36:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:33.261 16:36:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@839 -- # xtrace_disable 00:28:33.261 16:36:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:33.261 16:36:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:28:33.261 16:36:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:28:33.261 16:36:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@863 -- # return 0 00:28:33.261 16:36:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:33.261 16:36:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@729 -- # xtrace_disable 00:28:33.261 16:36:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:33.261 16:36:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:33.261 16:36:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:28:33.261 16:36:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:28:33.261 16:36:59 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # local digest len file key 00:28:33.261 16:36:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:33.261 16:36:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # local -A digests 00:28:33.262 16:36:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=null 00:28:33.262 16:36:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # len=32 00:28:33.262 16:36:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@733 -- # xxd -p -c0 -l 16 /dev/urandom 00:28:33.262 16:36:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@733 -- # key=dabf2d2f84b1205821c70c81d7e153c4 00:28:33.262 16:36:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@734 -- # mktemp -t spdk.key-null.XXX 00:28:33.262 16:36:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@734 -- # file=/tmp/spdk.key-null.IHE 00:28:33.262 16:36:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@735 -- # format_dhchap_key dabf2d2f84b1205821c70c81d7e153c4 0 00:28:33.262 16:36:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@725 -- # format_key DHHC-1 dabf2d2f84b1205821c70c81d7e153c4 0 00:28:33.262 16:36:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@708 -- # local prefix key digest 00:28:33.262 16:36:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@710 -- # prefix=DHHC-1 00:28:33.262 16:36:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@710 -- # key=dabf2d2f84b1205821c70c81d7e153c4 00:28:33.262 16:36:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@710 -- # digest=0 00:28:33.262 16:36:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@711 -- # python - 00:28:33.262 16:37:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@736 -- # chmod 0600 /tmp/spdk.key-null.IHE 00:28:33.262 16:37:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@738 -- # echo /tmp/spdk.key-null.IHE 00:28:33.262 16:37:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.IHE 00:28:33.262 16:37:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 
64 00:28:33.262 16:37:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # local digest len file key 00:28:33.262 16:37:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:33.262 16:37:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # local -A digests 00:28:33.262 16:37:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=sha512 00:28:33.262 16:37:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # len=64 00:28:33.262 16:37:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@733 -- # xxd -p -c0 -l 32 /dev/urandom 00:28:33.262 16:37:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@733 -- # key=38180f3526263ffc63d4ea7811d41bb257c5fb33a089cf7740c50f42f9e8aec6 00:28:33.262 16:37:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@734 -- # mktemp -t spdk.key-sha512.XXX 00:28:33.262 16:37:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@734 -- # file=/tmp/spdk.key-sha512.nit 00:28:33.262 16:37:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@735 -- # format_dhchap_key 38180f3526263ffc63d4ea7811d41bb257c5fb33a089cf7740c50f42f9e8aec6 3 00:28:33.262 16:37:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@725 -- # format_key DHHC-1 38180f3526263ffc63d4ea7811d41bb257c5fb33a089cf7740c50f42f9e8aec6 3 00:28:33.262 16:37:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@708 -- # local prefix key digest 00:28:33.262 16:37:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@710 -- # prefix=DHHC-1 00:28:33.262 16:37:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@710 -- # key=38180f3526263ffc63d4ea7811d41bb257c5fb33a089cf7740c50f42f9e8aec6 00:28:33.262 16:37:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@710 -- # digest=3 00:28:33.262 16:37:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@711 -- # python - 00:28:33.262 16:37:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@736 -- # chmod 0600 /tmp/spdk.key-sha512.nit 00:28:33.262 16:37:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@738 -- # echo /tmp/spdk.key-sha512.nit 00:28:33.262 16:37:00 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.nit 00:28:33.262 16:37:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:28:33.262 16:37:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # local digest len file key 00:28:33.262 16:37:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:33.262 16:37:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # local -A digests 00:28:33.262 16:37:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=null 00:28:33.262 16:37:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # len=48 00:28:33.262 16:37:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@733 -- # xxd -p -c0 -l 24 /dev/urandom 00:28:33.262 16:37:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@733 -- # key=6249548d3f83d80be231d9b5e6bb13ef6fb606bf4ea27056 00:28:33.262 16:37:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@734 -- # mktemp -t spdk.key-null.XXX 00:28:33.262 16:37:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@734 -- # file=/tmp/spdk.key-null.bxV 00:28:33.262 16:37:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@735 -- # format_dhchap_key 6249548d3f83d80be231d9b5e6bb13ef6fb606bf4ea27056 0 00:28:33.262 16:37:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@725 -- # format_key DHHC-1 6249548d3f83d80be231d9b5e6bb13ef6fb606bf4ea27056 0 00:28:33.262 16:37:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@708 -- # local prefix key digest 00:28:33.262 16:37:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@710 -- # prefix=DHHC-1 00:28:33.262 16:37:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@710 -- # key=6249548d3f83d80be231d9b5e6bb13ef6fb606bf4ea27056 00:28:33.262 16:37:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@710 -- # digest=0 00:28:33.262 16:37:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@711 -- # python - 00:28:33.524 16:37:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@736 -- # chmod 0600 /tmp/spdk.key-null.bxV 00:28:33.524 16:37:00 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@738 -- # echo /tmp/spdk.key-null.bxV 00:28:33.524 16:37:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.bxV 00:28:33.524 16:37:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:28:33.524 16:37:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # local digest len file key 00:28:33.524 16:37:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:33.524 16:37:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # local -A digests 00:28:33.524 16:37:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=sha384 00:28:33.524 16:37:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # len=48 00:28:33.524 16:37:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@733 -- # xxd -p -c0 -l 24 /dev/urandom 00:28:33.524 16:37:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@733 -- # key=6cade6e506d8f62c552670f245279205ddf35267e2c55628 00:28:33.524 16:37:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@734 -- # mktemp -t spdk.key-sha384.XXX 00:28:33.524 16:37:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@734 -- # file=/tmp/spdk.key-sha384.PLy 00:28:33.524 16:37:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@735 -- # format_dhchap_key 6cade6e506d8f62c552670f245279205ddf35267e2c55628 2 00:28:33.524 16:37:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@725 -- # format_key DHHC-1 6cade6e506d8f62c552670f245279205ddf35267e2c55628 2 00:28:33.524 16:37:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@708 -- # local prefix key digest 00:28:33.524 16:37:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@710 -- # prefix=DHHC-1 00:28:33.524 16:37:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@710 -- # key=6cade6e506d8f62c552670f245279205ddf35267e2c55628 00:28:33.524 16:37:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@710 -- # digest=2 00:28:33.524 16:37:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@711 -- # python - 00:28:33.524 16:37:00 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@736 -- # chmod 0600 /tmp/spdk.key-sha384.PLy 00:28:33.524 16:37:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@738 -- # echo /tmp/spdk.key-sha384.PLy 00:28:33.524 16:37:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.PLy 00:28:33.524 16:37:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:28:33.524 16:37:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # local digest len file key 00:28:33.524 16:37:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:33.524 16:37:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # local -A digests 00:28:33.524 16:37:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=sha256 00:28:33.524 16:37:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # len=32 00:28:33.524 16:37:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@733 -- # xxd -p -c0 -l 16 /dev/urandom 00:28:33.524 16:37:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@733 -- # key=27b9f6aefb634f33c0fd4297023d3829 00:28:33.524 16:37:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@734 -- # mktemp -t spdk.key-sha256.XXX 00:28:33.524 16:37:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@734 -- # file=/tmp/spdk.key-sha256.6Yf 00:28:33.524 16:37:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@735 -- # format_dhchap_key 27b9f6aefb634f33c0fd4297023d3829 1 00:28:33.524 16:37:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@725 -- # format_key DHHC-1 27b9f6aefb634f33c0fd4297023d3829 1 00:28:33.524 16:37:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@708 -- # local prefix key digest 00:28:33.524 16:37:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@710 -- # prefix=DHHC-1 00:28:33.524 16:37:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@710 -- # key=27b9f6aefb634f33c0fd4297023d3829 00:28:33.524 16:37:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@710 -- # digest=1 00:28:33.524 16:37:00 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@711 -- # python - 00:28:33.524 16:37:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@736 -- # chmod 0600 /tmp/spdk.key-sha256.6Yf 00:28:33.524 16:37:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@738 -- # echo /tmp/spdk.key-sha256.6Yf 00:28:33.524 16:37:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.6Yf 00:28:33.524 16:37:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:28:33.524 16:37:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # local digest len file key 00:28:33.524 16:37:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:33.524 16:37:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # local -A digests 00:28:33.524 16:37:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=sha256 00:28:33.524 16:37:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # len=32 00:28:33.524 16:37:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@733 -- # xxd -p -c0 -l 16 /dev/urandom 00:28:33.524 16:37:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@733 -- # key=5c52000832ca1db25dcb09061e9633cb 00:28:33.524 16:37:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@734 -- # mktemp -t spdk.key-sha256.XXX 00:28:33.524 16:37:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@734 -- # file=/tmp/spdk.key-sha256.9Mb 00:28:33.524 16:37:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@735 -- # format_dhchap_key 5c52000832ca1db25dcb09061e9633cb 1 00:28:33.524 16:37:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@725 -- # format_key DHHC-1 5c52000832ca1db25dcb09061e9633cb 1 00:28:33.524 16:37:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@708 -- # local prefix key digest 00:28:33.524 16:37:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@710 -- # prefix=DHHC-1 00:28:33.524 16:37:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@710 -- # key=5c52000832ca1db25dcb09061e9633cb 00:28:33.524 16:37:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@710 -- # digest=1 00:28:33.524 
16:37:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@711 -- # python - 00:28:33.524 16:37:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@736 -- # chmod 0600 /tmp/spdk.key-sha256.9Mb 00:28:33.524 16:37:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@738 -- # echo /tmp/spdk.key-sha256.9Mb 00:28:33.524 16:37:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.9Mb 00:28:33.524 16:37:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:28:33.524 16:37:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # local digest len file key 00:28:33.524 16:37:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:33.524 16:37:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # local -A digests 00:28:33.524 16:37:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=sha384 00:28:33.525 16:37:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # len=48 00:28:33.525 16:37:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@733 -- # xxd -p -c0 -l 24 /dev/urandom 00:28:33.525 16:37:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@733 -- # key=5a9ca8d95643a3e8e5fd0ddb6626d6013e411db667c0fae9 00:28:33.525 16:37:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@734 -- # mktemp -t spdk.key-sha384.XXX 00:28:33.525 16:37:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@734 -- # file=/tmp/spdk.key-sha384.zQI 00:28:33.525 16:37:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@735 -- # format_dhchap_key 5a9ca8d95643a3e8e5fd0ddb6626d6013e411db667c0fae9 2 00:28:33.525 16:37:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@725 -- # format_key DHHC-1 5a9ca8d95643a3e8e5fd0ddb6626d6013e411db667c0fae9 2 00:28:33.525 16:37:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@708 -- # local prefix key digest 00:28:33.525 16:37:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@710 -- # prefix=DHHC-1 00:28:33.525 16:37:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@710 -- # 
key=5a9ca8d95643a3e8e5fd0ddb6626d6013e411db667c0fae9 00:28:33.525 16:37:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@710 -- # digest=2 00:28:33.525 16:37:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@711 -- # python - 00:28:33.785 16:37:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@736 -- # chmod 0600 /tmp/spdk.key-sha384.zQI 00:28:33.785 16:37:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@738 -- # echo /tmp/spdk.key-sha384.zQI 00:28:33.785 16:37:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.zQI 00:28:33.785 16:37:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:28:33.785 16:37:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # local digest len file key 00:28:33.785 16:37:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:33.785 16:37:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # local -A digests 00:28:33.785 16:37:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=null 00:28:33.786 16:37:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # len=32 00:28:33.786 16:37:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@733 -- # xxd -p -c0 -l 16 /dev/urandom 00:28:33.786 16:37:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@733 -- # key=0513403cff7233e6b9e3d31a1f1f9646 00:28:33.786 16:37:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@734 -- # mktemp -t spdk.key-null.XXX 00:28:33.786 16:37:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@734 -- # file=/tmp/spdk.key-null.UJd 00:28:33.786 16:37:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@735 -- # format_dhchap_key 0513403cff7233e6b9e3d31a1f1f9646 0 00:28:33.786 16:37:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@725 -- # format_key DHHC-1 0513403cff7233e6b9e3d31a1f1f9646 0 00:28:33.786 16:37:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@708 -- # local prefix key digest 00:28:33.786 16:37:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@710 -- # prefix=DHHC-1 00:28:33.786 16:37:00 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@710 -- # key=0513403cff7233e6b9e3d31a1f1f9646 00:28:33.786 16:37:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@710 -- # digest=0 00:28:33.786 16:37:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@711 -- # python - 00:28:33.786 16:37:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@736 -- # chmod 0600 /tmp/spdk.key-null.UJd 00:28:33.786 16:37:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@738 -- # echo /tmp/spdk.key-null.UJd 00:28:33.786 16:37:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.UJd 00:28:33.786 16:37:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:28:33.786 16:37:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # local digest len file key 00:28:33.786 16:37:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:33.786 16:37:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # local -A digests 00:28:33.786 16:37:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=sha512 00:28:33.786 16:37:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # len=64 00:28:33.786 16:37:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@733 -- # xxd -p -c0 -l 32 /dev/urandom 00:28:33.786 16:37:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@733 -- # key=89765c07b5fc3699182c2cd6574492f7bc10919a5e736c16844474040ca1d68f 00:28:33.786 16:37:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@734 -- # mktemp -t spdk.key-sha512.XXX 00:28:33.786 16:37:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@734 -- # file=/tmp/spdk.key-sha512.MpB 00:28:33.786 16:37:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@735 -- # format_dhchap_key 89765c07b5fc3699182c2cd6574492f7bc10919a5e736c16844474040ca1d68f 3 00:28:33.786 16:37:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@725 -- # format_key DHHC-1 89765c07b5fc3699182c2cd6574492f7bc10919a5e736c16844474040ca1d68f 3 00:28:33.786 16:37:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@708 -- # local 
prefix key digest 00:28:33.786 16:37:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@710 -- # prefix=DHHC-1 00:28:33.786 16:37:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@710 -- # key=89765c07b5fc3699182c2cd6574492f7bc10919a5e736c16844474040ca1d68f 00:28:33.786 16:37:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@710 -- # digest=3 00:28:33.786 16:37:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@711 -- # python - 00:28:33.786 16:37:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@736 -- # chmod 0600 /tmp/spdk.key-sha512.MpB 00:28:33.786 16:37:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@738 -- # echo /tmp/spdk.key-sha512.MpB 00:28:33.786 16:37:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.MpB 00:28:33.786 16:37:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:28:33.786 16:37:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 3265445 00:28:33.786 16:37:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@830 -- # '[' -z 3265445 ']' 00:28:33.786 16:37:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:33.786 16:37:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@835 -- # local max_retries=100 00:28:33.786 16:37:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:33.786 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
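The `gen_dhchap_key` steps logged above read N random bytes (`xxd -p -c0 -l N /dev/urandom`), then an inline python step wraps them into a `DHHC-1:...` string before the file is written and `chmod 0600`-ed. A minimal sketch of that wrapping, assuming the DH-HMAC-CHAP secret representation from NVMe TP 8006 (base64 of the secret followed by its little-endian CRC32, tagged with a two-hex-digit hash id); the exact formatting inside SPDK's `format_key` may differ in detail:

```python
import base64
import binascii

def format_dhchap_key(secret: bytes, hash_id: int) -> str:
    """Wrap raw secret bytes the way the log's format_key step does (assumed):
    base64(secret || CRC32(secret) little-endian), tagged with the hash id
    (0 = null, 1 = SHA-256, 2 = SHA-384, 3 = SHA-512)."""
    crc = binascii.crc32(secret) & 0xFFFFFFFF
    blob = secret + crc.to_bytes(4, "little")
    return f"DHHC-1:{hash_id:02x}:{base64.b64encode(blob).decode()}:"

# The log's "len=32" keys are 32 hex characters, i.e. 16 random bytes
# (xxd -p -l 16); this reuses the first key value printed above.
key = format_dhchap_key(bytes.fromhex("dabf2d2f84b1205821c70c81d7e153c4"), 0)
```

The resulting string is what ends up in `/tmp/spdk.key-null.IHE` and friends, one key per file, before being handed to the keyring RPCs.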
00:28:33.786 16:37:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@839 -- # xtrace_disable 00:28:33.786 16:37:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:34.046 16:37:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:28:34.046 16:37:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@863 -- # return 0 00:28:34.046 16:37:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:28:34.046 16:37:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.IHE 00:28:34.046 16:37:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:34.046 16:37:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:34.046 16:37:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:34.046 16:37:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.nit ]] 00:28:34.046 16:37:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.nit 00:28:34.046 16:37:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:34.046 16:37:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:34.046 16:37:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:34.046 16:37:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:28:34.046 16:37:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.bxV 00:28:34.046 16:37:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:34.046 16:37:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:34.046 16:37:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:34.046 16:37:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n 
/tmp/spdk.key-sha384.PLy ]] 00:28:34.046 16:37:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.PLy 00:28:34.046 16:37:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:34.046 16:37:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:34.046 16:37:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:34.046 16:37:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:28:34.046 16:37:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.6Yf 00:28:34.046 16:37:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:34.046 16:37:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:34.046 16:37:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:34.046 16:37:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.9Mb ]] 00:28:34.046 16:37:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.9Mb 00:28:34.046 16:37:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:34.046 16:37:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:34.046 16:37:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:34.046 16:37:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:28:34.046 16:37:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.zQI 00:28:34.046 16:37:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:34.046 16:37:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:34.046 16:37:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:34.046 
16:37:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.UJd ]] 00:28:34.046 16:37:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.UJd 00:28:34.046 16:37:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:34.046 16:37:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:34.046 16:37:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:34.046 16:37:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:28:34.046 16:37:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.MpB 00:28:34.046 16:37:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:34.046 16:37:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:34.046 16:37:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:34.046 16:37:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:28:34.047 16:37:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:28:34.047 16:37:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:28:34.047 16:37:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # local ip 00:28:34.047 16:37:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates=() 00:28:34.047 16:37:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A ip_candidates 00:28:34.047 16:37:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:34.047 16:37:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:34.047 16:37:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z tcp ]] 00:28:34.047 16:37:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:34.047 16:37:00 nvmf_tcp.nvmf_auth_host -- 
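The `host/auth.sh@80`-`@82` loop above amounts to one `keyring_file_add_key` RPC per generated key file, plus its controller counterpart when `ckeys[i]` is non-empty. A hand-run equivalent, with the file names copied from the trace, would look like this (guarded, since it needs a live SPDK target listening on `/var/tmp/spdk.sock`):

```shell
# Replay of the key-loading loop; rpc.py path and key files are the ones
# printed in the trace above. No-op unless an SPDK target is already up.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
[ -S /var/tmp/spdk.sock ] || exit 0
"$rpc" keyring_file_add_key key0  /tmp/spdk.key-null.IHE
"$rpc" keyring_file_add_key ckey0 /tmp/spdk.key-sha512.nit
"$rpc" keyring_file_add_key key1  /tmp/spdk.key-null.bxV
"$rpc" keyring_file_add_key ckey1 /tmp/spdk.key-sha384.PLy
"$rpc" keyring_file_add_key key2  /tmp/spdk.key-sha256.6Yf
"$rpc" keyring_file_add_key ckey2 /tmp/spdk.key-sha256.9Mb
"$rpc" keyring_file_add_key key3  /tmp/spdk.key-sha384.zQI
"$rpc" keyring_file_add_key ckey3 /tmp/spdk.key-null.UJd
"$rpc" keyring_file_add_key key4  /tmp/spdk.key-sha512.MpB  # ckeys[4] is empty: no ckey4
```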
nvmf/common.sh@754 -- # ip=NVMF_INITIATOR_IP 00:28:34.047 16:37:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@756 -- # [[ -z 10.0.0.1 ]] 00:28:34.047 16:37:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@761 -- # echo 10.0.0.1 00:28:34.047 16:37:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:28:34.047 16:37:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 nvmf_port=4420 00:28:34.047 16:37:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:28:34.047 16:37:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:28:34.047 16:37:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:28:34.047 16:37:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:28:34.047 16:37:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme 00:28:34.047 16:37:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:28:34.047 16:37:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet 00:28:34.047 16:37:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:28:34.047 16:37:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:28:37.345 Waiting for block devices as requested 00:28:37.345 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:28:37.345 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:28:37.345 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:28:37.607 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:28:37.607 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:28:37.607 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:28:37.607 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:28:37.867 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:28:37.867 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:28:38.129 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:28:38.129 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:28:38.129 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:28:38.389 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:28:38.389 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:28:38.389 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:28:38.389 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:28:38.649 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:28:39.591 16:37:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:28:39.591 16:37:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:28:39.591 16:37:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:28:39.591 16:37:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1661 -- # local device=nvme0n1 00:28:39.591 16:37:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1663 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:28:39.591 16:37:06 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@1664 -- # [[ none != none ]] 00:28:39.591 16:37:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:28:39.591 16:37:06 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:28:39.591 16:37:06 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:28:39.591 No valid GPT data, bailing 00:28:39.591 16:37:06 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:28:39.591 16:37:06 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:28:39.591 16:37:06 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:28:39.591 16:37:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:28:39.591 16:37:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@656 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:28:39.591 16:37:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@657 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:28:39.591 16:37:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:28:39.591 16:37:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@663 -- # echo SPDK-test 00:28:39.591 16:37:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@665 -- # echo 1 00:28:39.591 16:37:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@667 -- # [[ -b /dev/nvme0n1 ]] 00:28:39.591 16:37:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@673 -- # echo /dev/nvme0n1 00:28:39.591 16:37:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@674 -- # echo 1 00:28:39.591 16:37:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@676 -- # echo 10.0.0.1 00:28:39.591 16:37:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@677 -- # echo tcp 00:28:39.591 16:37:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@678 -- # echo 4420 00:28:39.591 16:37:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@679 -- # echo ipv4 
00:28:39.591 16:37:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@682 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:28:39.591 16:37:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@685 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.1 -t tcp -s 4420 00:28:39.591 00:28:39.591 Discovery Log Number of Records 2, Generation counter 2 00:28:39.591 =====Discovery Log Entry 0====== 00:28:39.591 trtype: tcp 00:28:39.591 adrfam: ipv4 00:28:39.591 subtype: current discovery subsystem 00:28:39.591 treq: not specified, sq flow control disable supported 00:28:39.591 portid: 1 00:28:39.591 trsvcid: 4420 00:28:39.591 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:28:39.591 traddr: 10.0.0.1 00:28:39.591 eflags: none 00:28:39.591 sectype: none 00:28:39.591 =====Discovery Log Entry 1====== 00:28:39.591 trtype: tcp 00:28:39.591 adrfam: ipv4 00:28:39.591 subtype: nvme subsystem 00:28:39.591 treq: not specified, sq flow control disable supported 00:28:39.591 portid: 1 00:28:39.591 trsvcid: 4420 00:28:39.591 subnqn: nqn.2024-02.io.spdk:cnode0 00:28:39.591 traddr: 10.0.0.1 00:28:39.591 eflags: none 00:28:39.591 sectype: none 00:28:39.591 16:37:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:28:39.591 16:37:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:28:39.591 16:37:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:28:39.591 16:37:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:28:39.591 16:37:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:39.591 16:37:06 
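`configure_kernel_target`, replayed just before the `nvme discover` check above, is plain nvmet configfs plumbing. The xtrace output shows the `echo` values but not their destination files, so the attribute paths below are inferred from standard nvmet configfs layout; a condensed sketch (needs root and the `nvmet`/`nvmet-tcp` modules, and is a no-op without them):

```shell
# Inferred replay of configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1.
nvmet=/sys/kernel/config/nvmet
subsys=$nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
port=$nvmet/ports/1
[ -d "$nvmet" ] || exit 0                     # nvmet module not loaded: bail out
mkdir -p "$subsys/namespaces/1" "$port"
echo SPDK-test    > "$subsys/attr_serial"     # assumed targets of the bare echoes
echo 1            > "$subsys/attr_allow_any_host"
echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"
echo 1            > "$subsys/namespaces/1/enable"
echo 10.0.0.1     > "$port/addr_traddr"
echo tcp          > "$port/addr_trtype"
echo 4420         > "$port/addr_trsvcid"
echo ipv4         > "$port/addr_adrfam"
ln -s "$subsys" "$port/subsystems/"           # expose the subsystem on the port
```

This is what makes the two discovery-log entries above appear on 10.0.0.1:4420; the later `host/auth.sh@36`-`@38` steps then add the host NQN under `hosts/` and link it into `allowed_hosts/`.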
nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:39.591 16:37:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:39.591 16:37:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:39.591 16:37:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjI0OTU0OGQzZjgzZDgwYmUyMzFkOWI1ZTZiYjEzZWY2ZmI2MDZiZjRlYTI3MDU2RWRhzg==: 00:28:39.591 16:37:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NmNhZGU2ZTUwNmQ4ZjYyYzU1MjY3MGYyNDUyNzkyMDVkZGYzNTI2N2UyYzU1NjI4RXDmeA==: 00:28:39.591 16:37:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:39.591 16:37:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:39.591 16:37:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjI0OTU0OGQzZjgzZDgwYmUyMzFkOWI1ZTZiYjEzZWY2ZmI2MDZiZjRlYTI3MDU2RWRhzg==: 00:28:39.591 16:37:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NmNhZGU2ZTUwNmQ4ZjYyYzU1MjY3MGYyNDUyNzkyMDVkZGYzNTI2N2UyYzU1NjI4RXDmeA==: ]] 00:28:39.591 16:37:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NmNhZGU2ZTUwNmQ4ZjYyYzU1MjY3MGYyNDUyNzkyMDVkZGYzNTI2N2UyYzU1NjI4RXDmeA==: 00:28:39.591 16:37:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:28:39.591 16:37:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:28:39.591 16:37:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:28:39.591 16:37:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:28:39.591 16:37:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:28:39.591 16:37:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:39.591 16:37:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:28:39.591 16:37:06 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:28:39.592 16:37:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:39.592 16:37:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:39.592 16:37:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:28:39.592 16:37:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:39.592 16:37:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:39.592 16:37:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:39.592 16:37:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:39.592 16:37:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # local ip 00:28:39.592 16:37:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates=() 00:28:39.592 16:37:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A ip_candidates 00:28:39.592 16:37:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:39.592 16:37:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:39.592 16:37:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z tcp ]] 00:28:39.592 16:37:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:39.592 16:37:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@754 -- # ip=NVMF_INITIATOR_IP 00:28:39.592 16:37:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@756 -- # [[ -z 10.0.0.1 ]] 00:28:39.592 16:37:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@761 -- # echo 10.0.0.1 00:28:39.592 16:37:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:39.592 16:37:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:39.592 16:37:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:39.855 nvme0n1 00:28:39.855 16:37:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:39.855 16:37:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:39.855 16:37:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:39.855 16:37:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:39.855 16:37:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:39.855 16:37:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:39.855 16:37:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:39.855 16:37:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:39.855 16:37:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:39.855 16:37:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:39.855 16:37:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:39.855 16:37:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:28:39.855 16:37:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:39.855 16:37:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:39.855 16:37:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:28:39.855 16:37:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:39.855 16:37:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:39.855 16:37:06 nvmf_tcp.nvmf_auth_host -- 
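Each `connect_authenticate` iteration above boils down to two initiator-side RPCs: restrict the allowed DH-HMAC-CHAP digests and DH groups, then attach using the keyring names registered earlier; the trace then verifies and detaches before trying the next digest/dhgroup/key combination. Hand-run form with arguments copied from the trace (guarded, since it needs a live target):

```shell
# One digest/dhgroup/key iteration of connect_authenticate, as an RPC sequence.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
[ -S /var/tmp/spdk.sock ] || exit 0
"$rpc" bdev_nvme_set_options \
    --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
"$rpc" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
    -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0
"$rpc" bdev_nvme_get_controllers | grep -q nvme0   # authentication succeeded
"$rpc" bdev_nvme_detach_controller nvme0           # tear down for the next combo
```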
host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:39.855 16:37:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:39.855 16:37:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGFiZjJkMmY4NGIxMjA1ODIxYzcwYzgxZDdlMTUzYzQhjRjH: 00:28:39.855 16:37:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzgxODBmMzUyNjI2M2ZmYzYzZDRlYTc4MTFkNDFiYjI1N2M1ZmIzM2EwODljZjc3NDBjNTBmNDJmOWU4YWVjNo6++vs=: 00:28:39.855 16:37:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:39.855 16:37:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:39.855 16:37:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGFiZjJkMmY4NGIxMjA1ODIxYzcwYzgxZDdlMTUzYzQhjRjH: 00:28:39.855 16:37:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzgxODBmMzUyNjI2M2ZmYzYzZDRlYTc4MTFkNDFiYjI1N2M1ZmIzM2EwODljZjc3NDBjNTBmNDJmOWU4YWVjNo6++vs=: ]] 00:28:39.855 16:37:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzgxODBmMzUyNjI2M2ZmYzYzZDRlYTc4MTFkNDFiYjI1N2M1ZmIzM2EwODljZjc3NDBjNTBmNDJmOWU4YWVjNo6++vs=: 00:28:39.855 16:37:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:28:39.855 16:37:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:39.855 16:37:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:39.855 16:37:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:39.855 16:37:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:39.855 16:37:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:39.855 16:37:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:28:39.855 16:37:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:39.855 16:37:06 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:28:39.855 16:37:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:39.855 16:37:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:39.855 16:37:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # local ip 00:28:39.855 16:37:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates=() 00:28:39.855 16:37:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A ip_candidates 00:28:39.855 16:37:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:39.855 16:37:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:39.855 16:37:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z tcp ]] 00:28:39.855 16:37:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:39.855 16:37:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@754 -- # ip=NVMF_INITIATOR_IP 00:28:39.855 16:37:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@756 -- # [[ -z 10.0.0.1 ]] 00:28:39.855 16:37:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@761 -- # echo 10.0.0.1 00:28:39.855 16:37:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:39.855 16:37:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:39.855 16:37:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:40.116 nvme0n1 00:28:40.116 16:37:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:40.116 16:37:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:40.116 16:37:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:40.116 16:37:06 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@64 -- # jq -r '.[].name' 00:28:40.116 16:37:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:40.116 16:37:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:40.116 16:37:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:40.116 16:37:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:40.116 16:37:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:40.116 16:37:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:40.116 16:37:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:40.116 16:37:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:40.116 16:37:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:28:40.116 16:37:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:40.116 16:37:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:40.116 16:37:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:40.116 16:37:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:40.116 16:37:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjI0OTU0OGQzZjgzZDgwYmUyMzFkOWI1ZTZiYjEzZWY2ZmI2MDZiZjRlYTI3MDU2RWRhzg==: 00:28:40.116 16:37:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NmNhZGU2ZTUwNmQ4ZjYyYzU1MjY3MGYyNDUyNzkyMDVkZGYzNTI2N2UyYzU1NjI4RXDmeA==: 00:28:40.116 16:37:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:40.116 16:37:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:40.116 16:37:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjI0OTU0OGQzZjgzZDgwYmUyMzFkOWI1ZTZiYjEzZWY2ZmI2MDZiZjRlYTI3MDU2RWRhzg==: 00:28:40.116 16:37:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 
-- # [[ -z DHHC-1:02:NmNhZGU2ZTUwNmQ4ZjYyYzU1MjY3MGYyNDUyNzkyMDVkZGYzNTI2N2UyYzU1NjI4RXDmeA==: ]] 00:28:40.116 16:37:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NmNhZGU2ZTUwNmQ4ZjYyYzU1MjY3MGYyNDUyNzkyMDVkZGYzNTI2N2UyYzU1NjI4RXDmeA==: 00:28:40.116 16:37:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:28:40.116 16:37:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:40.116 16:37:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:40.116 16:37:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:40.116 16:37:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:40.116 16:37:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:40.116 16:37:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:28:40.116 16:37:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:40.116 16:37:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:40.116 16:37:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:40.116 16:37:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:40.116 16:37:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # local ip 00:28:40.116 16:37:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates=() 00:28:40.116 16:37:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A ip_candidates 00:28:40.116 16:37:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:40.117 16:37:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:40.117 16:37:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z tcp ]] 00:28:40.117 16:37:06 nvmf_tcp.nvmf_auth_host 
-- nvmf/common.sh@753 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:40.117 16:37:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@754 -- # ip=NVMF_INITIATOR_IP 00:28:40.117 16:37:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@756 -- # [[ -z 10.0.0.1 ]] 00:28:40.117 16:37:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@761 -- # echo 10.0.0.1 00:28:40.117 16:37:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:40.117 16:37:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:40.117 16:37:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:40.117 nvme0n1 00:28:40.117 16:37:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:40.117 16:37:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:40.117 16:37:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:40.117 16:37:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:40.117 16:37:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:40.378 16:37:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:40.378 16:37:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:40.378 16:37:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:40.378 16:37:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:40.378 16:37:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:40.378 16:37:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:40.378 16:37:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:40.378 16:37:07 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:28:40.378 16:37:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:40.378 16:37:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:40.378 16:37:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:40.378 16:37:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:40.378 16:37:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjdiOWY2YWVmYjYzNGYzM2MwZmQ0Mjk3MDIzZDM4MjkjlWYU: 00:28:40.378 16:37:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWM1MjAwMDgzMmNhMWRiMjVkY2IwOTA2MWU5NjMzY2IyIUH4: 00:28:40.378 16:37:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:40.378 16:37:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:40.378 16:37:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjdiOWY2YWVmYjYzNGYzM2MwZmQ0Mjk3MDIzZDM4MjkjlWYU: 00:28:40.378 16:37:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWM1MjAwMDgzMmNhMWRiMjVkY2IwOTA2MWU5NjMzY2IyIUH4: ]] 00:28:40.378 16:37:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWM1MjAwMDgzMmNhMWRiMjVkY2IwOTA2MWU5NjMzY2IyIUH4: 00:28:40.378 16:37:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:28:40.378 16:37:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:40.378 16:37:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:40.378 16:37:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:40.378 16:37:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:40.378 16:37:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:40.378 16:37:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe2048 00:28:40.378 16:37:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:40.379 16:37:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:40.379 16:37:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:40.379 16:37:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:40.379 16:37:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # local ip 00:28:40.379 16:37:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates=() 00:28:40.379 16:37:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A ip_candidates 00:28:40.379 16:37:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:40.379 16:37:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:40.379 16:37:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z tcp ]] 00:28:40.379 16:37:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:40.379 16:37:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@754 -- # ip=NVMF_INITIATOR_IP 00:28:40.379 16:37:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@756 -- # [[ -z 10.0.0.1 ]] 00:28:40.379 16:37:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@761 -- # echo 10.0.0.1 00:28:40.379 16:37:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:40.379 16:37:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:40.379 16:37:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:40.379 nvme0n1 00:28:40.379 16:37:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:40.379 16:37:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:28:40.379 16:37:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:40.379 16:37:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:40.379 16:37:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:40.379 16:37:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:40.379 16:37:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:40.379 16:37:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:40.379 16:37:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:40.379 16:37:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:40.640 16:37:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:40.640 16:37:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:40.640 16:37:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:28:40.640 16:37:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:40.640 16:37:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:40.640 16:37:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:40.640 16:37:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:40.640 16:37:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NWE5Y2E4ZDk1NjQzYTNlOGU1ZmQwZGRiNjYyNmQ2MDEzZTQxMWRiNjY3YzBmYWU58nv+fA==: 00:28:40.640 16:37:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MDUxMzQwM2NmZjcyMzNlNmI5ZTNkMzFhMWYxZjk2NDaFGvh7: 00:28:40.640 16:37:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:40.640 16:37:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:40.640 16:37:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:NWE5Y2E4ZDk1NjQzYTNlOGU1ZmQwZGRiNjYyNmQ2MDEzZTQxMWRiNjY3YzBmYWU58nv+fA==: 00:28:40.640 16:37:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MDUxMzQwM2NmZjcyMzNlNmI5ZTNkMzFhMWYxZjk2NDaFGvh7: ]] 00:28:40.640 16:37:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MDUxMzQwM2NmZjcyMzNlNmI5ZTNkMzFhMWYxZjk2NDaFGvh7: 00:28:40.640 16:37:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:28:40.640 16:37:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:40.640 16:37:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:40.640 16:37:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:40.640 16:37:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:40.640 16:37:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:40.640 16:37:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:28:40.640 16:37:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:40.640 16:37:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:40.640 16:37:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:40.640 16:37:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:40.640 16:37:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # local ip 00:28:40.640 16:37:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates=() 00:28:40.640 16:37:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A ip_candidates 00:28:40.640 16:37:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:40.640 16:37:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:40.640 16:37:07 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z tcp ]] 00:28:40.640 16:37:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:40.640 16:37:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@754 -- # ip=NVMF_INITIATOR_IP 00:28:40.640 16:37:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@756 -- # [[ -z 10.0.0.1 ]] 00:28:40.640 16:37:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@761 -- # echo 10.0.0.1 00:28:40.640 16:37:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:40.640 16:37:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:40.640 16:37:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:40.640 nvme0n1 00:28:40.640 16:37:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:40.640 16:37:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:40.640 16:37:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:40.640 16:37:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:40.640 16:37:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:40.640 16:37:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:40.640 16:37:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:40.640 16:37:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:40.640 16:37:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:40.640 16:37:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:40.640 16:37:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:40.640 16:37:07 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:40.640 16:37:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:28:40.640 16:37:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:40.640 16:37:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:40.640 16:37:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:40.640 16:37:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:40.640 16:37:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODk3NjVjMDdiNWZjMzY5OTE4MmMyY2Q2NTc0NDkyZjdiYzEwOTE5YTVlNzM2YzE2ODQ0NDc0MDQwY2ExZDY4ZqAKpkU=: 00:28:40.640 16:37:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:40.640 16:37:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:40.640 16:37:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:40.640 16:37:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODk3NjVjMDdiNWZjMzY5OTE4MmMyY2Q2NTc0NDkyZjdiYzEwOTE5YTVlNzM2YzE2ODQ0NDc0MDQwY2ExZDY4ZqAKpkU=: 00:28:40.640 16:37:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:40.640 16:37:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:28:40.640 16:37:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:40.640 16:37:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:40.640 16:37:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:40.640 16:37:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:40.640 16:37:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:40.640 16:37:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:28:40.640 16:37:07 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:40.640 16:37:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:40.640 16:37:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:40.640 16:37:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:40.640 16:37:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # local ip 00:28:40.640 16:37:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates=() 00:28:40.640 16:37:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A ip_candidates 00:28:40.640 16:37:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:40.640 16:37:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:40.640 16:37:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z tcp ]] 00:28:40.641 16:37:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:40.641 16:37:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@754 -- # ip=NVMF_INITIATOR_IP 00:28:40.641 16:37:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@756 -- # [[ -z 10.0.0.1 ]] 00:28:40.641 16:37:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@761 -- # echo 10.0.0.1 00:28:40.641 16:37:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:40.641 16:37:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:40.641 16:37:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:40.902 nvme0n1 00:28:40.902 16:37:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:40.902 16:37:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:40.902 16:37:07 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@64 -- # jq -r '.[].name' 00:28:40.902 16:37:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:40.902 16:37:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:40.902 16:37:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:40.902 16:37:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:40.902 16:37:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:40.902 16:37:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:40.902 16:37:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:40.902 16:37:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:40.902 16:37:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:40.902 16:37:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:40.902 16:37:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:28:40.902 16:37:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:40.902 16:37:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:40.902 16:37:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:40.902 16:37:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:40.902 16:37:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGFiZjJkMmY4NGIxMjA1ODIxYzcwYzgxZDdlMTUzYzQhjRjH: 00:28:40.902 16:37:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzgxODBmMzUyNjI2M2ZmYzYzZDRlYTc4MTFkNDFiYjI1N2M1ZmIzM2EwODljZjc3NDBjNTBmNDJmOWU4YWVjNo6++vs=: 00:28:40.902 16:37:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:40.902 16:37:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:40.902 16:37:07 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGFiZjJkMmY4NGIxMjA1ODIxYzcwYzgxZDdlMTUzYzQhjRjH: 00:28:40.902 16:37:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzgxODBmMzUyNjI2M2ZmYzYzZDRlYTc4MTFkNDFiYjI1N2M1ZmIzM2EwODljZjc3NDBjNTBmNDJmOWU4YWVjNo6++vs=: ]] 00:28:40.902 16:37:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzgxODBmMzUyNjI2M2ZmYzYzZDRlYTc4MTFkNDFiYjI1N2M1ZmIzM2EwODljZjc3NDBjNTBmNDJmOWU4YWVjNo6++vs=: 00:28:40.902 16:37:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:28:40.902 16:37:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:40.902 16:37:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:40.902 16:37:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:40.902 16:37:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:40.902 16:37:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:40.902 16:37:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:28:40.902 16:37:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:40.902 16:37:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:40.902 16:37:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:40.902 16:37:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:40.902 16:37:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # local ip 00:28:40.902 16:37:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates=() 00:28:40.902 16:37:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A ip_candidates 00:28:40.902 16:37:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:40.902 16:37:07 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:40.902 16:37:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z tcp ]] 00:28:40.902 16:37:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:40.902 16:37:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@754 -- # ip=NVMF_INITIATOR_IP 00:28:40.902 16:37:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@756 -- # [[ -z 10.0.0.1 ]] 00:28:40.902 16:37:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@761 -- # echo 10.0.0.1 00:28:40.902 16:37:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:40.902 16:37:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:40.902 16:37:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:41.164 nvme0n1 00:28:41.164 16:37:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:41.164 16:37:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:41.164 16:37:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:41.164 16:37:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:41.164 16:37:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:41.164 16:37:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:41.164 16:37:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:41.164 16:37:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:41.164 16:37:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:41.164 16:37:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:41.164 
16:37:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:41.164 16:37:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:41.164 16:37:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:28:41.164 16:37:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:41.164 16:37:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:41.164 16:37:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:41.164 16:37:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:41.164 16:37:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjI0OTU0OGQzZjgzZDgwYmUyMzFkOWI1ZTZiYjEzZWY2ZmI2MDZiZjRlYTI3MDU2RWRhzg==: 00:28:41.164 16:37:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NmNhZGU2ZTUwNmQ4ZjYyYzU1MjY3MGYyNDUyNzkyMDVkZGYzNTI2N2UyYzU1NjI4RXDmeA==: 00:28:41.164 16:37:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:41.164 16:37:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:41.165 16:37:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjI0OTU0OGQzZjgzZDgwYmUyMzFkOWI1ZTZiYjEzZWY2ZmI2MDZiZjRlYTI3MDU2RWRhzg==: 00:28:41.165 16:37:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NmNhZGU2ZTUwNmQ4ZjYyYzU1MjY3MGYyNDUyNzkyMDVkZGYzNTI2N2UyYzU1NjI4RXDmeA==: ]] 00:28:41.165 16:37:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NmNhZGU2ZTUwNmQ4ZjYyYzU1MjY3MGYyNDUyNzkyMDVkZGYzNTI2N2UyYzU1NjI4RXDmeA==: 00:28:41.165 16:37:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:28:41.165 16:37:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:41.165 16:37:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:41.165 16:37:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # 
dhgroup=ffdhe3072 00:28:41.165 16:37:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:41.165 16:37:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:41.165 16:37:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:28:41.165 16:37:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:41.165 16:37:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:41.165 16:37:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:41.165 16:37:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:41.165 16:37:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # local ip 00:28:41.165 16:37:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates=() 00:28:41.165 16:37:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A ip_candidates 00:28:41.165 16:37:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:41.165 16:37:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:41.165 16:37:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z tcp ]] 00:28:41.165 16:37:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:41.165 16:37:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@754 -- # ip=NVMF_INITIATOR_IP 00:28:41.165 16:37:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@756 -- # [[ -z 10.0.0.1 ]] 00:28:41.165 16:37:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@761 -- # echo 10.0.0.1 00:28:41.165 16:37:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:41.165 16:37:07 nvmf_tcp.nvmf_auth_host 
-- common/autotest_common.sh@560 -- # xtrace_disable 00:28:41.165 16:37:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:41.426 nvme0n1 00:28:41.426 16:37:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:41.426 16:37:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:41.426 16:37:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:41.426 16:37:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:41.426 16:37:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:41.426 16:37:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:41.426 16:37:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:41.426 16:37:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:41.426 16:37:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:41.426 16:37:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:41.426 16:37:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:41.426 16:37:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:41.426 16:37:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:28:41.426 16:37:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:41.426 16:37:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:41.426 16:37:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:41.426 16:37:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:41.426 16:37:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjdiOWY2YWVmYjYzNGYzM2MwZmQ0Mjk3MDIzZDM4MjkjlWYU: 00:28:41.426 16:37:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:01:NWM1MjAwMDgzMmNhMWRiMjVkY2IwOTA2MWU5NjMzY2IyIUH4: 00:28:41.426 16:37:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:41.426 16:37:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:41.426 16:37:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjdiOWY2YWVmYjYzNGYzM2MwZmQ0Mjk3MDIzZDM4MjkjlWYU: 00:28:41.426 16:37:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWM1MjAwMDgzMmNhMWRiMjVkY2IwOTA2MWU5NjMzY2IyIUH4: ]] 00:28:41.426 16:37:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWM1MjAwMDgzMmNhMWRiMjVkY2IwOTA2MWU5NjMzY2IyIUH4: 00:28:41.426 16:37:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:28:41.426 16:37:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:41.426 16:37:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:41.426 16:37:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:41.426 16:37:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:41.426 16:37:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:41.426 16:37:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:28:41.426 16:37:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:41.426 16:37:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:41.426 16:37:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:41.426 16:37:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:41.426 16:37:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # local ip 00:28:41.426 16:37:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates=() 00:28:41.426 16:37:08 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@748 -- # local -A ip_candidates 00:28:41.426 16:37:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:41.426 16:37:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:41.426 16:37:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z tcp ]] 00:28:41.426 16:37:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:41.426 16:37:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@754 -- # ip=NVMF_INITIATOR_IP 00:28:41.426 16:37:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@756 -- # [[ -z 10.0.0.1 ]] 00:28:41.426 16:37:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@761 -- # echo 10.0.0.1 00:28:41.426 16:37:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:41.426 16:37:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:41.426 16:37:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:41.687 nvme0n1 00:28:41.687 16:37:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:41.687 16:37:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:41.687 16:37:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:41.687 16:37:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:41.687 16:37:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:41.687 16:37:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:41.687 16:37:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:41.687 16:37:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:41.687 16:37:08 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:41.687 16:37:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:41.687 16:37:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:41.687 16:37:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:41.687 16:37:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:28:41.687 16:37:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:41.687 16:37:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:41.687 16:37:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:41.687 16:37:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:41.687 16:37:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NWE5Y2E4ZDk1NjQzYTNlOGU1ZmQwZGRiNjYyNmQ2MDEzZTQxMWRiNjY3YzBmYWU58nv+fA==: 00:28:41.687 16:37:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MDUxMzQwM2NmZjcyMzNlNmI5ZTNkMzFhMWYxZjk2NDaFGvh7: 00:28:41.687 16:37:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:41.687 16:37:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:41.687 16:37:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NWE5Y2E4ZDk1NjQzYTNlOGU1ZmQwZGRiNjYyNmQ2MDEzZTQxMWRiNjY3YzBmYWU58nv+fA==: 00:28:41.687 16:37:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MDUxMzQwM2NmZjcyMzNlNmI5ZTNkMzFhMWYxZjk2NDaFGvh7: ]] 00:28:41.687 16:37:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MDUxMzQwM2NmZjcyMzNlNmI5ZTNkMzFhMWYxZjk2NDaFGvh7: 00:28:41.687 16:37:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:28:41.687 16:37:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:41.687 16:37:08 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@57 -- # digest=sha256 00:28:41.687 16:37:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:41.687 16:37:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:41.687 16:37:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:41.687 16:37:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:28:41.687 16:37:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:41.687 16:37:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:41.687 16:37:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:41.688 16:37:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:41.688 16:37:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # local ip 00:28:41.688 16:37:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates=() 00:28:41.688 16:37:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A ip_candidates 00:28:41.688 16:37:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:41.688 16:37:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:41.688 16:37:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z tcp ]] 00:28:41.688 16:37:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:41.688 16:37:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@754 -- # ip=NVMF_INITIATOR_IP 00:28:41.688 16:37:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@756 -- # [[ -z 10.0.0.1 ]] 00:28:41.688 16:37:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@761 -- # echo 10.0.0.1 00:28:41.688 16:37:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:41.688 16:37:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:41.688 16:37:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:41.948 nvme0n1 00:28:41.948 16:37:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:41.948 16:37:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:41.948 16:37:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:41.948 16:37:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:41.948 16:37:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:41.948 16:37:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:41.948 16:37:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:41.948 16:37:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:41.948 16:37:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:41.948 16:37:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:41.948 16:37:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:41.948 16:37:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:41.948 16:37:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:28:41.948 16:37:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:41.948 16:37:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:41.948 16:37:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:41.948 16:37:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:41.949 16:37:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:ODk3NjVjMDdiNWZjMzY5OTE4MmMyY2Q2NTc0NDkyZjdiYzEwOTE5YTVlNzM2YzE2ODQ0NDc0MDQwY2ExZDY4ZqAKpkU=: 00:28:41.949 16:37:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:41.949 16:37:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:41.949 16:37:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:41.949 16:37:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODk3NjVjMDdiNWZjMzY5OTE4MmMyY2Q2NTc0NDkyZjdiYzEwOTE5YTVlNzM2YzE2ODQ0NDc0MDQwY2ExZDY4ZqAKpkU=: 00:28:41.949 16:37:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:41.949 16:37:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:28:41.949 16:37:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:41.949 16:37:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:41.949 16:37:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:41.949 16:37:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:41.949 16:37:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:41.949 16:37:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:28:41.949 16:37:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:41.949 16:37:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:41.949 16:37:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:41.949 16:37:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:41.949 16:37:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # local ip 00:28:41.949 16:37:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates=() 00:28:41.949 16:37:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A 
ip_candidates 00:28:41.949 16:37:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:41.949 16:37:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:41.949 16:37:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z tcp ]] 00:28:41.949 16:37:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:41.949 16:37:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@754 -- # ip=NVMF_INITIATOR_IP 00:28:41.949 16:37:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@756 -- # [[ -z 10.0.0.1 ]] 00:28:41.949 16:37:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@761 -- # echo 10.0.0.1 00:28:41.949 16:37:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:41.949 16:37:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:41.949 16:37:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:42.210 nvme0n1 00:28:42.210 16:37:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:42.210 16:37:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:42.210 16:37:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:42.210 16:37:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:42.210 16:37:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:42.210 16:37:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:42.210 16:37:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:42.210 16:37:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:42.210 16:37:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 
-- # xtrace_disable 00:28:42.210 16:37:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:42.210 16:37:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:42.210 16:37:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:42.210 16:37:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:42.210 16:37:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:28:42.210 16:37:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:42.210 16:37:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:42.210 16:37:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:42.210 16:37:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:42.210 16:37:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGFiZjJkMmY4NGIxMjA1ODIxYzcwYzgxZDdlMTUzYzQhjRjH: 00:28:42.210 16:37:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzgxODBmMzUyNjI2M2ZmYzYzZDRlYTc4MTFkNDFiYjI1N2M1ZmIzM2EwODljZjc3NDBjNTBmNDJmOWU4YWVjNo6++vs=: 00:28:42.210 16:37:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:42.210 16:37:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:42.210 16:37:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGFiZjJkMmY4NGIxMjA1ODIxYzcwYzgxZDdlMTUzYzQhjRjH: 00:28:42.210 16:37:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzgxODBmMzUyNjI2M2ZmYzYzZDRlYTc4MTFkNDFiYjI1N2M1ZmIzM2EwODljZjc3NDBjNTBmNDJmOWU4YWVjNo6++vs=: ]] 00:28:42.210 16:37:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzgxODBmMzUyNjI2M2ZmYzYzZDRlYTc4MTFkNDFiYjI1N2M1ZmIzM2EwODljZjc3NDBjNTBmNDJmOWU4YWVjNo6++vs=: 00:28:42.210 16:37:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:28:42.210 16:37:09 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:42.210 16:37:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:42.210 16:37:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:42.210 16:37:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:42.210 16:37:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:42.210 16:37:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:28:42.210 16:37:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:42.210 16:37:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:42.210 16:37:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:42.210 16:37:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:42.210 16:37:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # local ip 00:28:42.210 16:37:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates=() 00:28:42.210 16:37:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A ip_candidates 00:28:42.210 16:37:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:42.210 16:37:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:42.210 16:37:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z tcp ]] 00:28:42.210 16:37:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:42.210 16:37:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@754 -- # ip=NVMF_INITIATOR_IP 00:28:42.210 16:37:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@756 -- # [[ -z 10.0.0.1 ]] 00:28:42.210 16:37:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@761 -- # echo 10.0.0.1 00:28:42.210 16:37:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 
-- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:42.210 16:37:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:42.210 16:37:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:42.471 nvme0n1 00:28:42.471 16:37:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:42.471 16:37:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:42.471 16:37:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:42.471 16:37:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:42.471 16:37:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:42.471 16:37:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:42.732 16:37:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:42.732 16:37:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:42.732 16:37:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:42.732 16:37:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:42.732 16:37:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:42.732 16:37:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:42.732 16:37:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:28:42.732 16:37:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:42.732 16:37:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:42.732 16:37:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:42.732 16:37:09 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=1 00:28:42.732 16:37:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjI0OTU0OGQzZjgzZDgwYmUyMzFkOWI1ZTZiYjEzZWY2ZmI2MDZiZjRlYTI3MDU2RWRhzg==: 00:28:42.732 16:37:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NmNhZGU2ZTUwNmQ4ZjYyYzU1MjY3MGYyNDUyNzkyMDVkZGYzNTI2N2UyYzU1NjI4RXDmeA==: 00:28:42.732 16:37:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:42.732 16:37:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:42.732 16:37:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjI0OTU0OGQzZjgzZDgwYmUyMzFkOWI1ZTZiYjEzZWY2ZmI2MDZiZjRlYTI3MDU2RWRhzg==: 00:28:42.732 16:37:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NmNhZGU2ZTUwNmQ4ZjYyYzU1MjY3MGYyNDUyNzkyMDVkZGYzNTI2N2UyYzU1NjI4RXDmeA==: ]] 00:28:42.732 16:37:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NmNhZGU2ZTUwNmQ4ZjYyYzU1MjY3MGYyNDUyNzkyMDVkZGYzNTI2N2UyYzU1NjI4RXDmeA==: 00:28:42.732 16:37:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:28:42.732 16:37:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:42.732 16:37:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:42.732 16:37:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:42.732 16:37:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:42.732 16:37:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:42.732 16:37:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:28:42.732 16:37:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:42.732 16:37:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:42.732 16:37:09 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:42.732 16:37:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:42.732 16:37:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # local ip 00:28:42.732 16:37:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates=() 00:28:42.732 16:37:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A ip_candidates 00:28:42.732 16:37:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:42.732 16:37:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:42.732 16:37:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z tcp ]] 00:28:42.732 16:37:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:42.732 16:37:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@754 -- # ip=NVMF_INITIATOR_IP 00:28:42.732 16:37:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@756 -- # [[ -z 10.0.0.1 ]] 00:28:42.732 16:37:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@761 -- # echo 10.0.0.1 00:28:42.732 16:37:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:42.732 16:37:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:42.732 16:37:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:42.992 nvme0n1 00:28:42.992 16:37:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:42.992 16:37:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:42.992 16:37:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:42.992 16:37:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:42.992 16:37:09 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:28:42.992 16:37:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:42.992 16:37:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:42.992 16:37:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:42.992 16:37:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:42.992 16:37:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:42.992 16:37:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:42.992 16:37:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:42.992 16:37:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:28:42.992 16:37:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:42.992 16:37:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:42.992 16:37:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:42.992 16:37:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:42.992 16:37:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjdiOWY2YWVmYjYzNGYzM2MwZmQ0Mjk3MDIzZDM4MjkjlWYU: 00:28:42.992 16:37:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWM1MjAwMDgzMmNhMWRiMjVkY2IwOTA2MWU5NjMzY2IyIUH4: 00:28:42.992 16:37:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:42.992 16:37:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:42.992 16:37:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjdiOWY2YWVmYjYzNGYzM2MwZmQ0Mjk3MDIzZDM4MjkjlWYU: 00:28:42.992 16:37:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWM1MjAwMDgzMmNhMWRiMjVkY2IwOTA2MWU5NjMzY2IyIUH4: ]] 00:28:42.992 16:37:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:01:NWM1MjAwMDgzMmNhMWRiMjVkY2IwOTA2MWU5NjMzY2IyIUH4: 00:28:42.992 16:37:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:28:42.992 16:37:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:42.992 16:37:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:42.992 16:37:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:42.993 16:37:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:42.993 16:37:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:42.993 16:37:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:28:42.993 16:37:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:42.993 16:37:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:42.993 16:37:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:42.993 16:37:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:42.993 16:37:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # local ip 00:28:42.993 16:37:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates=() 00:28:42.993 16:37:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A ip_candidates 00:28:42.993 16:37:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:42.993 16:37:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:42.993 16:37:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z tcp ]] 00:28:42.993 16:37:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:42.993 16:37:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@754 -- # ip=NVMF_INITIATOR_IP 00:28:42.993 16:37:09 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@756 -- # [[ -z 10.0.0.1 ]] 00:28:42.993 16:37:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@761 -- # echo 10.0.0.1 00:28:42.993 16:37:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:42.993 16:37:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:42.993 16:37:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:43.252 nvme0n1 00:28:43.253 16:37:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:43.253 16:37:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:43.253 16:37:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:43.253 16:37:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:43.253 16:37:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:43.253 16:37:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:43.253 16:37:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:43.253 16:37:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:43.253 16:37:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:43.253 16:37:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:43.253 16:37:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:43.253 16:37:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:43.253 16:37:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:28:43.253 16:37:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:43.253 16:37:10 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:43.253 16:37:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:43.253 16:37:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:43.253 16:37:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NWE5Y2E4ZDk1NjQzYTNlOGU1ZmQwZGRiNjYyNmQ2MDEzZTQxMWRiNjY3YzBmYWU58nv+fA==: 00:28:43.253 16:37:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MDUxMzQwM2NmZjcyMzNlNmI5ZTNkMzFhMWYxZjk2NDaFGvh7: 00:28:43.253 16:37:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:43.253 16:37:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:43.253 16:37:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NWE5Y2E4ZDk1NjQzYTNlOGU1ZmQwZGRiNjYyNmQ2MDEzZTQxMWRiNjY3YzBmYWU58nv+fA==: 00:28:43.253 16:37:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MDUxMzQwM2NmZjcyMzNlNmI5ZTNkMzFhMWYxZjk2NDaFGvh7: ]] 00:28:43.253 16:37:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MDUxMzQwM2NmZjcyMzNlNmI5ZTNkMzFhMWYxZjk2NDaFGvh7: 00:28:43.253 16:37:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:28:43.253 16:37:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:43.253 16:37:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:43.253 16:37:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:43.253 16:37:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:43.253 16:37:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:43.253 16:37:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:28:43.253 16:37:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:43.253 16:37:10 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:43.253 16:37:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:43.253 16:37:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:43.253 16:37:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # local ip 00:28:43.253 16:37:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates=() 00:28:43.253 16:37:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A ip_candidates 00:28:43.253 16:37:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:43.253 16:37:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:43.253 16:37:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z tcp ]] 00:28:43.253 16:37:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:43.253 16:37:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@754 -- # ip=NVMF_INITIATOR_IP 00:28:43.253 16:37:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@756 -- # [[ -z 10.0.0.1 ]] 00:28:43.253 16:37:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@761 -- # echo 10.0.0.1 00:28:43.253 16:37:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:43.253 16:37:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:43.253 16:37:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:43.513 nvme0n1 00:28:43.775 16:37:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:43.775 16:37:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:43.775 16:37:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:43.775 16:37:10 nvmf_tcp.nvmf_auth_host 
-- common/autotest_common.sh@560 -- # xtrace_disable 00:28:43.775 16:37:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:43.775 16:37:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:43.775 16:37:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:43.775 16:37:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:43.775 16:37:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:43.775 16:37:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:43.775 16:37:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:43.775 16:37:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:43.775 16:37:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:28:43.775 16:37:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:43.775 16:37:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:43.775 16:37:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:43.775 16:37:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:43.775 16:37:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODk3NjVjMDdiNWZjMzY5OTE4MmMyY2Q2NTc0NDkyZjdiYzEwOTE5YTVlNzM2YzE2ODQ0NDc0MDQwY2ExZDY4ZqAKpkU=: 00:28:43.775 16:37:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:43.775 16:37:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:43.775 16:37:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:43.775 16:37:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODk3NjVjMDdiNWZjMzY5OTE4MmMyY2Q2NTc0NDkyZjdiYzEwOTE5YTVlNzM2YzE2ODQ0NDc0MDQwY2ExZDY4ZqAKpkU=: 00:28:43.775 16:37:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:43.775 
16:37:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:28:43.775 16:37:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:43.775 16:37:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:43.775 16:37:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:43.775 16:37:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:43.775 16:37:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:43.775 16:37:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:28:43.775 16:37:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:43.775 16:37:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:43.775 16:37:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:43.775 16:37:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:43.775 16:37:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # local ip 00:28:43.775 16:37:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates=() 00:28:43.775 16:37:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A ip_candidates 00:28:43.775 16:37:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:43.775 16:37:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:43.775 16:37:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z tcp ]] 00:28:43.775 16:37:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:43.775 16:37:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@754 -- # ip=NVMF_INITIATOR_IP 00:28:43.775 16:37:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@756 -- # [[ -z 10.0.0.1 ]] 00:28:43.775 16:37:10 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@761 -- # echo 10.0.0.1 00:28:43.775 16:37:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:43.775 16:37:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:43.775 16:37:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:44.036 nvme0n1 00:28:44.036 16:37:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:44.036 16:37:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:44.036 16:37:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:44.036 16:37:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:44.036 16:37:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:44.036 16:37:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:44.036 16:37:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:44.036 16:37:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:44.036 16:37:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:44.036 16:37:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:44.036 16:37:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:44.036 16:37:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:44.036 16:37:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:44.036 16:37:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:28:44.036 16:37:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:44.036 
16:37:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:44.036 16:37:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:44.036 16:37:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:44.036 16:37:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGFiZjJkMmY4NGIxMjA1ODIxYzcwYzgxZDdlMTUzYzQhjRjH: 00:28:44.036 16:37:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzgxODBmMzUyNjI2M2ZmYzYzZDRlYTc4MTFkNDFiYjI1N2M1ZmIzM2EwODljZjc3NDBjNTBmNDJmOWU4YWVjNo6++vs=: 00:28:44.036 16:37:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:44.036 16:37:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:44.036 16:37:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGFiZjJkMmY4NGIxMjA1ODIxYzcwYzgxZDdlMTUzYzQhjRjH: 00:28:44.036 16:37:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzgxODBmMzUyNjI2M2ZmYzYzZDRlYTc4MTFkNDFiYjI1N2M1ZmIzM2EwODljZjc3NDBjNTBmNDJmOWU4YWVjNo6++vs=: ]] 00:28:44.036 16:37:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzgxODBmMzUyNjI2M2ZmYzYzZDRlYTc4MTFkNDFiYjI1N2M1ZmIzM2EwODljZjc3NDBjNTBmNDJmOWU4YWVjNo6++vs=: 00:28:44.036 16:37:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:28:44.036 16:37:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:44.036 16:37:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:44.036 16:37:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:44.036 16:37:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:44.036 16:37:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:44.036 16:37:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:28:44.037 16:37:10 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:44.037 16:37:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:44.037 16:37:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:44.037 16:37:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:44.037 16:37:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # local ip 00:28:44.037 16:37:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates=() 00:28:44.037 16:37:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A ip_candidates 00:28:44.037 16:37:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:44.037 16:37:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:44.037 16:37:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z tcp ]] 00:28:44.037 16:37:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:44.037 16:37:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@754 -- # ip=NVMF_INITIATOR_IP 00:28:44.037 16:37:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@756 -- # [[ -z 10.0.0.1 ]] 00:28:44.037 16:37:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@761 -- # echo 10.0.0.1 00:28:44.037 16:37:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:44.037 16:37:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:44.037 16:37:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:44.607 nvme0n1 00:28:44.607 16:37:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:44.607 16:37:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:44.607 16:37:11 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:44.607 16:37:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:44.607 16:37:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:44.607 16:37:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:44.607 16:37:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:44.607 16:37:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:44.607 16:37:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:44.607 16:37:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:44.607 16:37:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:44.607 16:37:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:44.607 16:37:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:28:44.607 16:37:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:44.607 16:37:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:44.607 16:37:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:44.607 16:37:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:44.607 16:37:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjI0OTU0OGQzZjgzZDgwYmUyMzFkOWI1ZTZiYjEzZWY2ZmI2MDZiZjRlYTI3MDU2RWRhzg==: 00:28:44.607 16:37:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NmNhZGU2ZTUwNmQ4ZjYyYzU1MjY3MGYyNDUyNzkyMDVkZGYzNTI2N2UyYzU1NjI4RXDmeA==: 00:28:44.607 16:37:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:44.607 16:37:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:44.607 16:37:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NjI0OTU0OGQzZjgzZDgwYmUyMzFkOWI1ZTZiYjEzZWY2ZmI2MDZiZjRlYTI3MDU2RWRhzg==: 00:28:44.607 16:37:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NmNhZGU2ZTUwNmQ4ZjYyYzU1MjY3MGYyNDUyNzkyMDVkZGYzNTI2N2UyYzU1NjI4RXDmeA==: ]] 00:28:44.607 16:37:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NmNhZGU2ZTUwNmQ4ZjYyYzU1MjY3MGYyNDUyNzkyMDVkZGYzNTI2N2UyYzU1NjI4RXDmeA==: 00:28:44.607 16:37:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:28:44.607 16:37:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:44.607 16:37:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:44.607 16:37:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:44.607 16:37:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:44.607 16:37:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:44.607 16:37:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:28:44.607 16:37:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:44.607 16:37:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:44.607 16:37:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:44.607 16:37:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:44.607 16:37:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # local ip 00:28:44.607 16:37:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates=() 00:28:44.607 16:37:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A ip_candidates 00:28:44.607 16:37:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:44.607 16:37:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:44.607 16:37:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z tcp ]] 00:28:44.607 16:37:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:44.607 16:37:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@754 -- # ip=NVMF_INITIATOR_IP 00:28:44.607 16:37:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@756 -- # [[ -z 10.0.0.1 ]] 00:28:44.607 16:37:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@761 -- # echo 10.0.0.1 00:28:44.607 16:37:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:44.607 16:37:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:44.607 16:37:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:45.176 nvme0n1 00:28:45.176 16:37:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:45.176 16:37:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:45.176 16:37:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:45.176 16:37:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:45.176 16:37:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:45.176 16:37:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:45.176 16:37:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:45.176 16:37:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:45.176 16:37:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:45.176 16:37:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:45.176 16:37:11 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:45.176 16:37:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:45.176 16:37:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:28:45.176 16:37:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:45.176 16:37:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:45.176 16:37:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:45.176 16:37:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:45.176 16:37:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjdiOWY2YWVmYjYzNGYzM2MwZmQ0Mjk3MDIzZDM4MjkjlWYU: 00:28:45.176 16:37:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWM1MjAwMDgzMmNhMWRiMjVkY2IwOTA2MWU5NjMzY2IyIUH4: 00:28:45.176 16:37:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:45.176 16:37:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:45.176 16:37:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjdiOWY2YWVmYjYzNGYzM2MwZmQ0Mjk3MDIzZDM4MjkjlWYU: 00:28:45.176 16:37:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWM1MjAwMDgzMmNhMWRiMjVkY2IwOTA2MWU5NjMzY2IyIUH4: ]] 00:28:45.176 16:37:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWM1MjAwMDgzMmNhMWRiMjVkY2IwOTA2MWU5NjMzY2IyIUH4: 00:28:45.176 16:37:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:28:45.176 16:37:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:45.176 16:37:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:45.176 16:37:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:45.176 16:37:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:45.176 16:37:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:45.176 16:37:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:28:45.176 16:37:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:45.176 16:37:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:45.176 16:37:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:45.176 16:37:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:45.176 16:37:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # local ip 00:28:45.176 16:37:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates=() 00:28:45.176 16:37:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A ip_candidates 00:28:45.176 16:37:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:45.176 16:37:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:45.176 16:37:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z tcp ]] 00:28:45.176 16:37:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:45.176 16:37:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@754 -- # ip=NVMF_INITIATOR_IP 00:28:45.176 16:37:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@756 -- # [[ -z 10.0.0.1 ]] 00:28:45.176 16:37:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@761 -- # echo 10.0.0.1 00:28:45.176 16:37:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:45.176 16:37:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:45.176 16:37:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:45.748 nvme0n1 
00:28:45.748 16:37:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:45.748 16:37:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:45.748 16:37:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:45.748 16:37:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:45.748 16:37:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:45.748 16:37:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:45.748 16:37:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:45.748 16:37:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:45.748 16:37:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:45.748 16:37:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:45.748 16:37:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:45.748 16:37:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:45.748 16:37:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:28:45.748 16:37:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:45.748 16:37:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:45.748 16:37:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:45.748 16:37:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:45.748 16:37:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NWE5Y2E4ZDk1NjQzYTNlOGU1ZmQwZGRiNjYyNmQ2MDEzZTQxMWRiNjY3YzBmYWU58nv+fA==: 00:28:45.748 16:37:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MDUxMzQwM2NmZjcyMzNlNmI5ZTNkMzFhMWYxZjk2NDaFGvh7: 00:28:45.748 16:37:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 
'hmac(sha256)' 00:28:45.748 16:37:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:45.748 16:37:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NWE5Y2E4ZDk1NjQzYTNlOGU1ZmQwZGRiNjYyNmQ2MDEzZTQxMWRiNjY3YzBmYWU58nv+fA==: 00:28:45.748 16:37:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MDUxMzQwM2NmZjcyMzNlNmI5ZTNkMzFhMWYxZjk2NDaFGvh7: ]] 00:28:45.748 16:37:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MDUxMzQwM2NmZjcyMzNlNmI5ZTNkMzFhMWYxZjk2NDaFGvh7: 00:28:45.748 16:37:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:28:45.748 16:37:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:45.748 16:37:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:45.748 16:37:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:45.748 16:37:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:45.748 16:37:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:45.748 16:37:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:28:45.748 16:37:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:45.749 16:37:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:45.749 16:37:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:45.749 16:37:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:45.749 16:37:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # local ip 00:28:45.749 16:37:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates=() 00:28:45.749 16:37:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A ip_candidates 00:28:45.749 16:37:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:45.749 16:37:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:45.749 16:37:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z tcp ]] 00:28:45.749 16:37:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:45.749 16:37:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@754 -- # ip=NVMF_INITIATOR_IP 00:28:45.749 16:37:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@756 -- # [[ -z 10.0.0.1 ]] 00:28:45.749 16:37:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@761 -- # echo 10.0.0.1 00:28:45.749 16:37:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:45.749 16:37:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:45.749 16:37:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:46.010 nvme0n1 00:28:46.010 16:37:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:46.010 16:37:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:46.010 16:37:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:46.010 16:37:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:46.010 16:37:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:46.271 16:37:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:46.271 16:37:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:46.271 16:37:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:46.271 16:37:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:46.271 16:37:12 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:46.271 16:37:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:46.271 16:37:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:46.271 16:37:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:28:46.271 16:37:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:46.271 16:37:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:46.271 16:37:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:46.271 16:37:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:46.271 16:37:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODk3NjVjMDdiNWZjMzY5OTE4MmMyY2Q2NTc0NDkyZjdiYzEwOTE5YTVlNzM2YzE2ODQ0NDc0MDQwY2ExZDY4ZqAKpkU=: 00:28:46.271 16:37:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:46.271 16:37:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:46.271 16:37:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:46.271 16:37:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODk3NjVjMDdiNWZjMzY5OTE4MmMyY2Q2NTc0NDkyZjdiYzEwOTE5YTVlNzM2YzE2ODQ0NDc0MDQwY2ExZDY4ZqAKpkU=: 00:28:46.271 16:37:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:46.271 16:37:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:28:46.271 16:37:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:46.271 16:37:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:46.271 16:37:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:46.271 16:37:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:46.271 16:37:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:28:46.271 16:37:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:28:46.271 16:37:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:46.271 16:37:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:46.271 16:37:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:46.271 16:37:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:46.271 16:37:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # local ip 00:28:46.271 16:37:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates=() 00:28:46.271 16:37:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A ip_candidates 00:28:46.271 16:37:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:46.271 16:37:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:46.271 16:37:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z tcp ]] 00:28:46.271 16:37:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:46.271 16:37:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@754 -- # ip=NVMF_INITIATOR_IP 00:28:46.271 16:37:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@756 -- # [[ -z 10.0.0.1 ]] 00:28:46.271 16:37:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@761 -- # echo 10.0.0.1 00:28:46.271 16:37:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:46.271 16:37:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:46.271 16:37:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:46.843 nvme0n1 00:28:46.843 16:37:13 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:46.843 16:37:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:46.843 16:37:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:46.843 16:37:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:46.843 16:37:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:46.843 16:37:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:46.843 16:37:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:46.843 16:37:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:46.843 16:37:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:46.843 16:37:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:46.843 16:37:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:46.843 16:37:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:46.843 16:37:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:46.843 16:37:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:28:46.843 16:37:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:46.843 16:37:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:46.843 16:37:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:46.843 16:37:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:46.843 16:37:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGFiZjJkMmY4NGIxMjA1ODIxYzcwYzgxZDdlMTUzYzQhjRjH: 00:28:46.843 16:37:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzgxODBmMzUyNjI2M2ZmYzYzZDRlYTc4MTFkNDFiYjI1N2M1ZmIzM2EwODljZjc3NDBjNTBmNDJmOWU4YWVjNo6++vs=: 
00:28:46.843 16:37:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:46.843 16:37:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:46.843 16:37:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGFiZjJkMmY4NGIxMjA1ODIxYzcwYzgxZDdlMTUzYzQhjRjH: 00:28:46.843 16:37:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzgxODBmMzUyNjI2M2ZmYzYzZDRlYTc4MTFkNDFiYjI1N2M1ZmIzM2EwODljZjc3NDBjNTBmNDJmOWU4YWVjNo6++vs=: ]] 00:28:46.843 16:37:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzgxODBmMzUyNjI2M2ZmYzYzZDRlYTc4MTFkNDFiYjI1N2M1ZmIzM2EwODljZjc3NDBjNTBmNDJmOWU4YWVjNo6++vs=: 00:28:46.843 16:37:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:28:46.843 16:37:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:46.843 16:37:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:46.843 16:37:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:46.843 16:37:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:46.843 16:37:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:46.843 16:37:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:28:46.843 16:37:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:46.843 16:37:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:46.843 16:37:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:46.843 16:37:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:46.843 16:37:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # local ip 00:28:46.843 16:37:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates=() 00:28:46.843 16:37:13 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A ip_candidates 00:28:46.843 16:37:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:46.843 16:37:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:46.843 16:37:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z tcp ]] 00:28:46.843 16:37:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:46.843 16:37:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@754 -- # ip=NVMF_INITIATOR_IP 00:28:46.843 16:37:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@756 -- # [[ -z 10.0.0.1 ]] 00:28:46.843 16:37:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@761 -- # echo 10.0.0.1 00:28:46.843 16:37:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:46.843 16:37:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:46.843 16:37:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:47.487 nvme0n1 00:28:47.487 16:37:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:47.487 16:37:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:47.487 16:37:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:47.487 16:37:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:47.487 16:37:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:47.487 16:37:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:47.487 16:37:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:47.487 16:37:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller 
nvme0 00:28:47.487 16:37:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:47.487 16:37:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:47.487 16:37:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:47.487 16:37:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:47.487 16:37:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:28:47.487 16:37:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:47.487 16:37:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:47.487 16:37:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:47.487 16:37:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:47.487 16:37:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjI0OTU0OGQzZjgzZDgwYmUyMzFkOWI1ZTZiYjEzZWY2ZmI2MDZiZjRlYTI3MDU2RWRhzg==: 00:28:47.487 16:37:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NmNhZGU2ZTUwNmQ4ZjYyYzU1MjY3MGYyNDUyNzkyMDVkZGYzNTI2N2UyYzU1NjI4RXDmeA==: 00:28:47.487 16:37:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:47.487 16:37:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:47.487 16:37:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjI0OTU0OGQzZjgzZDgwYmUyMzFkOWI1ZTZiYjEzZWY2ZmI2MDZiZjRlYTI3MDU2RWRhzg==: 00:28:47.487 16:37:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NmNhZGU2ZTUwNmQ4ZjYyYzU1MjY3MGYyNDUyNzkyMDVkZGYzNTI2N2UyYzU1NjI4RXDmeA==: ]] 00:28:47.487 16:37:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NmNhZGU2ZTUwNmQ4ZjYyYzU1MjY3MGYyNDUyNzkyMDVkZGYzNTI2N2UyYzU1NjI4RXDmeA==: 00:28:47.487 16:37:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:28:47.487 16:37:14 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:47.487 16:37:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:47.487 16:37:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:47.487 16:37:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:47.487 16:37:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:47.487 16:37:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:28:47.487 16:37:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:47.487 16:37:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:47.487 16:37:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:47.487 16:37:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:47.487 16:37:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # local ip 00:28:47.487 16:37:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates=() 00:28:47.487 16:37:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A ip_candidates 00:28:47.487 16:37:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:47.487 16:37:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:47.487 16:37:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z tcp ]] 00:28:47.487 16:37:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:47.487 16:37:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@754 -- # ip=NVMF_INITIATOR_IP 00:28:47.487 16:37:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@756 -- # [[ -z 10.0.0.1 ]] 00:28:47.487 16:37:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@761 -- # echo 10.0.0.1 00:28:47.487 16:37:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:47.488 16:37:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:47.488 16:37:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:48.429 nvme0n1 00:28:48.429 16:37:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:48.429 16:37:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:48.429 16:37:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:48.429 16:37:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:48.429 16:37:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:48.429 16:37:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:48.429 16:37:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:48.429 16:37:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:48.429 16:37:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:48.429 16:37:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:48.429 16:37:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:48.430 16:37:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:48.430 16:37:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:28:48.430 16:37:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:48.430 16:37:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:48.430 16:37:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:48.430 16:37:15 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=2 00:28:48.430 16:37:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjdiOWY2YWVmYjYzNGYzM2MwZmQ0Mjk3MDIzZDM4MjkjlWYU: 00:28:48.430 16:37:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWM1MjAwMDgzMmNhMWRiMjVkY2IwOTA2MWU5NjMzY2IyIUH4: 00:28:48.430 16:37:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:48.430 16:37:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:48.430 16:37:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjdiOWY2YWVmYjYzNGYzM2MwZmQ0Mjk3MDIzZDM4MjkjlWYU: 00:28:48.430 16:37:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWM1MjAwMDgzMmNhMWRiMjVkY2IwOTA2MWU5NjMzY2IyIUH4: ]] 00:28:48.430 16:37:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWM1MjAwMDgzMmNhMWRiMjVkY2IwOTA2MWU5NjMzY2IyIUH4: 00:28:48.430 16:37:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:28:48.430 16:37:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:48.430 16:37:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:48.430 16:37:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:48.430 16:37:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:48.430 16:37:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:48.430 16:37:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:28:48.430 16:37:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:48.430 16:37:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:48.430 16:37:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:48.430 16:37:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 
00:28:48.430 16:37:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # local ip 00:28:48.430 16:37:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates=() 00:28:48.430 16:37:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A ip_candidates 00:28:48.430 16:37:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:48.430 16:37:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:48.430 16:37:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z tcp ]] 00:28:48.430 16:37:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:48.430 16:37:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@754 -- # ip=NVMF_INITIATOR_IP 00:28:48.430 16:37:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@756 -- # [[ -z 10.0.0.1 ]] 00:28:48.430 16:37:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@761 -- # echo 10.0.0.1 00:28:48.430 16:37:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:48.430 16:37:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:48.430 16:37:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:49.001 nvme0n1 00:28:49.001 16:37:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:49.001 16:37:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:49.001 16:37:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:49.001 16:37:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:49.001 16:37:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:49.001 16:37:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 
00:28:49.262 16:37:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:49.262 16:37:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:49.262 16:37:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:49.262 16:37:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:49.262 16:37:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:49.262 16:37:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:49.262 16:37:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:28:49.262 16:37:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:49.262 16:37:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:49.262 16:37:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:49.262 16:37:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:49.262 16:37:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NWE5Y2E4ZDk1NjQzYTNlOGU1ZmQwZGRiNjYyNmQ2MDEzZTQxMWRiNjY3YzBmYWU58nv+fA==: 00:28:49.262 16:37:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MDUxMzQwM2NmZjcyMzNlNmI5ZTNkMzFhMWYxZjk2NDaFGvh7: 00:28:49.262 16:37:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:49.262 16:37:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:49.262 16:37:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NWE5Y2E4ZDk1NjQzYTNlOGU1ZmQwZGRiNjYyNmQ2MDEzZTQxMWRiNjY3YzBmYWU58nv+fA==: 00:28:49.262 16:37:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MDUxMzQwM2NmZjcyMzNlNmI5ZTNkMzFhMWYxZjk2NDaFGvh7: ]] 00:28:49.262 16:37:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MDUxMzQwM2NmZjcyMzNlNmI5ZTNkMzFhMWYxZjk2NDaFGvh7: 00:28:49.262 16:37:15 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:28:49.262 16:37:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:49.262 16:37:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:49.262 16:37:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:49.262 16:37:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:49.262 16:37:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:49.263 16:37:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:28:49.263 16:37:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:49.263 16:37:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:49.263 16:37:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:49.263 16:37:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:49.263 16:37:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # local ip 00:28:49.263 16:37:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates=() 00:28:49.263 16:37:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A ip_candidates 00:28:49.263 16:37:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:49.263 16:37:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:49.263 16:37:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z tcp ]] 00:28:49.263 16:37:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:49.263 16:37:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@754 -- # ip=NVMF_INITIATOR_IP 00:28:49.263 16:37:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@756 -- # [[ -z 10.0.0.1 ]] 00:28:49.263 16:37:15 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@761 -- # echo 10.0.0.1 00:28:49.263 16:37:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:49.263 16:37:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:49.263 16:37:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:49.833 nvme0n1 00:28:49.833 16:37:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:49.833 16:37:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:49.833 16:37:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:49.833 16:37:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:49.833 16:37:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:49.833 16:37:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:50.093 16:37:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:50.093 16:37:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:50.093 16:37:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:50.093 16:37:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:50.093 16:37:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:50.093 16:37:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:50.093 16:37:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:28:50.093 16:37:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:50.093 16:37:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 
00:28:50.093 16:37:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:50.093 16:37:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:50.093 16:37:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODk3NjVjMDdiNWZjMzY5OTE4MmMyY2Q2NTc0NDkyZjdiYzEwOTE5YTVlNzM2YzE2ODQ0NDc0MDQwY2ExZDY4ZqAKpkU=: 00:28:50.093 16:37:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:50.093 16:37:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:50.093 16:37:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:50.093 16:37:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODk3NjVjMDdiNWZjMzY5OTE4MmMyY2Q2NTc0NDkyZjdiYzEwOTE5YTVlNzM2YzE2ODQ0NDc0MDQwY2ExZDY4ZqAKpkU=: 00:28:50.093 16:37:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:50.093 16:37:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:28:50.093 16:37:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:50.093 16:37:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:50.093 16:37:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:50.093 16:37:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:50.093 16:37:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:50.093 16:37:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:28:50.093 16:37:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:50.093 16:37:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:50.093 16:37:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:50.093 16:37:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:50.093 16:37:16 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # local ip 00:28:50.093 16:37:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates=() 00:28:50.093 16:37:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A ip_candidates 00:28:50.093 16:37:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:50.093 16:37:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:50.093 16:37:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z tcp ]] 00:28:50.093 16:37:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:50.093 16:37:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@754 -- # ip=NVMF_INITIATOR_IP 00:28:50.093 16:37:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@756 -- # [[ -z 10.0.0.1 ]] 00:28:50.093 16:37:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@761 -- # echo 10.0.0.1 00:28:50.093 16:37:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:50.093 16:37:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:50.093 16:37:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:50.663 nvme0n1 00:28:50.663 16:37:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:50.663 16:37:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:50.663 16:37:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:50.663 16:37:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:50.663 16:37:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:50.663 16:37:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:50.924 16:37:17 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:50.924 16:37:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:50.924 16:37:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:50.924 16:37:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:50.924 16:37:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:50.924 16:37:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:28:50.924 16:37:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:50.924 16:37:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:50.924 16:37:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:28:50.924 16:37:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:50.924 16:37:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:50.924 16:37:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:50.924 16:37:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:50.924 16:37:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGFiZjJkMmY4NGIxMjA1ODIxYzcwYzgxZDdlMTUzYzQhjRjH: 00:28:50.924 16:37:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzgxODBmMzUyNjI2M2ZmYzYzZDRlYTc4MTFkNDFiYjI1N2M1ZmIzM2EwODljZjc3NDBjNTBmNDJmOWU4YWVjNo6++vs=: 00:28:50.924 16:37:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:50.924 16:37:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:50.924 16:37:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGFiZjJkMmY4NGIxMjA1ODIxYzcwYzgxZDdlMTUzYzQhjRjH: 00:28:50.924 16:37:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:03:MzgxODBmMzUyNjI2M2ZmYzYzZDRlYTc4MTFkNDFiYjI1N2M1ZmIzM2EwODljZjc3NDBjNTBmNDJmOWU4YWVjNo6++vs=: ]] 00:28:50.924 16:37:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzgxODBmMzUyNjI2M2ZmYzYzZDRlYTc4MTFkNDFiYjI1N2M1ZmIzM2EwODljZjc3NDBjNTBmNDJmOWU4YWVjNo6++vs=: 00:28:50.924 16:37:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:28:50.924 16:37:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:50.924 16:37:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:50.924 16:37:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:50.924 16:37:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:50.924 16:37:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:50.924 16:37:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:28:50.924 16:37:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:50.924 16:37:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:50.924 16:37:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:50.924 16:37:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:50.924 16:37:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # local ip 00:28:50.924 16:37:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates=() 00:28:50.924 16:37:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A ip_candidates 00:28:50.924 16:37:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:50.924 16:37:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:50.924 16:37:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z tcp ]] 00:28:50.924 
16:37:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:50.924 16:37:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@754 -- # ip=NVMF_INITIATOR_IP 00:28:50.924 16:37:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@756 -- # [[ -z 10.0.0.1 ]] 00:28:50.924 16:37:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@761 -- # echo 10.0.0.1 00:28:50.924 16:37:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:50.924 16:37:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:50.924 16:37:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:50.924 nvme0n1 00:28:50.924 16:37:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:50.924 16:37:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:50.924 16:37:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:50.924 16:37:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:50.924 16:37:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:50.924 16:37:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:50.924 16:37:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:50.924 16:37:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:50.924 16:37:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:50.925 16:37:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:50.925 16:37:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:50.925 16:37:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:50.925 
16:37:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:28:50.925 16:37:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:50.925 16:37:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:50.925 16:37:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:50.925 16:37:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:50.925 16:37:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjI0OTU0OGQzZjgzZDgwYmUyMzFkOWI1ZTZiYjEzZWY2ZmI2MDZiZjRlYTI3MDU2RWRhzg==: 00:28:50.925 16:37:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NmNhZGU2ZTUwNmQ4ZjYyYzU1MjY3MGYyNDUyNzkyMDVkZGYzNTI2N2UyYzU1NjI4RXDmeA==: 00:28:50.925 16:37:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:50.925 16:37:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:50.925 16:37:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjI0OTU0OGQzZjgzZDgwYmUyMzFkOWI1ZTZiYjEzZWY2ZmI2MDZiZjRlYTI3MDU2RWRhzg==: 00:28:50.925 16:37:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NmNhZGU2ZTUwNmQ4ZjYyYzU1MjY3MGYyNDUyNzkyMDVkZGYzNTI2N2UyYzU1NjI4RXDmeA==: ]] 00:28:50.925 16:37:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NmNhZGU2ZTUwNmQ4ZjYyYzU1MjY3MGYyNDUyNzkyMDVkZGYzNTI2N2UyYzU1NjI4RXDmeA==: 00:28:50.925 16:37:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:28:50.925 16:37:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:50.925 16:37:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:50.925 16:37:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:50.925 16:37:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:50.925 16:37:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:28:50.925 16:37:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:28:50.925 16:37:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:50.925 16:37:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:50.925 16:37:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:50.925 16:37:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:50.925 16:37:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # local ip 00:28:50.925 16:37:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates=() 00:28:50.925 16:37:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A ip_candidates 00:28:50.925 16:37:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:50.925 16:37:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:50.925 16:37:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z tcp ]] 00:28:50.925 16:37:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:50.925 16:37:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@754 -- # ip=NVMF_INITIATOR_IP 00:28:50.925 16:37:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@756 -- # [[ -z 10.0.0.1 ]] 00:28:50.925 16:37:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@761 -- # echo 10.0.0.1 00:28:50.925 16:37:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:50.925 16:37:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:50.925 16:37:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:51.185 nvme0n1 00:28:51.185 16:37:17 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:51.185 16:37:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:51.185 16:37:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:51.185 16:37:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:51.185 16:37:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:51.185 16:37:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:51.185 16:37:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:51.185 16:37:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:51.185 16:37:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:51.185 16:37:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:51.185 16:37:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:51.185 16:37:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:51.185 16:37:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:28:51.185 16:37:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:51.185 16:37:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:51.185 16:37:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:51.185 16:37:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:51.185 16:37:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjdiOWY2YWVmYjYzNGYzM2MwZmQ0Mjk3MDIzZDM4MjkjlWYU: 00:28:51.185 16:37:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWM1MjAwMDgzMmNhMWRiMjVkY2IwOTA2MWU5NjMzY2IyIUH4: 00:28:51.185 16:37:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:51.185 16:37:17 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:51.185 16:37:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjdiOWY2YWVmYjYzNGYzM2MwZmQ0Mjk3MDIzZDM4MjkjlWYU: 00:28:51.185 16:37:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWM1MjAwMDgzMmNhMWRiMjVkY2IwOTA2MWU5NjMzY2IyIUH4: ]] 00:28:51.185 16:37:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWM1MjAwMDgzMmNhMWRiMjVkY2IwOTA2MWU5NjMzY2IyIUH4: 00:28:51.185 16:37:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:28:51.185 16:37:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:51.185 16:37:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:51.185 16:37:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:51.185 16:37:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:51.185 16:37:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:51.185 16:37:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:28:51.185 16:37:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:51.185 16:37:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:51.185 16:37:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:51.186 16:37:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:51.186 16:37:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # local ip 00:28:51.186 16:37:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates=() 00:28:51.186 16:37:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A ip_candidates 00:28:51.186 16:37:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:51.186 16:37:18 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:51.186 16:37:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z tcp ]] 00:28:51.186 16:37:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:51.186 16:37:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@754 -- # ip=NVMF_INITIATOR_IP 00:28:51.186 16:37:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@756 -- # [[ -z 10.0.0.1 ]] 00:28:51.186 16:37:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@761 -- # echo 10.0.0.1 00:28:51.186 16:37:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:51.186 16:37:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:51.186 16:37:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:51.446 nvme0n1 00:28:51.446 16:37:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:51.446 16:37:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:51.446 16:37:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:51.446 16:37:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:51.446 16:37:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:51.446 16:37:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:51.446 16:37:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:51.446 16:37:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:51.446 16:37:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:51.446 16:37:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:51.446 
16:37:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:51.446 16:37:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:51.446 16:37:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:28:51.446 16:37:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:51.446 16:37:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:51.446 16:37:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:51.446 16:37:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:51.446 16:37:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NWE5Y2E4ZDk1NjQzYTNlOGU1ZmQwZGRiNjYyNmQ2MDEzZTQxMWRiNjY3YzBmYWU58nv+fA==: 00:28:51.446 16:37:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MDUxMzQwM2NmZjcyMzNlNmI5ZTNkMzFhMWYxZjk2NDaFGvh7: 00:28:51.446 16:37:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:51.446 16:37:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:51.446 16:37:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NWE5Y2E4ZDk1NjQzYTNlOGU1ZmQwZGRiNjYyNmQ2MDEzZTQxMWRiNjY3YzBmYWU58nv+fA==: 00:28:51.446 16:37:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MDUxMzQwM2NmZjcyMzNlNmI5ZTNkMzFhMWYxZjk2NDaFGvh7: ]] 00:28:51.446 16:37:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MDUxMzQwM2NmZjcyMzNlNmI5ZTNkMzFhMWYxZjk2NDaFGvh7: 00:28:51.446 16:37:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:28:51.446 16:37:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:51.446 16:37:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:51.446 16:37:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:51.446 16:37:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 
-- # keyid=3 00:28:51.446 16:37:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:51.446 16:37:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:28:51.446 16:37:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:51.446 16:37:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:51.446 16:37:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:51.446 16:37:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:51.446 16:37:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # local ip 00:28:51.446 16:37:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates=() 00:28:51.447 16:37:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A ip_candidates 00:28:51.447 16:37:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:51.447 16:37:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:51.447 16:37:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z tcp ]] 00:28:51.447 16:37:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:51.447 16:37:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@754 -- # ip=NVMF_INITIATOR_IP 00:28:51.447 16:37:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@756 -- # [[ -z 10.0.0.1 ]] 00:28:51.447 16:37:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@761 -- # echo 10.0.0.1 00:28:51.447 16:37:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:51.447 16:37:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:51.447 16:37:18 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:51.707 nvme0n1 00:28:51.707 16:37:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:51.707 16:37:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:51.707 16:37:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:51.707 16:37:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:51.707 16:37:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:51.707 16:37:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:51.707 16:37:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:51.707 16:37:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:51.707 16:37:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:51.707 16:37:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:51.707 16:37:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:51.707 16:37:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:51.707 16:37:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:28:51.707 16:37:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:51.707 16:37:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:51.707 16:37:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:51.707 16:37:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:51.707 16:37:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODk3NjVjMDdiNWZjMzY5OTE4MmMyY2Q2NTc0NDkyZjdiYzEwOTE5YTVlNzM2YzE2ODQ0NDc0MDQwY2ExZDY4ZqAKpkU=: 00:28:51.707 16:37:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:51.707 16:37:18 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:51.707 16:37:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:51.707 16:37:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODk3NjVjMDdiNWZjMzY5OTE4MmMyY2Q2NTc0NDkyZjdiYzEwOTE5YTVlNzM2YzE2ODQ0NDc0MDQwY2ExZDY4ZqAKpkU=: 00:28:51.707 16:37:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:51.707 16:37:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:28:51.707 16:37:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:51.707 16:37:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:51.707 16:37:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:51.707 16:37:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:51.707 16:37:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:51.707 16:37:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:28:51.707 16:37:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:51.707 16:37:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:51.707 16:37:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:51.707 16:37:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:51.707 16:37:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # local ip 00:28:51.707 16:37:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates=() 00:28:51.707 16:37:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A ip_candidates 00:28:51.707 16:37:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:51.707 16:37:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:51.707 16:37:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z tcp ]] 00:28:51.707 16:37:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:51.707 16:37:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@754 -- # ip=NVMF_INITIATOR_IP 00:28:51.707 16:37:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@756 -- # [[ -z 10.0.0.1 ]] 00:28:51.707 16:37:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@761 -- # echo 10.0.0.1 00:28:51.707 16:37:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:51.707 16:37:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:51.707 16:37:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:51.968 nvme0n1 00:28:51.968 16:37:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:51.968 16:37:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:51.968 16:37:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:51.968 16:37:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:51.968 16:37:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:51.968 16:37:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:51.968 16:37:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:51.968 16:37:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:51.968 16:37:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:51.968 16:37:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:51.968 16:37:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 
]] 00:28:51.968 16:37:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:51.968 16:37:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:51.968 16:37:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:28:51.968 16:37:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:51.968 16:37:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:51.968 16:37:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:51.968 16:37:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:51.968 16:37:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGFiZjJkMmY4NGIxMjA1ODIxYzcwYzgxZDdlMTUzYzQhjRjH: 00:28:51.968 16:37:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzgxODBmMzUyNjI2M2ZmYzYzZDRlYTc4MTFkNDFiYjI1N2M1ZmIzM2EwODljZjc3NDBjNTBmNDJmOWU4YWVjNo6++vs=: 00:28:51.968 16:37:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:51.968 16:37:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:51.968 16:37:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGFiZjJkMmY4NGIxMjA1ODIxYzcwYzgxZDdlMTUzYzQhjRjH: 00:28:51.968 16:37:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzgxODBmMzUyNjI2M2ZmYzYzZDRlYTc4MTFkNDFiYjI1N2M1ZmIzM2EwODljZjc3NDBjNTBmNDJmOWU4YWVjNo6++vs=: ]] 00:28:51.968 16:37:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzgxODBmMzUyNjI2M2ZmYzYzZDRlYTc4MTFkNDFiYjI1N2M1ZmIzM2EwODljZjc3NDBjNTBmNDJmOWU4YWVjNo6++vs=: 00:28:51.968 16:37:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:28:51.968 16:37:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:51.968 16:37:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:51.968 16:37:18 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:51.968 16:37:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:51.968 16:37:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:51.968 16:37:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:28:51.968 16:37:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:51.968 16:37:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:51.968 16:37:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:51.968 16:37:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:51.968 16:37:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # local ip 00:28:51.968 16:37:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates=() 00:28:51.968 16:37:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A ip_candidates 00:28:51.968 16:37:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:51.968 16:37:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:51.968 16:37:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z tcp ]] 00:28:51.968 16:37:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:51.968 16:37:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@754 -- # ip=NVMF_INITIATOR_IP 00:28:51.968 16:37:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@756 -- # [[ -z 10.0.0.1 ]] 00:28:51.968 16:37:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@761 -- # echo 10.0.0.1 00:28:51.968 16:37:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:51.968 16:37:18 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:51.968 16:37:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:52.230 nvme0n1 00:28:52.230 16:37:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:52.230 16:37:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:52.230 16:37:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:52.230 16:37:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:52.230 16:37:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:52.230 16:37:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:52.230 16:37:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:52.230 16:37:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:52.230 16:37:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:52.230 16:37:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:52.230 16:37:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:52.230 16:37:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:52.230 16:37:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:28:52.230 16:37:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:52.230 16:37:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:52.230 16:37:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:52.230 16:37:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:52.230 16:37:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjI0OTU0OGQzZjgzZDgwYmUyMzFkOWI1ZTZiYjEzZWY2ZmI2MDZiZjRlYTI3MDU2RWRhzg==: 00:28:52.230 16:37:18 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NmNhZGU2ZTUwNmQ4ZjYyYzU1MjY3MGYyNDUyNzkyMDVkZGYzNTI2N2UyYzU1NjI4RXDmeA==: 00:28:52.230 16:37:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:52.230 16:37:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:52.230 16:37:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjI0OTU0OGQzZjgzZDgwYmUyMzFkOWI1ZTZiYjEzZWY2ZmI2MDZiZjRlYTI3MDU2RWRhzg==: 00:28:52.230 16:37:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NmNhZGU2ZTUwNmQ4ZjYyYzU1MjY3MGYyNDUyNzkyMDVkZGYzNTI2N2UyYzU1NjI4RXDmeA==: ]] 00:28:52.230 16:37:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NmNhZGU2ZTUwNmQ4ZjYyYzU1MjY3MGYyNDUyNzkyMDVkZGYzNTI2N2UyYzU1NjI4RXDmeA==: 00:28:52.230 16:37:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:28:52.230 16:37:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:52.230 16:37:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:52.230 16:37:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:52.230 16:37:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:52.230 16:37:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:52.230 16:37:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:28:52.230 16:37:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:52.230 16:37:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:52.230 16:37:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:52.230 16:37:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:52.230 16:37:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # local ip 
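The `get_main_ns_ip` helper that keeps appearing in this trace resolves which address the initiator should dial from a per-transport candidate table (`rdma` maps to `NVMF_FIRST_TARGET_IP`, `tcp` to `NVMF_INITIATOR_IP`), then dereferences the chosen variable name. A minimal standalone re-creation of that logic, with variable names taken from the log; the example IP values are assumptions standing in for what the real harness exports from its network setup:

```shell
#!/usr/bin/env bash
# Standalone sketch of the get_main_ns_ip logic visible in the trace:
# pick an env-var *name* per transport, then dereference it to an IP.
NVMF_FIRST_TARGET_IP=10.0.0.2   # example values; the real harness
NVMF_INITIATOR_IP=10.0.0.1      # exports these during network setup
TEST_TRANSPORT=tcp

get_main_ns_ip() {
    local ip
    local -A ip_candidates
    ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
    ip_candidates["tcp"]=NVMF_INITIATOR_IP

    [[ -z $TEST_TRANSPORT ]] && return 1
    [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
    ip=${ip_candidates[$TEST_TRANSPORT]}   # e.g. NVMF_INITIATOR_IP
    ip=${!ip}                              # indirect expansion to the IP
    [[ -z $ip ]] && return 1
    echo "$ip"
}

get_main_ns_ip   # with TEST_TRANSPORT=tcp this echoes $NVMF_INITIATOR_IP
```

This is why the trace shows `ip=NVMF_INITIATOR_IP` followed by `echo 10.0.0.1`: the variable holds a name first and a value only after the indirect expansion.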
00:28:52.230 16:37:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates=() 00:28:52.230 16:37:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A ip_candidates 00:28:52.230 16:37:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:52.230 16:37:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:52.230 16:37:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z tcp ]] 00:28:52.230 16:37:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:52.230 16:37:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@754 -- # ip=NVMF_INITIATOR_IP 00:28:52.230 16:37:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@756 -- # [[ -z 10.0.0.1 ]] 00:28:52.230 16:37:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@761 -- # echo 10.0.0.1 00:28:52.230 16:37:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:52.230 16:37:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:52.230 16:37:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:52.491 nvme0n1 00:28:52.491 16:37:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:52.491 16:37:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:52.491 16:37:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:52.491 16:37:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:52.491 16:37:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:52.491 16:37:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:52.491 16:37:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == 
\n\v\m\e\0 ]] 00:28:52.491 16:37:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:52.491 16:37:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:52.491 16:37:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:52.491 16:37:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:52.491 16:37:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:52.491 16:37:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:28:52.491 16:37:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:52.491 16:37:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:52.491 16:37:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:52.491 16:37:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:52.491 16:37:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjdiOWY2YWVmYjYzNGYzM2MwZmQ0Mjk3MDIzZDM4MjkjlWYU: 00:28:52.491 16:37:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWM1MjAwMDgzMmNhMWRiMjVkY2IwOTA2MWU5NjMzY2IyIUH4: 00:28:52.491 16:37:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:52.491 16:37:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:52.491 16:37:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjdiOWY2YWVmYjYzNGYzM2MwZmQ0Mjk3MDIzZDM4MjkjlWYU: 00:28:52.491 16:37:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWM1MjAwMDgzMmNhMWRiMjVkY2IwOTA2MWU5NjMzY2IyIUH4: ]] 00:28:52.491 16:37:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWM1MjAwMDgzMmNhMWRiMjVkY2IwOTA2MWU5NjMzY2IyIUH4: 00:28:52.491 16:37:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:28:52.491 16:37:19 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:52.491 16:37:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:52.491 16:37:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:52.491 16:37:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:52.491 16:37:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:52.491 16:37:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:28:52.491 16:37:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:52.491 16:37:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:52.491 16:37:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:52.491 16:37:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:52.491 16:37:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # local ip 00:28:52.491 16:37:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates=() 00:28:52.491 16:37:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A ip_candidates 00:28:52.491 16:37:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:52.492 16:37:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:52.492 16:37:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z tcp ]] 00:28:52.492 16:37:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:52.492 16:37:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@754 -- # ip=NVMF_INITIATOR_IP 00:28:52.492 16:37:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@756 -- # [[ -z 10.0.0.1 ]] 00:28:52.492 16:37:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@761 -- # echo 10.0.0.1 00:28:52.492 16:37:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:52.492 16:37:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:52.492 16:37:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:52.752 nvme0n1 00:28:52.752 16:37:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:52.752 16:37:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:52.752 16:37:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:52.752 16:37:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:52.752 16:37:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:52.752 16:37:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:52.752 16:37:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:52.752 16:37:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:52.752 16:37:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:52.752 16:37:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:52.752 16:37:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:52.752 16:37:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:52.752 16:37:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:28:52.752 16:37:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:52.752 16:37:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:52.752 16:37:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:52.752 16:37:19 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=3 00:28:52.752 16:37:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NWE5Y2E4ZDk1NjQzYTNlOGU1ZmQwZGRiNjYyNmQ2MDEzZTQxMWRiNjY3YzBmYWU58nv+fA==: 00:28:52.752 16:37:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MDUxMzQwM2NmZjcyMzNlNmI5ZTNkMzFhMWYxZjk2NDaFGvh7: 00:28:52.752 16:37:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:52.752 16:37:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:52.752 16:37:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NWE5Y2E4ZDk1NjQzYTNlOGU1ZmQwZGRiNjYyNmQ2MDEzZTQxMWRiNjY3YzBmYWU58nv+fA==: 00:28:52.752 16:37:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MDUxMzQwM2NmZjcyMzNlNmI5ZTNkMzFhMWYxZjk2NDaFGvh7: ]] 00:28:52.752 16:37:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MDUxMzQwM2NmZjcyMzNlNmI5ZTNkMzFhMWYxZjk2NDaFGvh7: 00:28:52.752 16:37:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:28:52.752 16:37:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:52.752 16:37:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:52.752 16:37:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:52.752 16:37:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:52.752 16:37:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:52.752 16:37:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:28:52.752 16:37:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:52.752 16:37:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:52.752 16:37:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:52.752 16:37:19 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:52.752 16:37:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # local ip 00:28:52.752 16:37:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates=() 00:28:52.752 16:37:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A ip_candidates 00:28:52.752 16:37:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:52.752 16:37:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:52.752 16:37:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z tcp ]] 00:28:52.752 16:37:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:52.752 16:37:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@754 -- # ip=NVMF_INITIATOR_IP 00:28:52.752 16:37:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@756 -- # [[ -z 10.0.0.1 ]] 00:28:52.752 16:37:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@761 -- # echo 10.0.0.1 00:28:52.752 16:37:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:52.752 16:37:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:52.752 16:37:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:53.013 nvme0n1 00:28:53.013 16:37:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:53.013 16:37:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:53.013 16:37:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:53.013 16:37:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:53.013 16:37:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:53.013 16:37:19 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:53.013 16:37:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:53.013 16:37:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:53.013 16:37:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:53.013 16:37:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:53.013 16:37:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:53.013 16:37:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:53.013 16:37:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:28:53.013 16:37:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:53.013 16:37:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:53.013 16:37:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:53.013 16:37:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:53.013 16:37:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODk3NjVjMDdiNWZjMzY5OTE4MmMyY2Q2NTc0NDkyZjdiYzEwOTE5YTVlNzM2YzE2ODQ0NDc0MDQwY2ExZDY4ZqAKpkU=: 00:28:53.013 16:37:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:53.014 16:37:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:53.014 16:37:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:53.014 16:37:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODk3NjVjMDdiNWZjMzY5OTE4MmMyY2Q2NTc0NDkyZjdiYzEwOTE5YTVlNzM2YzE2ODQ0NDc0MDQwY2ExZDY4ZqAKpkU=: 00:28:53.014 16:37:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:53.014 16:37:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:28:53.014 16:37:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 
-- # local digest dhgroup keyid ckey 00:28:53.014 16:37:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:53.014 16:37:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:53.014 16:37:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:53.014 16:37:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:53.014 16:37:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:28:53.014 16:37:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:53.014 16:37:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:53.014 16:37:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:53.014 16:37:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:53.014 16:37:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # local ip 00:28:53.014 16:37:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates=() 00:28:53.014 16:37:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A ip_candidates 00:28:53.014 16:37:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:53.014 16:37:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:53.014 16:37:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z tcp ]] 00:28:53.014 16:37:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:53.014 16:37:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@754 -- # ip=NVMF_INITIATOR_IP 00:28:53.014 16:37:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@756 -- # [[ -z 10.0.0.1 ]] 00:28:53.014 16:37:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@761 -- # echo 10.0.0.1 00:28:53.014 16:37:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller 
-b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:53.014 16:37:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:53.014 16:37:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:53.275 nvme0n1 00:28:53.275 16:37:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:53.275 16:37:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:53.275 16:37:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:53.275 16:37:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:53.275 16:37:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:53.275 16:37:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:53.275 16:37:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:53.275 16:37:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:53.275 16:37:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:53.275 16:37:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:53.275 16:37:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:53.275 16:37:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:53.275 16:37:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:53.275 16:37:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:28:53.275 16:37:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:53.275 16:37:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:53.275 16:37:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:53.275 
16:37:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:53.275 16:37:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGFiZjJkMmY4NGIxMjA1ODIxYzcwYzgxZDdlMTUzYzQhjRjH: 00:28:53.275 16:37:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzgxODBmMzUyNjI2M2ZmYzYzZDRlYTc4MTFkNDFiYjI1N2M1ZmIzM2EwODljZjc3NDBjNTBmNDJmOWU4YWVjNo6++vs=: 00:28:53.275 16:37:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:53.275 16:37:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:53.275 16:37:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGFiZjJkMmY4NGIxMjA1ODIxYzcwYzgxZDdlMTUzYzQhjRjH: 00:28:53.275 16:37:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzgxODBmMzUyNjI2M2ZmYzYzZDRlYTc4MTFkNDFiYjI1N2M1ZmIzM2EwODljZjc3NDBjNTBmNDJmOWU4YWVjNo6++vs=: ]] 00:28:53.275 16:37:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzgxODBmMzUyNjI2M2ZmYzYzZDRlYTc4MTFkNDFiYjI1N2M1ZmIzM2EwODljZjc3NDBjNTBmNDJmOWU4YWVjNo6++vs=: 00:28:53.275 16:37:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:28:53.275 16:37:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:53.275 16:37:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:53.275 16:37:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:53.275 16:37:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:53.275 16:37:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:53.275 16:37:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:28:53.275 16:37:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:53.275 16:37:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:53.275 
16:37:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:53.275 16:37:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:53.275 16:37:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # local ip 00:28:53.275 16:37:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates=() 00:28:53.275 16:37:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A ip_candidates 00:28:53.275 16:37:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:53.275 16:37:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:53.275 16:37:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z tcp ]] 00:28:53.275 16:37:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:53.275 16:37:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@754 -- # ip=NVMF_INITIATOR_IP 00:28:53.275 16:37:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@756 -- # [[ -z 10.0.0.1 ]] 00:28:53.275 16:37:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@761 -- # echo 10.0.0.1 00:28:53.275 16:37:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:53.275 16:37:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:53.275 16:37:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:53.537 nvme0n1 00:28:53.537 16:37:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:53.537 16:37:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:53.537 16:37:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:53.537 16:37:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:53.537 16:37:20 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:53.537 16:37:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:53.537 16:37:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:53.537 16:37:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:53.537 16:37:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:53.537 16:37:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:53.537 16:37:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:53.537 16:37:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:53.537 16:37:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:28:53.537 16:37:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:53.537 16:37:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:53.537 16:37:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:53.537 16:37:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:53.537 16:37:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjI0OTU0OGQzZjgzZDgwYmUyMzFkOWI1ZTZiYjEzZWY2ZmI2MDZiZjRlYTI3MDU2RWRhzg==: 00:28:53.537 16:37:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NmNhZGU2ZTUwNmQ4ZjYyYzU1MjY3MGYyNDUyNzkyMDVkZGYzNTI2N2UyYzU1NjI4RXDmeA==: 00:28:53.537 16:37:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:53.537 16:37:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:53.537 16:37:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjI0OTU0OGQzZjgzZDgwYmUyMzFkOWI1ZTZiYjEzZWY2ZmI2MDZiZjRlYTI3MDU2RWRhzg==: 00:28:53.537 16:37:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:02:NmNhZGU2ZTUwNmQ4ZjYyYzU1MjY3MGYyNDUyNzkyMDVkZGYzNTI2N2UyYzU1NjI4RXDmeA==: ]] 00:28:53.537 16:37:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NmNhZGU2ZTUwNmQ4ZjYyYzU1MjY3MGYyNDUyNzkyMDVkZGYzNTI2N2UyYzU1NjI4RXDmeA==: 00:28:53.537 16:37:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:28:53.537 16:37:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:53.537 16:37:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:53.537 16:37:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:53.537 16:37:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:53.537 16:37:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:53.537 16:37:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:28:53.537 16:37:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:53.537 16:37:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:53.537 16:37:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:53.537 16:37:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:53.537 16:37:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # local ip 00:28:53.537 16:37:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates=() 00:28:53.537 16:37:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A ip_candidates 00:28:53.537 16:37:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:53.537 16:37:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:53.537 16:37:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z tcp ]] 00:28:53.537 16:37:20 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@753 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:53.537 16:37:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@754 -- # ip=NVMF_INITIATOR_IP 00:28:53.537 16:37:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@756 -- # [[ -z 10.0.0.1 ]] 00:28:53.537 16:37:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@761 -- # echo 10.0.0.1 00:28:53.537 16:37:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:53.537 16:37:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:53.537 16:37:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:53.797 nvme0n1 00:28:53.797 16:37:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:53.797 16:37:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:53.797 16:37:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:53.797 16:37:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:53.797 16:37:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:53.797 16:37:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:54.058 16:37:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:54.058 16:37:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:54.058 16:37:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:54.058 16:37:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:54.058 16:37:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:54.058 16:37:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:54.058 16:37:20 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:28:54.058 16:37:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:54.058 16:37:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:54.058 16:37:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:54.058 16:37:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:54.058 16:37:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjdiOWY2YWVmYjYzNGYzM2MwZmQ0Mjk3MDIzZDM4MjkjlWYU: 00:28:54.058 16:37:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWM1MjAwMDgzMmNhMWRiMjVkY2IwOTA2MWU5NjMzY2IyIUH4: 00:28:54.058 16:37:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:54.058 16:37:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:54.058 16:37:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjdiOWY2YWVmYjYzNGYzM2MwZmQ0Mjk3MDIzZDM4MjkjlWYU: 00:28:54.058 16:37:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWM1MjAwMDgzMmNhMWRiMjVkY2IwOTA2MWU5NjMzY2IyIUH4: ]] 00:28:54.058 16:37:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWM1MjAwMDgzMmNhMWRiMjVkY2IwOTA2MWU5NjMzY2IyIUH4: 00:28:54.058 16:37:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:28:54.058 16:37:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:54.058 16:37:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:54.058 16:37:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:54.058 16:37:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:54.058 16:37:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:54.058 16:37:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe4096 00:28:54.058 16:37:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:54.058 16:37:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:54.058 16:37:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:54.058 16:37:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:54.058 16:37:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # local ip 00:28:54.058 16:37:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates=() 00:28:54.058 16:37:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A ip_candidates 00:28:54.058 16:37:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:54.058 16:37:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:54.058 16:37:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z tcp ]] 00:28:54.058 16:37:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:54.058 16:37:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@754 -- # ip=NVMF_INITIATOR_IP 00:28:54.058 16:37:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@756 -- # [[ -z 10.0.0.1 ]] 00:28:54.058 16:37:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@761 -- # echo 10.0.0.1 00:28:54.058 16:37:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:54.058 16:37:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:54.058 16:37:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:54.319 nvme0n1 00:28:54.319 16:37:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:54.319 16:37:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:28:54.319 16:37:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:54.319 16:37:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:54.319 16:37:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:54.319 16:37:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:54.319 16:37:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:54.319 16:37:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:54.319 16:37:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:54.319 16:37:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:54.319 16:37:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:54.319 16:37:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:54.319 16:37:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:28:54.319 16:37:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:54.319 16:37:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:54.319 16:37:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:54.319 16:37:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:54.319 16:37:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NWE5Y2E4ZDk1NjQzYTNlOGU1ZmQwZGRiNjYyNmQ2MDEzZTQxMWRiNjY3YzBmYWU58nv+fA==: 00:28:54.319 16:37:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MDUxMzQwM2NmZjcyMzNlNmI5ZTNkMzFhMWYxZjk2NDaFGvh7: 00:28:54.319 16:37:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:54.319 16:37:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:54.319 16:37:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:NWE5Y2E4ZDk1NjQzYTNlOGU1ZmQwZGRiNjYyNmQ2MDEzZTQxMWRiNjY3YzBmYWU58nv+fA==: 00:28:54.319 16:37:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MDUxMzQwM2NmZjcyMzNlNmI5ZTNkMzFhMWYxZjk2NDaFGvh7: ]] 00:28:54.319 16:37:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MDUxMzQwM2NmZjcyMzNlNmI5ZTNkMzFhMWYxZjk2NDaFGvh7: 00:28:54.319 16:37:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:28:54.319 16:37:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:54.319 16:37:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:54.319 16:37:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:54.319 16:37:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:54.319 16:37:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:54.319 16:37:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:28:54.319 16:37:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:54.319 16:37:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:54.319 16:37:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:54.319 16:37:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:54.319 16:37:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # local ip 00:28:54.319 16:37:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates=() 00:28:54.319 16:37:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A ip_candidates 00:28:54.319 16:37:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:54.319 16:37:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:54.319 16:37:20 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z tcp ]] 00:28:54.319 16:37:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:54.319 16:37:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@754 -- # ip=NVMF_INITIATOR_IP 00:28:54.319 16:37:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@756 -- # [[ -z 10.0.0.1 ]] 00:28:54.319 16:37:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@761 -- # echo 10.0.0.1 00:28:54.319 16:37:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:54.319 16:37:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:54.319 16:37:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:54.579 nvme0n1 00:28:54.579 16:37:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:54.579 16:37:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:54.579 16:37:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:54.579 16:37:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:54.579 16:37:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:54.579 16:37:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:54.579 16:37:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:54.579 16:37:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:54.579 16:37:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:54.579 16:37:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:54.579 16:37:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:54.579 16:37:21 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:54.579 16:37:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:28:54.579 16:37:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:54.579 16:37:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:54.579 16:37:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:54.579 16:37:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:54.579 16:37:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODk3NjVjMDdiNWZjMzY5OTE4MmMyY2Q2NTc0NDkyZjdiYzEwOTE5YTVlNzM2YzE2ODQ0NDc0MDQwY2ExZDY4ZqAKpkU=: 00:28:54.579 16:37:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:54.579 16:37:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:54.579 16:37:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:54.579 16:37:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODk3NjVjMDdiNWZjMzY5OTE4MmMyY2Q2NTc0NDkyZjdiYzEwOTE5YTVlNzM2YzE2ODQ0NDc0MDQwY2ExZDY4ZqAKpkU=: 00:28:54.579 16:37:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:54.579 16:37:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:28:54.580 16:37:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:54.580 16:37:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:54.580 16:37:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:54.580 16:37:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:54.580 16:37:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:54.580 16:37:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:28:54.580 16:37:21 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:54.580 16:37:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:54.580 16:37:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:54.580 16:37:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:54.580 16:37:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # local ip 00:28:54.580 16:37:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates=() 00:28:54.580 16:37:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A ip_candidates 00:28:54.580 16:37:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:54.580 16:37:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:54.580 16:37:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z tcp ]] 00:28:54.580 16:37:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:54.580 16:37:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@754 -- # ip=NVMF_INITIATOR_IP 00:28:54.580 16:37:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@756 -- # [[ -z 10.0.0.1 ]] 00:28:54.580 16:37:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@761 -- # echo 10.0.0.1 00:28:54.580 16:37:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:54.580 16:37:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:54.580 16:37:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:54.839 nvme0n1 00:28:54.839 16:37:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:54.839 16:37:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:54.839 16:37:21 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@560 -- # xtrace_disable 00:28:54.839 16:37:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:54.839 16:37:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:54.839 16:37:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:54.839 16:37:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:54.839 16:37:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:54.839 16:37:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:54.839 16:37:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:54.839 16:37:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:54.839 16:37:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:54.839 16:37:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:54.839 16:37:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:28:54.839 16:37:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:54.839 16:37:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:54.839 16:37:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:54.839 16:37:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:54.839 16:37:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGFiZjJkMmY4NGIxMjA1ODIxYzcwYzgxZDdlMTUzYzQhjRjH: 00:28:54.839 16:37:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzgxODBmMzUyNjI2M2ZmYzYzZDRlYTc4MTFkNDFiYjI1N2M1ZmIzM2EwODljZjc3NDBjNTBmNDJmOWU4YWVjNo6++vs=: 00:28:54.839 16:37:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:54.839 16:37:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:54.839 16:37:21 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGFiZjJkMmY4NGIxMjA1ODIxYzcwYzgxZDdlMTUzYzQhjRjH: 00:28:54.839 16:37:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzgxODBmMzUyNjI2M2ZmYzYzZDRlYTc4MTFkNDFiYjI1N2M1ZmIzM2EwODljZjc3NDBjNTBmNDJmOWU4YWVjNo6++vs=: ]] 00:28:54.839 16:37:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzgxODBmMzUyNjI2M2ZmYzYzZDRlYTc4MTFkNDFiYjI1N2M1ZmIzM2EwODljZjc3NDBjNTBmNDJmOWU4YWVjNo6++vs=: 00:28:54.839 16:37:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:28:54.839 16:37:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:54.839 16:37:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:54.839 16:37:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:54.839 16:37:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:54.839 16:37:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:54.839 16:37:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:28:54.839 16:37:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:54.839 16:37:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:54.839 16:37:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:54.839 16:37:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:54.839 16:37:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # local ip 00:28:54.839 16:37:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates=() 00:28:54.839 16:37:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A ip_candidates 00:28:54.839 16:37:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:54.839 16:37:21 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:54.839 16:37:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z tcp ]] 00:28:54.839 16:37:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:54.839 16:37:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@754 -- # ip=NVMF_INITIATOR_IP 00:28:54.839 16:37:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@756 -- # [[ -z 10.0.0.1 ]] 00:28:54.839 16:37:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@761 -- # echo 10.0.0.1 00:28:54.839 16:37:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:54.839 16:37:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:54.839 16:37:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:55.406 nvme0n1 00:28:55.406 16:37:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:55.406 16:37:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:55.406 16:37:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:55.406 16:37:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:55.406 16:37:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:55.406 16:37:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:55.406 16:37:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:55.406 16:37:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:55.406 16:37:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:55.406 16:37:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:55.406 
16:37:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:55.406 16:37:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:55.406 16:37:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:28:55.406 16:37:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:55.406 16:37:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:55.406 16:37:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:55.406 16:37:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:55.406 16:37:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjI0OTU0OGQzZjgzZDgwYmUyMzFkOWI1ZTZiYjEzZWY2ZmI2MDZiZjRlYTI3MDU2RWRhzg==: 00:28:55.406 16:37:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NmNhZGU2ZTUwNmQ4ZjYyYzU1MjY3MGYyNDUyNzkyMDVkZGYzNTI2N2UyYzU1NjI4RXDmeA==: 00:28:55.406 16:37:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:55.406 16:37:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:55.406 16:37:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjI0OTU0OGQzZjgzZDgwYmUyMzFkOWI1ZTZiYjEzZWY2ZmI2MDZiZjRlYTI3MDU2RWRhzg==: 00:28:55.406 16:37:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NmNhZGU2ZTUwNmQ4ZjYyYzU1MjY3MGYyNDUyNzkyMDVkZGYzNTI2N2UyYzU1NjI4RXDmeA==: ]] 00:28:55.406 16:37:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NmNhZGU2ZTUwNmQ4ZjYyYzU1MjY3MGYyNDUyNzkyMDVkZGYzNTI2N2UyYzU1NjI4RXDmeA==: 00:28:55.406 16:37:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:28:55.406 16:37:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:55.406 16:37:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:55.406 16:37:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # 
dhgroup=ffdhe6144 00:28:55.406 16:37:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:55.406 16:37:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:55.406 16:37:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:28:55.406 16:37:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:55.406 16:37:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:55.406 16:37:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:55.406 16:37:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:55.406 16:37:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # local ip 00:28:55.406 16:37:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates=() 00:28:55.406 16:37:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A ip_candidates 00:28:55.406 16:37:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:55.406 16:37:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:55.406 16:37:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z tcp ]] 00:28:55.406 16:37:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:55.406 16:37:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@754 -- # ip=NVMF_INITIATOR_IP 00:28:55.406 16:37:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@756 -- # [[ -z 10.0.0.1 ]] 00:28:55.406 16:37:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@761 -- # echo 10.0.0.1 00:28:55.406 16:37:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:55.406 16:37:22 nvmf_tcp.nvmf_auth_host 
-- common/autotest_common.sh@560 -- # xtrace_disable 00:28:55.406 16:37:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:55.973 nvme0n1 00:28:55.973 16:37:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:55.973 16:37:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:55.973 16:37:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:55.973 16:37:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:55.973 16:37:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:55.973 16:37:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:55.973 16:37:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:55.973 16:37:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:55.973 16:37:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:55.973 16:37:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:55.973 16:37:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:55.973 16:37:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:55.973 16:37:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:28:55.973 16:37:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:55.973 16:37:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:55.973 16:37:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:55.973 16:37:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:55.973 16:37:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjdiOWY2YWVmYjYzNGYzM2MwZmQ0Mjk3MDIzZDM4MjkjlWYU: 00:28:55.973 16:37:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:01:NWM1MjAwMDgzMmNhMWRiMjVkY2IwOTA2MWU5NjMzY2IyIUH4: 00:28:55.973 16:37:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:55.973 16:37:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:55.973 16:37:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjdiOWY2YWVmYjYzNGYzM2MwZmQ0Mjk3MDIzZDM4MjkjlWYU: 00:28:55.973 16:37:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWM1MjAwMDgzMmNhMWRiMjVkY2IwOTA2MWU5NjMzY2IyIUH4: ]] 00:28:55.974 16:37:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWM1MjAwMDgzMmNhMWRiMjVkY2IwOTA2MWU5NjMzY2IyIUH4: 00:28:55.974 16:37:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:28:55.974 16:37:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:55.974 16:37:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:55.974 16:37:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:55.974 16:37:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:55.974 16:37:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:55.974 16:37:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:28:55.974 16:37:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:55.974 16:37:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:55.974 16:37:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:55.974 16:37:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:55.974 16:37:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # local ip 00:28:55.974 16:37:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates=() 00:28:55.974 16:37:22 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@748 -- # local -A ip_candidates 00:28:55.974 16:37:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:55.974 16:37:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:55.974 16:37:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z tcp ]] 00:28:55.974 16:37:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:55.974 16:37:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@754 -- # ip=NVMF_INITIATOR_IP 00:28:55.974 16:37:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@756 -- # [[ -z 10.0.0.1 ]] 00:28:55.974 16:37:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@761 -- # echo 10.0.0.1 00:28:55.974 16:37:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:55.974 16:37:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:55.974 16:37:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:56.544 nvme0n1 00:28:56.544 16:37:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:56.544 16:37:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:56.544 16:37:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:56.544 16:37:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:56.544 16:37:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:56.544 16:37:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:56.544 16:37:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:56.544 16:37:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:56.544 16:37:23 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:56.544 16:37:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:56.544 16:37:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:56.544 16:37:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:56.544 16:37:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:28:56.544 16:37:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:56.544 16:37:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:56.544 16:37:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:56.544 16:37:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:56.544 16:37:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NWE5Y2E4ZDk1NjQzYTNlOGU1ZmQwZGRiNjYyNmQ2MDEzZTQxMWRiNjY3YzBmYWU58nv+fA==: 00:28:56.544 16:37:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MDUxMzQwM2NmZjcyMzNlNmI5ZTNkMzFhMWYxZjk2NDaFGvh7: 00:28:56.544 16:37:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:56.544 16:37:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:56.544 16:37:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NWE5Y2E4ZDk1NjQzYTNlOGU1ZmQwZGRiNjYyNmQ2MDEzZTQxMWRiNjY3YzBmYWU58nv+fA==: 00:28:56.544 16:37:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MDUxMzQwM2NmZjcyMzNlNmI5ZTNkMzFhMWYxZjk2NDaFGvh7: ]] 00:28:56.544 16:37:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MDUxMzQwM2NmZjcyMzNlNmI5ZTNkMzFhMWYxZjk2NDaFGvh7: 00:28:56.544 16:37:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:28:56.544 16:37:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:56.544 16:37:23 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@57 -- # digest=sha384 00:28:56.544 16:37:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:56.544 16:37:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:56.544 16:37:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:56.544 16:37:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:28:56.544 16:37:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:56.545 16:37:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:56.545 16:37:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:56.545 16:37:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:56.545 16:37:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # local ip 00:28:56.545 16:37:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates=() 00:28:56.545 16:37:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A ip_candidates 00:28:56.545 16:37:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:56.545 16:37:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:56.545 16:37:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z tcp ]] 00:28:56.545 16:37:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:56.545 16:37:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@754 -- # ip=NVMF_INITIATOR_IP 00:28:56.545 16:37:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@756 -- # [[ -z 10.0.0.1 ]] 00:28:56.545 16:37:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@761 -- # echo 10.0.0.1 00:28:56.545 16:37:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:56.545 16:37:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:56.545 16:37:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:57.115 nvme0n1 00:28:57.115 16:37:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:57.115 16:37:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:57.115 16:37:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:57.115 16:37:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:57.115 16:37:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:57.115 16:37:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:57.115 16:37:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:57.115 16:37:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:57.115 16:37:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:57.115 16:37:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:57.115 16:37:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:57.115 16:37:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:57.115 16:37:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:28:57.115 16:37:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:57.115 16:37:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:57.115 16:37:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:57.115 16:37:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:57.115 16:37:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:ODk3NjVjMDdiNWZjMzY5OTE4MmMyY2Q2NTc0NDkyZjdiYzEwOTE5YTVlNzM2YzE2ODQ0NDc0MDQwY2ExZDY4ZqAKpkU=: 00:28:57.115 16:37:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:57.115 16:37:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:57.115 16:37:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:57.115 16:37:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODk3NjVjMDdiNWZjMzY5OTE4MmMyY2Q2NTc0NDkyZjdiYzEwOTE5YTVlNzM2YzE2ODQ0NDc0MDQwY2ExZDY4ZqAKpkU=: 00:28:57.115 16:37:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:57.115 16:37:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:28:57.115 16:37:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:57.115 16:37:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:57.115 16:37:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:57.115 16:37:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:57.115 16:37:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:57.115 16:37:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:28:57.115 16:37:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:57.115 16:37:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:57.115 16:37:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:57.115 16:37:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:57.115 16:37:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # local ip 00:28:57.115 16:37:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates=() 00:28:57.115 16:37:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A 
ip_candidates 00:28:57.115 16:37:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:57.115 16:37:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:57.115 16:37:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z tcp ]] 00:28:57.115 16:37:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:57.115 16:37:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@754 -- # ip=NVMF_INITIATOR_IP 00:28:57.115 16:37:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@756 -- # [[ -z 10.0.0.1 ]] 00:28:57.115 16:37:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@761 -- # echo 10.0.0.1 00:28:57.115 16:37:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:57.115 16:37:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:57.115 16:37:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:57.684 nvme0n1 00:28:57.684 16:37:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:57.684 16:37:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:57.684 16:37:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:57.684 16:37:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:57.684 16:37:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:57.684 16:37:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:57.684 16:37:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:57.684 16:37:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:57.684 16:37:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 
-- # xtrace_disable 00:28:57.684 16:37:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:57.684 16:37:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:57.684 16:37:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:57.684 16:37:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:57.684 16:37:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:28:57.684 16:37:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:57.684 16:37:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:57.684 16:37:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:57.684 16:37:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:57.684 16:37:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGFiZjJkMmY4NGIxMjA1ODIxYzcwYzgxZDdlMTUzYzQhjRjH: 00:28:57.684 16:37:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzgxODBmMzUyNjI2M2ZmYzYzZDRlYTc4MTFkNDFiYjI1N2M1ZmIzM2EwODljZjc3NDBjNTBmNDJmOWU4YWVjNo6++vs=: 00:28:57.684 16:37:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:57.684 16:37:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:57.684 16:37:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGFiZjJkMmY4NGIxMjA1ODIxYzcwYzgxZDdlMTUzYzQhjRjH: 00:28:57.684 16:37:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzgxODBmMzUyNjI2M2ZmYzYzZDRlYTc4MTFkNDFiYjI1N2M1ZmIzM2EwODljZjc3NDBjNTBmNDJmOWU4YWVjNo6++vs=: ]] 00:28:57.684 16:37:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzgxODBmMzUyNjI2M2ZmYzYzZDRlYTc4MTFkNDFiYjI1N2M1ZmIzM2EwODljZjc3NDBjNTBmNDJmOWU4YWVjNo6++vs=: 00:28:57.684 16:37:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:28:57.684 16:37:24 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:57.684 16:37:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:57.684 16:37:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:57.684 16:37:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:57.684 16:37:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:57.684 16:37:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:28:57.684 16:37:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:57.684 16:37:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:57.684 16:37:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:57.684 16:37:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:57.684 16:37:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # local ip 00:28:57.684 16:37:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates=() 00:28:57.684 16:37:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A ip_candidates 00:28:57.684 16:37:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:57.684 16:37:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:57.684 16:37:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z tcp ]] 00:28:57.684 16:37:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:57.684 16:37:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@754 -- # ip=NVMF_INITIATOR_IP 00:28:57.684 16:37:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@756 -- # [[ -z 10.0.0.1 ]] 00:28:57.684 16:37:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@761 -- # echo 10.0.0.1 00:28:57.684 16:37:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 
-- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:57.684 16:37:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:57.685 16:37:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:58.620 nvme0n1 00:28:58.620 16:37:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:58.620 16:37:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:58.621 16:37:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:58.621 16:37:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:58.621 16:37:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:58.621 16:37:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:58.621 16:37:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:58.621 16:37:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:58.621 16:37:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:58.621 16:37:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:58.621 16:37:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:58.621 16:37:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:58.621 16:37:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:28:58.621 16:37:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:58.621 16:37:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:58.621 16:37:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:58.621 16:37:25 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=1 00:28:58.621 16:37:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjI0OTU0OGQzZjgzZDgwYmUyMzFkOWI1ZTZiYjEzZWY2ZmI2MDZiZjRlYTI3MDU2RWRhzg==: 00:28:58.621 16:37:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NmNhZGU2ZTUwNmQ4ZjYyYzU1MjY3MGYyNDUyNzkyMDVkZGYzNTI2N2UyYzU1NjI4RXDmeA==: 00:28:58.621 16:37:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:58.621 16:37:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:58.621 16:37:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjI0OTU0OGQzZjgzZDgwYmUyMzFkOWI1ZTZiYjEzZWY2ZmI2MDZiZjRlYTI3MDU2RWRhzg==: 00:28:58.621 16:37:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NmNhZGU2ZTUwNmQ4ZjYyYzU1MjY3MGYyNDUyNzkyMDVkZGYzNTI2N2UyYzU1NjI4RXDmeA==: ]] 00:28:58.621 16:37:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NmNhZGU2ZTUwNmQ4ZjYyYzU1MjY3MGYyNDUyNzkyMDVkZGYzNTI2N2UyYzU1NjI4RXDmeA==: 00:28:58.621 16:37:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:28:58.621 16:37:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:58.621 16:37:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:58.621 16:37:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:58.621 16:37:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:58.621 16:37:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:58.621 16:37:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:28:58.621 16:37:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:58.621 16:37:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:58.621 16:37:25 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:58.621 16:37:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:58.621 16:37:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # local ip 00:28:58.621 16:37:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates=() 00:28:58.621 16:37:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A ip_candidates 00:28:58.621 16:37:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:58.621 16:37:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:58.621 16:37:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z tcp ]] 00:28:58.621 16:37:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:58.621 16:37:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@754 -- # ip=NVMF_INITIATOR_IP 00:28:58.621 16:37:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@756 -- # [[ -z 10.0.0.1 ]] 00:28:58.621 16:37:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@761 -- # echo 10.0.0.1 00:28:58.621 16:37:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:58.621 16:37:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:58.621 16:37:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:59.192 nvme0n1 00:28:59.192 16:37:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:59.192 16:37:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:59.192 16:37:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:59.192 16:37:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:59.192 16:37:25 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:28:59.192 16:37:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:59.192 16:37:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:59.192 16:37:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:59.192 16:37:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:59.192 16:37:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:59.192 16:37:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:59.192 16:37:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:59.192 16:37:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:28:59.192 16:37:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:59.192 16:37:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:59.192 16:37:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:59.192 16:37:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:59.192 16:37:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjdiOWY2YWVmYjYzNGYzM2MwZmQ0Mjk3MDIzZDM4MjkjlWYU: 00:28:59.192 16:37:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWM1MjAwMDgzMmNhMWRiMjVkY2IwOTA2MWU5NjMzY2IyIUH4: 00:28:59.192 16:37:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:59.192 16:37:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:59.192 16:37:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjdiOWY2YWVmYjYzNGYzM2MwZmQ0Mjk3MDIzZDM4MjkjlWYU: 00:28:59.192 16:37:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWM1MjAwMDgzMmNhMWRiMjVkY2IwOTA2MWU5NjMzY2IyIUH4: ]] 00:28:59.192 16:37:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:01:NWM1MjAwMDgzMmNhMWRiMjVkY2IwOTA2MWU5NjMzY2IyIUH4: 00:28:59.192 16:37:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:28:59.192 16:37:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:59.192 16:37:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:59.192 16:37:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:59.192 16:37:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:59.192 16:37:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:59.192 16:37:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:28:59.192 16:37:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:59.192 16:37:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:59.192 16:37:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:59.192 16:37:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:59.192 16:37:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # local ip 00:28:59.192 16:37:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates=() 00:28:59.192 16:37:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A ip_candidates 00:28:59.192 16:37:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:59.192 16:37:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:59.192 16:37:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z tcp ]] 00:28:59.192 16:37:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:59.192 16:37:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@754 -- # ip=NVMF_INITIATOR_IP 00:28:59.192 16:37:26 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@756 -- # [[ -z 10.0.0.1 ]] 00:28:59.192 16:37:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@761 -- # echo 10.0.0.1 00:28:59.192 16:37:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:59.192 16:37:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:59.192 16:37:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:00.132 nvme0n1 00:29:00.132 16:37:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:00.132 16:37:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:00.132 16:37:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:00.132 16:37:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:00.133 16:37:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:00.133 16:37:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:00.133 16:37:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:00.133 16:37:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:00.133 16:37:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:00.133 16:37:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:00.133 16:37:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:00.133 16:37:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:00.133 16:37:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:29:00.133 16:37:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:00.133 16:37:26 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:00.133 16:37:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:00.133 16:37:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:00.133 16:37:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NWE5Y2E4ZDk1NjQzYTNlOGU1ZmQwZGRiNjYyNmQ2MDEzZTQxMWRiNjY3YzBmYWU58nv+fA==: 00:29:00.133 16:37:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MDUxMzQwM2NmZjcyMzNlNmI5ZTNkMzFhMWYxZjk2NDaFGvh7: 00:29:00.133 16:37:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:00.133 16:37:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:00.133 16:37:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NWE5Y2E4ZDk1NjQzYTNlOGU1ZmQwZGRiNjYyNmQ2MDEzZTQxMWRiNjY3YzBmYWU58nv+fA==: 00:29:00.133 16:37:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MDUxMzQwM2NmZjcyMzNlNmI5ZTNkMzFhMWYxZjk2NDaFGvh7: ]] 00:29:00.133 16:37:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MDUxMzQwM2NmZjcyMzNlNmI5ZTNkMzFhMWYxZjk2NDaFGvh7: 00:29:00.133 16:37:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:29:00.133 16:37:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:00.133 16:37:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:00.133 16:37:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:00.133 16:37:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:00.133 16:37:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:00.133 16:37:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:29:00.133 16:37:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:00.133 16:37:26 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:00.133 16:37:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:00.133 16:37:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:00.133 16:37:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # local ip 00:29:00.133 16:37:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates=() 00:29:00.133 16:37:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A ip_candidates 00:29:00.133 16:37:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:00.133 16:37:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:00.133 16:37:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z tcp ]] 00:29:00.133 16:37:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:00.133 16:37:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@754 -- # ip=NVMF_INITIATOR_IP 00:29:00.133 16:37:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@756 -- # [[ -z 10.0.0.1 ]] 00:29:00.133 16:37:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@761 -- # echo 10.0.0.1 00:29:00.133 16:37:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:00.133 16:37:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:00.133 16:37:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:00.703 nvme0n1 00:29:00.703 16:37:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:00.963 16:37:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:00.963 16:37:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:00.963 16:37:27 nvmf_tcp.nvmf_auth_host 
-- common/autotest_common.sh@560 -- # xtrace_disable 00:29:00.963 16:37:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:00.963 16:37:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:00.963 16:37:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:00.963 16:37:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:00.963 16:37:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:00.963 16:37:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:00.963 16:37:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:00.963 16:37:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:00.963 16:37:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:29:00.963 16:37:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:00.963 16:37:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:00.963 16:37:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:00.963 16:37:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:00.963 16:37:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODk3NjVjMDdiNWZjMzY5OTE4MmMyY2Q2NTc0NDkyZjdiYzEwOTE5YTVlNzM2YzE2ODQ0NDc0MDQwY2ExZDY4ZqAKpkU=: 00:29:00.963 16:37:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:00.963 16:37:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:00.963 16:37:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:00.963 16:37:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODk3NjVjMDdiNWZjMzY5OTE4MmMyY2Q2NTc0NDkyZjdiYzEwOTE5YTVlNzM2YzE2ODQ0NDc0MDQwY2ExZDY4ZqAKpkU=: 00:29:00.963 16:37:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:00.963 
16:37:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:29:00.963 16:37:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:00.963 16:37:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:00.963 16:37:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:00.963 16:37:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:00.963 16:37:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:00.963 16:37:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:29:00.963 16:37:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:00.963 16:37:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:00.963 16:37:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:00.963 16:37:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:00.963 16:37:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # local ip 00:29:00.963 16:37:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates=() 00:29:00.963 16:37:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A ip_candidates 00:29:00.963 16:37:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:00.963 16:37:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:00.963 16:37:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z tcp ]] 00:29:00.963 16:37:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:00.963 16:37:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@754 -- # ip=NVMF_INITIATOR_IP 00:29:00.963 16:37:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@756 -- # [[ -z 10.0.0.1 ]] 00:29:00.963 16:37:27 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@761 -- # echo 10.0.0.1 00:29:00.963 16:37:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:00.963 16:37:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:00.963 16:37:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:01.532 nvme0n1 00:29:01.532 16:37:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:01.532 16:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:01.532 16:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:01.532 16:37:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:01.532 16:37:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:01.792 16:37:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:01.792 16:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:01.792 16:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:01.792 16:37:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:01.792 16:37:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:01.792 16:37:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:01.792 16:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:29:01.792 16:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:01.792 16:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:01.792 16:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:29:01.793 
16:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:01.793 16:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:01.793 16:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:01.793 16:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:01.793 16:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGFiZjJkMmY4NGIxMjA1ODIxYzcwYzgxZDdlMTUzYzQhjRjH: 00:29:01.793 16:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzgxODBmMzUyNjI2M2ZmYzYzZDRlYTc4MTFkNDFiYjI1N2M1ZmIzM2EwODljZjc3NDBjNTBmNDJmOWU4YWVjNo6++vs=: 00:29:01.793 16:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:01.793 16:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:01.793 16:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGFiZjJkMmY4NGIxMjA1ODIxYzcwYzgxZDdlMTUzYzQhjRjH: 00:29:01.793 16:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzgxODBmMzUyNjI2M2ZmYzYzZDRlYTc4MTFkNDFiYjI1N2M1ZmIzM2EwODljZjc3NDBjNTBmNDJmOWU4YWVjNo6++vs=: ]] 00:29:01.793 16:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzgxODBmMzUyNjI2M2ZmYzYzZDRlYTc4MTFkNDFiYjI1N2M1ZmIzM2EwODljZjc3NDBjNTBmNDJmOWU4YWVjNo6++vs=: 00:29:01.793 16:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:29:01.793 16:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:01.793 16:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:01.793 16:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:01.793 16:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:01.793 16:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:01.793 16:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd 
bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:29:01.793 16:37:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:01.793 16:37:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:01.793 16:37:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:01.793 16:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:01.793 16:37:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # local ip 00:29:01.793 16:37:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates=() 00:29:01.793 16:37:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A ip_candidates 00:29:01.793 16:37:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:01.793 16:37:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:01.793 16:37:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z tcp ]] 00:29:01.793 16:37:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:01.793 16:37:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@754 -- # ip=NVMF_INITIATOR_IP 00:29:01.793 16:37:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@756 -- # [[ -z 10.0.0.1 ]] 00:29:01.793 16:37:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@761 -- # echo 10.0.0.1 00:29:01.793 16:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:01.793 16:37:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:01.793 16:37:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:01.793 nvme0n1 00:29:01.793 16:37:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:01.793 16:37:28 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:01.793 16:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:01.793 16:37:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:01.793 16:37:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:01.793 16:37:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:01.793 16:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:01.793 16:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:01.793 16:37:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:01.793 16:37:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:02.054 16:37:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:02.054 16:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:02.054 16:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:29:02.054 16:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:02.054 16:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:02.054 16:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:02.054 16:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:02.054 16:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjI0OTU0OGQzZjgzZDgwYmUyMzFkOWI1ZTZiYjEzZWY2ZmI2MDZiZjRlYTI3MDU2RWRhzg==: 00:29:02.054 16:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NmNhZGU2ZTUwNmQ4ZjYyYzU1MjY3MGYyNDUyNzkyMDVkZGYzNTI2N2UyYzU1NjI4RXDmeA==: 00:29:02.054 16:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:02.054 16:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo 
ffdhe2048 00:29:02.054 16:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjI0OTU0OGQzZjgzZDgwYmUyMzFkOWI1ZTZiYjEzZWY2ZmI2MDZiZjRlYTI3MDU2RWRhzg==: 00:29:02.054 16:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NmNhZGU2ZTUwNmQ4ZjYyYzU1MjY3MGYyNDUyNzkyMDVkZGYzNTI2N2UyYzU1NjI4RXDmeA==: ]] 00:29:02.054 16:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NmNhZGU2ZTUwNmQ4ZjYyYzU1MjY3MGYyNDUyNzkyMDVkZGYzNTI2N2UyYzU1NjI4RXDmeA==: 00:29:02.054 16:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:29:02.054 16:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:02.054 16:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:02.054 16:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:02.054 16:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:02.054 16:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:02.054 16:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:29:02.054 16:37:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:02.054 16:37:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:02.054 16:37:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:02.054 16:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:02.054 16:37:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # local ip 00:29:02.054 16:37:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates=() 00:29:02.054 16:37:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A ip_candidates 00:29:02.054 16:37:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 
00:29:02.054 16:37:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:02.054 16:37:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z tcp ]] 00:29:02.054 16:37:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:02.054 16:37:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@754 -- # ip=NVMF_INITIATOR_IP 00:29:02.054 16:37:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@756 -- # [[ -z 10.0.0.1 ]] 00:29:02.054 16:37:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@761 -- # echo 10.0.0.1 00:29:02.054 16:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:02.054 16:37:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:02.054 16:37:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:02.054 nvme0n1 00:29:02.054 16:37:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:02.054 16:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:02.054 16:37:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:02.054 16:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:02.054 16:37:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:02.054 16:37:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:02.054 16:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:02.054 16:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:02.054 16:37:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:02.054 16:37:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set 
+x 00:29:02.054 16:37:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:02.054 16:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:02.054 16:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:29:02.054 16:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:02.055 16:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:02.055 16:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:02.055 16:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:02.055 16:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjdiOWY2YWVmYjYzNGYzM2MwZmQ0Mjk3MDIzZDM4MjkjlWYU: 00:29:02.055 16:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWM1MjAwMDgzMmNhMWRiMjVkY2IwOTA2MWU5NjMzY2IyIUH4: 00:29:02.055 16:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:02.055 16:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:02.055 16:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjdiOWY2YWVmYjYzNGYzM2MwZmQ0Mjk3MDIzZDM4MjkjlWYU: 00:29:02.055 16:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWM1MjAwMDgzMmNhMWRiMjVkY2IwOTA2MWU5NjMzY2IyIUH4: ]] 00:29:02.055 16:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWM1MjAwMDgzMmNhMWRiMjVkY2IwOTA2MWU5NjMzY2IyIUH4: 00:29:02.055 16:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:29:02.055 16:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:02.055 16:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:02.055 16:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:02.055 16:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:02.055 
16:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:02.055 16:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:29:02.055 16:37:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:02.055 16:37:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:02.055 16:37:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:02.055 16:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:02.055 16:37:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # local ip 00:29:02.055 16:37:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates=() 00:29:02.055 16:37:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A ip_candidates 00:29:02.055 16:37:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:02.055 16:37:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:02.055 16:37:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z tcp ]] 00:29:02.055 16:37:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:02.055 16:37:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@754 -- # ip=NVMF_INITIATOR_IP 00:29:02.055 16:37:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@756 -- # [[ -z 10.0.0.1 ]] 00:29:02.055 16:37:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@761 -- # echo 10.0.0.1 00:29:02.055 16:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:02.055 16:37:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:02.055 16:37:28 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:29:02.316 nvme0n1 00:29:02.316 16:37:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:02.316 16:37:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:02.316 16:37:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:02.316 16:37:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:02.316 16:37:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:02.316 16:37:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:02.316 16:37:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:02.316 16:37:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:02.316 16:37:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:02.316 16:37:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:02.316 16:37:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:02.316 16:37:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:02.316 16:37:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:29:02.316 16:37:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:02.316 16:37:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:02.316 16:37:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:02.316 16:37:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:02.316 16:37:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NWE5Y2E4ZDk1NjQzYTNlOGU1ZmQwZGRiNjYyNmQ2MDEzZTQxMWRiNjY3YzBmYWU58nv+fA==: 00:29:02.316 16:37:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MDUxMzQwM2NmZjcyMzNlNmI5ZTNkMzFhMWYxZjk2NDaFGvh7: 00:29:02.316 
16:37:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:02.316 16:37:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:02.316 16:37:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NWE5Y2E4ZDk1NjQzYTNlOGU1ZmQwZGRiNjYyNmQ2MDEzZTQxMWRiNjY3YzBmYWU58nv+fA==: 00:29:02.316 16:37:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MDUxMzQwM2NmZjcyMzNlNmI5ZTNkMzFhMWYxZjk2NDaFGvh7: ]] 00:29:02.316 16:37:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MDUxMzQwM2NmZjcyMzNlNmI5ZTNkMzFhMWYxZjk2NDaFGvh7: 00:29:02.316 16:37:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:29:02.316 16:37:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:02.316 16:37:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:02.316 16:37:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:02.316 16:37:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:02.316 16:37:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:02.316 16:37:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:29:02.316 16:37:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:02.316 16:37:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:02.316 16:37:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:02.316 16:37:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:02.316 16:37:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # local ip 00:29:02.316 16:37:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates=() 00:29:02.316 16:37:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A ip_candidates 00:29:02.316 16:37:29 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:02.316 16:37:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:02.316 16:37:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z tcp ]] 00:29:02.316 16:37:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:02.316 16:37:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@754 -- # ip=NVMF_INITIATOR_IP 00:29:02.316 16:37:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@756 -- # [[ -z 10.0.0.1 ]] 00:29:02.316 16:37:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@761 -- # echo 10.0.0.1 00:29:02.316 16:37:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:02.316 16:37:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:02.316 16:37:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:02.577 nvme0n1 00:29:02.577 16:37:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:02.577 16:37:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:02.577 16:37:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:02.577 16:37:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:02.577 16:37:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:02.577 16:37:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:02.577 16:37:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:02.577 16:37:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:02.577 16:37:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # 
xtrace_disable 00:29:02.577 16:37:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:02.577 16:37:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:02.577 16:37:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:02.577 16:37:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:29:02.577 16:37:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:02.577 16:37:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:02.577 16:37:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:02.577 16:37:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:02.577 16:37:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODk3NjVjMDdiNWZjMzY5OTE4MmMyY2Q2NTc0NDkyZjdiYzEwOTE5YTVlNzM2YzE2ODQ0NDc0MDQwY2ExZDY4ZqAKpkU=: 00:29:02.577 16:37:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:02.577 16:37:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:02.577 16:37:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:02.577 16:37:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODk3NjVjMDdiNWZjMzY5OTE4MmMyY2Q2NTc0NDkyZjdiYzEwOTE5YTVlNzM2YzE2ODQ0NDc0MDQwY2ExZDY4ZqAKpkU=: 00:29:02.577 16:37:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:02.577 16:37:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:29:02.577 16:37:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:02.577 16:37:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:02.577 16:37:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:02.577 16:37:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:02.577 16:37:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:02.577 16:37:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:29:02.577 16:37:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:02.577 16:37:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:02.577 16:37:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:02.577 16:37:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:02.577 16:37:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # local ip 00:29:02.577 16:37:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates=() 00:29:02.577 16:37:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A ip_candidates 00:29:02.577 16:37:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:02.577 16:37:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:02.577 16:37:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z tcp ]] 00:29:02.577 16:37:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:02.577 16:37:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@754 -- # ip=NVMF_INITIATOR_IP 00:29:02.577 16:37:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@756 -- # [[ -z 10.0.0.1 ]] 00:29:02.577 16:37:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@761 -- # echo 10.0.0.1 00:29:02.577 16:37:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:02.577 16:37:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:02.577 16:37:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:02.838 nvme0n1 00:29:02.838 16:37:29 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:02.838 16:37:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:02.838 16:37:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:02.838 16:37:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:02.838 16:37:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:02.838 16:37:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:02.838 16:37:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:02.838 16:37:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:02.838 16:37:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:02.838 16:37:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:02.838 16:37:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:02.838 16:37:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:02.838 16:37:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:02.838 16:37:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:29:02.838 16:37:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:02.838 16:37:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:02.838 16:37:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:02.838 16:37:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:02.838 16:37:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGFiZjJkMmY4NGIxMjA1ODIxYzcwYzgxZDdlMTUzYzQhjRjH: 00:29:02.838 16:37:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:MzgxODBmMzUyNjI2M2ZmYzYzZDRlYTc4MTFkNDFiYjI1N2M1ZmIzM2EwODljZjc3NDBjNTBmNDJmOWU4YWVjNo6++vs=: 00:29:02.838 16:37:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:02.838 16:37:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:29:02.838 16:37:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGFiZjJkMmY4NGIxMjA1ODIxYzcwYzgxZDdlMTUzYzQhjRjH: 00:29:02.838 16:37:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzgxODBmMzUyNjI2M2ZmYzYzZDRlYTc4MTFkNDFiYjI1N2M1ZmIzM2EwODljZjc3NDBjNTBmNDJmOWU4YWVjNo6++vs=: ]] 00:29:02.838 16:37:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzgxODBmMzUyNjI2M2ZmYzYzZDRlYTc4MTFkNDFiYjI1N2M1ZmIzM2EwODljZjc3NDBjNTBmNDJmOWU4YWVjNo6++vs=: 00:29:02.838 16:37:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:29:02.838 16:37:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:02.838 16:37:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:02.838 16:37:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:29:02.838 16:37:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:02.838 16:37:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:02.838 16:37:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:29:02.838 16:37:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:02.838 16:37:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:02.838 16:37:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:02.838 16:37:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:02.838 16:37:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # local ip 00:29:02.838 16:37:29 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates=() 00:29:02.838 16:37:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A ip_candidates 00:29:02.838 16:37:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:02.838 16:37:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:02.838 16:37:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z tcp ]] 00:29:02.838 16:37:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:02.838 16:37:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@754 -- # ip=NVMF_INITIATOR_IP 00:29:02.838 16:37:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@756 -- # [[ -z 10.0.0.1 ]] 00:29:02.838 16:37:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@761 -- # echo 10.0.0.1 00:29:02.838 16:37:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:02.838 16:37:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:02.838 16:37:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:03.098 nvme0n1 00:29:03.098 16:37:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:03.098 16:37:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:03.098 16:37:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:03.098 16:37:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:03.098 16:37:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:03.098 16:37:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:03.098 16:37:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:03.098 
16:37:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:03.098 16:37:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:03.098 16:37:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:03.098 16:37:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:03.098 16:37:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:03.098 16:37:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:29:03.098 16:37:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:03.098 16:37:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:03.098 16:37:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:03.098 16:37:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:03.098 16:37:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjI0OTU0OGQzZjgzZDgwYmUyMzFkOWI1ZTZiYjEzZWY2ZmI2MDZiZjRlYTI3MDU2RWRhzg==: 00:29:03.098 16:37:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NmNhZGU2ZTUwNmQ4ZjYyYzU1MjY3MGYyNDUyNzkyMDVkZGYzNTI2N2UyYzU1NjI4RXDmeA==: 00:29:03.098 16:37:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:03.098 16:37:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:29:03.098 16:37:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjI0OTU0OGQzZjgzZDgwYmUyMzFkOWI1ZTZiYjEzZWY2ZmI2MDZiZjRlYTI3MDU2RWRhzg==: 00:29:03.098 16:37:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NmNhZGU2ZTUwNmQ4ZjYyYzU1MjY3MGYyNDUyNzkyMDVkZGYzNTI2N2UyYzU1NjI4RXDmeA==: ]] 00:29:03.099 16:37:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NmNhZGU2ZTUwNmQ4ZjYyYzU1MjY3MGYyNDUyNzkyMDVkZGYzNTI2N2UyYzU1NjI4RXDmeA==: 00:29:03.099 16:37:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # 
connect_authenticate sha512 ffdhe3072 1 00:29:03.099 16:37:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:03.099 16:37:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:03.099 16:37:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:29:03.099 16:37:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:03.099 16:37:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:03.099 16:37:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:29:03.099 16:37:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:03.099 16:37:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:03.099 16:37:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:03.099 16:37:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:03.099 16:37:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # local ip 00:29:03.099 16:37:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates=() 00:29:03.099 16:37:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A ip_candidates 00:29:03.099 16:37:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:03.099 16:37:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:03.099 16:37:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z tcp ]] 00:29:03.099 16:37:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:03.099 16:37:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@754 -- # ip=NVMF_INITIATOR_IP 00:29:03.099 16:37:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@756 -- # [[ -z 10.0.0.1 ]] 00:29:03.099 16:37:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@761 -- # echo 10.0.0.1 
00:29:03.099 16:37:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:03.099 16:37:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:03.099 16:37:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:03.359 nvme0n1 00:29:03.359 16:37:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:03.359 16:37:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:03.359 16:37:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:03.359 16:37:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:03.359 16:37:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:03.359 16:37:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:03.359 16:37:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:03.359 16:37:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:03.359 16:37:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:03.359 16:37:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:03.359 16:37:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:03.359 16:37:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:03.359 16:37:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:29:03.359 16:37:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:03.359 16:37:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:03.359 16:37:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # 
dhgroup=ffdhe3072 00:29:03.359 16:37:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:03.359 16:37:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjdiOWY2YWVmYjYzNGYzM2MwZmQ0Mjk3MDIzZDM4MjkjlWYU: 00:29:03.359 16:37:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWM1MjAwMDgzMmNhMWRiMjVkY2IwOTA2MWU5NjMzY2IyIUH4: 00:29:03.359 16:37:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:03.359 16:37:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:29:03.359 16:37:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjdiOWY2YWVmYjYzNGYzM2MwZmQ0Mjk3MDIzZDM4MjkjlWYU: 00:29:03.359 16:37:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWM1MjAwMDgzMmNhMWRiMjVkY2IwOTA2MWU5NjMzY2IyIUH4: ]] 00:29:03.359 16:37:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWM1MjAwMDgzMmNhMWRiMjVkY2IwOTA2MWU5NjMzY2IyIUH4: 00:29:03.359 16:37:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:29:03.359 16:37:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:03.359 16:37:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:03.359 16:37:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:29:03.359 16:37:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:03.359 16:37:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:03.359 16:37:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:29:03.359 16:37:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:03.359 16:37:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:03.359 16:37:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:03.359 16:37:30 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:03.359 16:37:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # local ip 00:29:03.359 16:37:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates=() 00:29:03.359 16:37:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A ip_candidates 00:29:03.359 16:37:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:03.359 16:37:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:03.359 16:37:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z tcp ]] 00:29:03.359 16:37:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:03.359 16:37:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@754 -- # ip=NVMF_INITIATOR_IP 00:29:03.359 16:37:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@756 -- # [[ -z 10.0.0.1 ]] 00:29:03.359 16:37:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@761 -- # echo 10.0.0.1 00:29:03.359 16:37:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:03.359 16:37:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:03.359 16:37:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:03.620 nvme0n1 00:29:03.620 16:37:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:03.620 16:37:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:03.620 16:37:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:03.620 16:37:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:03.620 16:37:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:03.620 16:37:30 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:03.620 16:37:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:03.620 16:37:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:03.620 16:37:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:03.620 16:37:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:03.620 16:37:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:03.620 16:37:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:03.620 16:37:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:29:03.620 16:37:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:03.620 16:37:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:03.620 16:37:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:03.620 16:37:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:03.620 16:37:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NWE5Y2E4ZDk1NjQzYTNlOGU1ZmQwZGRiNjYyNmQ2MDEzZTQxMWRiNjY3YzBmYWU58nv+fA==: 00:29:03.620 16:37:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MDUxMzQwM2NmZjcyMzNlNmI5ZTNkMzFhMWYxZjk2NDaFGvh7: 00:29:03.620 16:37:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:03.620 16:37:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:29:03.620 16:37:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NWE5Y2E4ZDk1NjQzYTNlOGU1ZmQwZGRiNjYyNmQ2MDEzZTQxMWRiNjY3YzBmYWU58nv+fA==: 00:29:03.620 16:37:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MDUxMzQwM2NmZjcyMzNlNmI5ZTNkMzFhMWYxZjk2NDaFGvh7: ]] 00:29:03.620 16:37:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:00:MDUxMzQwM2NmZjcyMzNlNmI5ZTNkMzFhMWYxZjk2NDaFGvh7: 00:29:03.620 16:37:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:29:03.620 16:37:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:03.620 16:37:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:03.620 16:37:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:29:03.620 16:37:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:03.620 16:37:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:03.620 16:37:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:29:03.620 16:37:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:03.620 16:37:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:03.620 16:37:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:03.620 16:37:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:03.620 16:37:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # local ip 00:29:03.620 16:37:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates=() 00:29:03.620 16:37:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A ip_candidates 00:29:03.620 16:37:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:03.620 16:37:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:03.620 16:37:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z tcp ]] 00:29:03.620 16:37:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:03.620 16:37:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@754 -- # ip=NVMF_INITIATOR_IP 00:29:03.620 16:37:30 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@756 -- # [[ -z 10.0.0.1 ]] 00:29:03.620 16:37:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@761 -- # echo 10.0.0.1 00:29:03.620 16:37:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:03.620 16:37:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:03.620 16:37:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:03.880 nvme0n1 00:29:03.880 16:37:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:03.880 16:37:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:03.880 16:37:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:03.880 16:37:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:03.880 16:37:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:03.880 16:37:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:03.880 16:37:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:03.880 16:37:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:03.880 16:37:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:03.880 16:37:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:03.880 16:37:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:03.880 16:37:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:03.880 16:37:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:29:03.880 16:37:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:03.880 16:37:30 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:03.880 16:37:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:03.880 16:37:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:03.880 16:37:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODk3NjVjMDdiNWZjMzY5OTE4MmMyY2Q2NTc0NDkyZjdiYzEwOTE5YTVlNzM2YzE2ODQ0NDc0MDQwY2ExZDY4ZqAKpkU=: 00:29:03.880 16:37:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:03.880 16:37:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:03.880 16:37:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:29:03.880 16:37:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODk3NjVjMDdiNWZjMzY5OTE4MmMyY2Q2NTc0NDkyZjdiYzEwOTE5YTVlNzM2YzE2ODQ0NDc0MDQwY2ExZDY4ZqAKpkU=: 00:29:03.880 16:37:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:03.880 16:37:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:29:03.880 16:37:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:03.880 16:37:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:03.880 16:37:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:29:03.880 16:37:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:03.880 16:37:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:03.880 16:37:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:29:03.880 16:37:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:03.880 16:37:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:03.880 16:37:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:03.880 16:37:30 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:29:03.880 16:37:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # local ip 00:29:03.880 16:37:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates=() 00:29:03.881 16:37:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A ip_candidates 00:29:03.881 16:37:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:03.881 16:37:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:03.881 16:37:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z tcp ]] 00:29:03.881 16:37:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:03.881 16:37:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@754 -- # ip=NVMF_INITIATOR_IP 00:29:03.881 16:37:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@756 -- # [[ -z 10.0.0.1 ]] 00:29:03.881 16:37:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@761 -- # echo 10.0.0.1 00:29:03.881 16:37:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:03.881 16:37:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:03.881 16:37:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:04.241 nvme0n1 00:29:04.241 16:37:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:04.241 16:37:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:04.241 16:37:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:04.241 16:37:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:04.241 16:37:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:04.241 16:37:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 
]] 00:29:04.241 16:37:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:04.241 16:37:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:04.241 16:37:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:04.241 16:37:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:04.241 16:37:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:04.241 16:37:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:04.241 16:37:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:04.241 16:37:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:29:04.241 16:37:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:04.241 16:37:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:04.241 16:37:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:04.241 16:37:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:04.241 16:37:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGFiZjJkMmY4NGIxMjA1ODIxYzcwYzgxZDdlMTUzYzQhjRjH: 00:29:04.241 16:37:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzgxODBmMzUyNjI2M2ZmYzYzZDRlYTc4MTFkNDFiYjI1N2M1ZmIzM2EwODljZjc3NDBjNTBmNDJmOWU4YWVjNo6++vs=: 00:29:04.241 16:37:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:04.241 16:37:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:04.241 16:37:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGFiZjJkMmY4NGIxMjA1ODIxYzcwYzgxZDdlMTUzYzQhjRjH: 00:29:04.241 16:37:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzgxODBmMzUyNjI2M2ZmYzYzZDRlYTc4MTFkNDFiYjI1N2M1ZmIzM2EwODljZjc3NDBjNTBmNDJmOWU4YWVjNo6++vs=: ]] 00:29:04.241 16:37:30 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzgxODBmMzUyNjI2M2ZmYzYzZDRlYTc4MTFkNDFiYjI1N2M1ZmIzM2EwODljZjc3NDBjNTBmNDJmOWU4YWVjNo6++vs=: 00:29:04.241 16:37:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:29:04.241 16:37:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:04.241 16:37:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:04.241 16:37:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:04.241 16:37:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:04.241 16:37:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:04.241 16:37:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:29:04.241 16:37:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:04.241 16:37:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:04.241 16:37:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:04.241 16:37:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:04.241 16:37:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # local ip 00:29:04.241 16:37:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates=() 00:29:04.241 16:37:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A ip_candidates 00:29:04.241 16:37:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:04.241 16:37:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:04.241 16:37:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z tcp ]] 00:29:04.241 16:37:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:04.241 16:37:30 nvmf_tcp.nvmf_auth_host 
-- nvmf/common.sh@754 -- # ip=NVMF_INITIATOR_IP 00:29:04.241 16:37:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@756 -- # [[ -z 10.0.0.1 ]] 00:29:04.241 16:37:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@761 -- # echo 10.0.0.1 00:29:04.241 16:37:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:04.241 16:37:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:04.241 16:37:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:04.502 nvme0n1 00:29:04.502 16:37:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:04.502 16:37:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:04.502 16:37:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:04.502 16:37:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:04.502 16:37:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:04.502 16:37:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:04.502 16:37:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:04.502 16:37:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:04.502 16:37:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:04.503 16:37:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:04.503 16:37:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:04.503 16:37:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:04.503 16:37:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:29:04.503 16:37:31 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:04.503 16:37:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:04.503 16:37:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:04.503 16:37:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:04.503 16:37:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjI0OTU0OGQzZjgzZDgwYmUyMzFkOWI1ZTZiYjEzZWY2ZmI2MDZiZjRlYTI3MDU2RWRhzg==: 00:29:04.503 16:37:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NmNhZGU2ZTUwNmQ4ZjYyYzU1MjY3MGYyNDUyNzkyMDVkZGYzNTI2N2UyYzU1NjI4RXDmeA==: 00:29:04.503 16:37:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:04.503 16:37:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:04.503 16:37:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjI0OTU0OGQzZjgzZDgwYmUyMzFkOWI1ZTZiYjEzZWY2ZmI2MDZiZjRlYTI3MDU2RWRhzg==: 00:29:04.503 16:37:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NmNhZGU2ZTUwNmQ4ZjYyYzU1MjY3MGYyNDUyNzkyMDVkZGYzNTI2N2UyYzU1NjI4RXDmeA==: ]] 00:29:04.503 16:37:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NmNhZGU2ZTUwNmQ4ZjYyYzU1MjY3MGYyNDUyNzkyMDVkZGYzNTI2N2UyYzU1NjI4RXDmeA==: 00:29:04.503 16:37:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:29:04.503 16:37:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:04.503 16:37:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:04.503 16:37:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:04.503 16:37:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:04.503 16:37:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:04.503 16:37:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:29:04.503 16:37:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:04.503 16:37:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:04.503 16:37:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:04.503 16:37:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:04.503 16:37:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # local ip 00:29:04.503 16:37:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates=() 00:29:04.503 16:37:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A ip_candidates 00:29:04.503 16:37:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:04.503 16:37:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:04.503 16:37:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z tcp ]] 00:29:04.503 16:37:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:04.503 16:37:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@754 -- # ip=NVMF_INITIATOR_IP 00:29:04.503 16:37:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@756 -- # [[ -z 10.0.0.1 ]] 00:29:04.503 16:37:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@761 -- # echo 10.0.0.1 00:29:04.503 16:37:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:04.503 16:37:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:04.503 16:37:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:04.764 nvme0n1 00:29:04.764 16:37:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:04.764 16:37:31 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:04.764 16:37:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:04.764 16:37:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:04.764 16:37:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:04.764 16:37:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:04.764 16:37:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:04.764 16:37:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:04.764 16:37:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:04.764 16:37:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:04.764 16:37:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:04.764 16:37:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:04.764 16:37:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:29:04.765 16:37:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:04.765 16:37:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:04.765 16:37:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:04.765 16:37:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:04.765 16:37:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjdiOWY2YWVmYjYzNGYzM2MwZmQ0Mjk3MDIzZDM4MjkjlWYU: 00:29:04.765 16:37:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWM1MjAwMDgzMmNhMWRiMjVkY2IwOTA2MWU5NjMzY2IyIUH4: 00:29:04.765 16:37:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:04.765 16:37:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:04.765 16:37:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:01:MjdiOWY2YWVmYjYzNGYzM2MwZmQ0Mjk3MDIzZDM4MjkjlWYU: 00:29:04.765 16:37:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWM1MjAwMDgzMmNhMWRiMjVkY2IwOTA2MWU5NjMzY2IyIUH4: ]] 00:29:04.765 16:37:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWM1MjAwMDgzMmNhMWRiMjVkY2IwOTA2MWU5NjMzY2IyIUH4: 00:29:04.765 16:37:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:29:04.765 16:37:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:04.765 16:37:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:04.765 16:37:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:04.765 16:37:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:04.765 16:37:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:04.765 16:37:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:29:04.765 16:37:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:04.765 16:37:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:04.765 16:37:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:04.765 16:37:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:04.765 16:37:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # local ip 00:29:04.765 16:37:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates=() 00:29:04.765 16:37:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A ip_candidates 00:29:04.765 16:37:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:04.765 16:37:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:04.765 16:37:31 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@753 -- # [[ -z tcp ]] 00:29:04.765 16:37:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:04.765 16:37:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@754 -- # ip=NVMF_INITIATOR_IP 00:29:04.765 16:37:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@756 -- # [[ -z 10.0.0.1 ]] 00:29:04.765 16:37:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@761 -- # echo 10.0.0.1 00:29:04.765 16:37:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:04.765 16:37:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:04.765 16:37:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:05.025 nvme0n1 00:29:05.026 16:37:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:05.026 16:37:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:05.026 16:37:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:05.026 16:37:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:05.026 16:37:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:05.286 16:37:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:05.286 16:37:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:05.286 16:37:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:05.286 16:37:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:05.286 16:37:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:05.286 16:37:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:05.286 16:37:31 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:05.286 16:37:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:29:05.286 16:37:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:05.286 16:37:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:05.286 16:37:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:05.286 16:37:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:05.286 16:37:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NWE5Y2E4ZDk1NjQzYTNlOGU1ZmQwZGRiNjYyNmQ2MDEzZTQxMWRiNjY3YzBmYWU58nv+fA==: 00:29:05.286 16:37:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MDUxMzQwM2NmZjcyMzNlNmI5ZTNkMzFhMWYxZjk2NDaFGvh7: 00:29:05.286 16:37:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:05.286 16:37:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:05.286 16:37:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NWE5Y2E4ZDk1NjQzYTNlOGU1ZmQwZGRiNjYyNmQ2MDEzZTQxMWRiNjY3YzBmYWU58nv+fA==: 00:29:05.286 16:37:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MDUxMzQwM2NmZjcyMzNlNmI5ZTNkMzFhMWYxZjk2NDaFGvh7: ]] 00:29:05.286 16:37:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MDUxMzQwM2NmZjcyMzNlNmI5ZTNkMzFhMWYxZjk2NDaFGvh7: 00:29:05.286 16:37:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:29:05.286 16:37:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:05.286 16:37:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:05.286 16:37:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:05.286 16:37:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:05.286 16:37:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:29:05.286 16:37:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:29:05.286 16:37:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:05.286 16:37:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:05.286 16:37:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:05.286 16:37:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:05.286 16:37:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # local ip 00:29:05.286 16:37:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates=() 00:29:05.286 16:37:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A ip_candidates 00:29:05.286 16:37:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:05.286 16:37:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:05.286 16:37:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z tcp ]] 00:29:05.286 16:37:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:05.286 16:37:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@754 -- # ip=NVMF_INITIATOR_IP 00:29:05.286 16:37:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@756 -- # [[ -z 10.0.0.1 ]] 00:29:05.286 16:37:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@761 -- # echo 10.0.0.1 00:29:05.286 16:37:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:05.286 16:37:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:05.286 16:37:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:05.546 nvme0n1 00:29:05.546 16:37:32 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:05.546 16:37:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:05.546 16:37:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:05.546 16:37:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:05.546 16:37:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:05.546 16:37:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:05.546 16:37:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:05.546 16:37:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:05.546 16:37:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:05.546 16:37:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:05.546 16:37:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:05.546 16:37:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:05.546 16:37:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:29:05.546 16:37:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:05.546 16:37:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:05.546 16:37:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:05.546 16:37:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:05.546 16:37:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODk3NjVjMDdiNWZjMzY5OTE4MmMyY2Q2NTc0NDkyZjdiYzEwOTE5YTVlNzM2YzE2ODQ0NDc0MDQwY2ExZDY4ZqAKpkU=: 00:29:05.546 16:37:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:05.546 16:37:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:05.546 16:37:32 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@49 -- # echo ffdhe4096 00:29:05.546 16:37:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODk3NjVjMDdiNWZjMzY5OTE4MmMyY2Q2NTc0NDkyZjdiYzEwOTE5YTVlNzM2YzE2ODQ0NDc0MDQwY2ExZDY4ZqAKpkU=: 00:29:05.546 16:37:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:05.546 16:37:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:29:05.546 16:37:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:05.546 16:37:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:05.546 16:37:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:05.546 16:37:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:05.546 16:37:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:05.546 16:37:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:29:05.546 16:37:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:05.546 16:37:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:05.546 16:37:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:05.546 16:37:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:05.546 16:37:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # local ip 00:29:05.546 16:37:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates=() 00:29:05.546 16:37:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A ip_candidates 00:29:05.546 16:37:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:05.546 16:37:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:05.546 16:37:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z tcp ]] 
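The `ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})` lines repeated throughout this log (host/auth.sh@58) use bash's `${var:+word}` expansion to build an argument array that is non-empty only when a controller key exists for that keyid — which is why the keyid=4 attach above carries no `--dhchap-ctrlr-key` flag (`ckey=` is empty there). A minimal standalone sketch of the idiom; the array contents below are illustrative placeholders, not the test's real keys:

```shell
#!/usr/bin/env bash
# ${ckeys[keyid]:+...} expands to the quoted word list only when
# ckeys[keyid] is set and non-empty; otherwise it expands to nothing,
# leaving ckey=() so no extra flag is passed to the RPC call.
ckeys=( "secretA" "" "secretC" )   # illustrative; keyid 1 has no ctrlr key
for keyid in 0 1 2; do
  ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
  echo "keyid=$keyid argc=${#ckey[@]} args=${ckey[*]}"
done
# keyid=0 argc=2 args=--dhchap-ctrlr-key ckey0
# keyid=1 argc=0 args=
# keyid=2 argc=2 args=--dhchap-ctrlr-key ckey2
```

Because the array is expanded as `"${ckey[@]}"` at the call site, an empty array contributes zero arguments rather than an empty string.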
00:29:05.546 16:37:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:05.546 16:37:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@754 -- # ip=NVMF_INITIATOR_IP 00:29:05.546 16:37:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@756 -- # [[ -z 10.0.0.1 ]] 00:29:05.547 16:37:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@761 -- # echo 10.0.0.1 00:29:05.547 16:37:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:05.547 16:37:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:05.547 16:37:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:05.807 nvme0n1 00:29:05.807 16:37:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:05.807 16:37:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:05.807 16:37:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:05.807 16:37:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:05.807 16:37:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:05.807 16:37:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:05.807 16:37:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:05.807 16:37:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:05.807 16:37:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:05.807 16:37:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:05.807 16:37:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:05.807 16:37:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:05.807 
16:37:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:05.807 16:37:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:29:05.807 16:37:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:05.807 16:37:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:05.807 16:37:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:05.807 16:37:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:05.807 16:37:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGFiZjJkMmY4NGIxMjA1ODIxYzcwYzgxZDdlMTUzYzQhjRjH: 00:29:05.807 16:37:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzgxODBmMzUyNjI2M2ZmYzYzZDRlYTc4MTFkNDFiYjI1N2M1ZmIzM2EwODljZjc3NDBjNTBmNDJmOWU4YWVjNo6++vs=: 00:29:05.807 16:37:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:05.807 16:37:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:05.807 16:37:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGFiZjJkMmY4NGIxMjA1ODIxYzcwYzgxZDdlMTUzYzQhjRjH: 00:29:05.807 16:37:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzgxODBmMzUyNjI2M2ZmYzYzZDRlYTc4MTFkNDFiYjI1N2M1ZmIzM2EwODljZjc3NDBjNTBmNDJmOWU4YWVjNo6++vs=: ]] 00:29:05.807 16:37:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzgxODBmMzUyNjI2M2ZmYzYzZDRlYTc4MTFkNDFiYjI1N2M1ZmIzM2EwODljZjc3NDBjNTBmNDJmOWU4YWVjNo6++vs=: 00:29:05.807 16:37:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:29:05.807 16:37:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:05.807 16:37:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:05.807 16:37:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:05.807 16:37:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 
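The `DHHC-1:00:…:` strings echoed above are NVMe DH-HMAC-CHAP secrets in their textual representation. As I understand the format from nvme-cli's `gen-dhchap-key` (treat the details as assumptions, not taken from this test's scripts): the second field identifies the hash transform (`00` = none, `01`/`02`/`03` = SHA-256/384/512), and the third field is base64 of the raw secret followed by its little-endian CRC-32, colon-terminated. A round-trip sketch:

```python
import base64
import struct
import zlib

def format_dhchap_key(secret: bytes, hash_id: str = "00") -> str:
    """Encode a raw secret in the DHHC-1 textual form seen in this log:
    base64 of secret || CRC32(secret) (little-endian), colon-terminated."""
    crc = struct.pack("<I", zlib.crc32(secret) & 0xFFFFFFFF)
    return f"DHHC-1:{hash_id}:{base64.b64encode(secret + crc).decode()}:"

def parse_dhchap_key(key: str) -> bytes:
    """Decode a DHHC-1 string, verify its CRC-32, return the raw secret."""
    prefix, _hash_id, b64, _trailer = key.split(":")
    assert prefix == "DHHC-1"
    blob = base64.b64decode(b64)
    secret, crc = blob[:-4], blob[-4:]
    assert struct.unpack("<I", crc)[0] == (zlib.crc32(secret) & 0xFFFFFFFF)
    return secret

# Illustrative 32-byte secret, not one of the keys from this log.
secret = b"0123456789abcdef0123456789abcdef"
encoded = format_dhchap_key(secret)
assert parse_dhchap_key(encoded) == secret
```

The embedded CRC is what lets the target reject a mistyped key before any authentication exchange starts.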
00:29:05.807 16:37:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:05.807 16:37:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:29:05.807 16:37:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:05.807 16:37:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:05.807 16:37:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:05.807 16:37:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:05.807 16:37:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # local ip 00:29:05.807 16:37:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates=() 00:29:05.807 16:37:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A ip_candidates 00:29:05.807 16:37:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:05.807 16:37:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:05.807 16:37:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z tcp ]] 00:29:05.807 16:37:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:05.807 16:37:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@754 -- # ip=NVMF_INITIATOR_IP 00:29:05.807 16:37:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@756 -- # [[ -z 10.0.0.1 ]] 00:29:05.807 16:37:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@761 -- # echo 10.0.0.1 00:29:05.807 16:37:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:05.807 16:37:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:05.807 16:37:32 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:06.377 nvme0n1 00:29:06.377 16:37:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:06.377 16:37:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:06.377 16:37:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:06.377 16:37:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:06.377 16:37:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:06.377 16:37:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:06.377 16:37:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:06.377 16:37:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:06.377 16:37:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:06.377 16:37:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:06.377 16:37:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:06.377 16:37:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:06.377 16:37:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:29:06.377 16:37:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:06.377 16:37:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:06.377 16:37:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:06.377 16:37:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:06.377 16:37:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjI0OTU0OGQzZjgzZDgwYmUyMzFkOWI1ZTZiYjEzZWY2ZmI2MDZiZjRlYTI3MDU2RWRhzg==: 00:29:06.377 16:37:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:NmNhZGU2ZTUwNmQ4ZjYyYzU1MjY3MGYyNDUyNzkyMDVkZGYzNTI2N2UyYzU1NjI4RXDmeA==: 00:29:06.377 16:37:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:06.377 16:37:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:06.377 16:37:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjI0OTU0OGQzZjgzZDgwYmUyMzFkOWI1ZTZiYjEzZWY2ZmI2MDZiZjRlYTI3MDU2RWRhzg==: 00:29:06.377 16:37:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NmNhZGU2ZTUwNmQ4ZjYyYzU1MjY3MGYyNDUyNzkyMDVkZGYzNTI2N2UyYzU1NjI4RXDmeA==: ]] 00:29:06.377 16:37:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NmNhZGU2ZTUwNmQ4ZjYyYzU1MjY3MGYyNDUyNzkyMDVkZGYzNTI2N2UyYzU1NjI4RXDmeA==: 00:29:06.378 16:37:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:29:06.378 16:37:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:06.378 16:37:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:06.378 16:37:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:06.378 16:37:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:06.378 16:37:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:06.378 16:37:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:29:06.378 16:37:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:06.378 16:37:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:06.378 16:37:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:06.378 16:37:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:06.378 16:37:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # local ip 00:29:06.378 16:37:33 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@748 -- # ip_candidates=() 00:29:06.378 16:37:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A ip_candidates 00:29:06.378 16:37:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:06.378 16:37:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:06.378 16:37:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z tcp ]] 00:29:06.378 16:37:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:06.378 16:37:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@754 -- # ip=NVMF_INITIATOR_IP 00:29:06.378 16:37:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@756 -- # [[ -z 10.0.0.1 ]] 00:29:06.378 16:37:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@761 -- # echo 10.0.0.1 00:29:06.378 16:37:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:06.378 16:37:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:06.378 16:37:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:06.952 nvme0n1 00:29:06.952 16:37:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:06.952 16:37:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:06.952 16:37:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:06.952 16:37:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:06.952 16:37:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:06.952 16:37:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:06.952 16:37:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:06.952 16:37:33 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:06.952 16:37:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:06.952 16:37:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:06.952 16:37:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:06.952 16:37:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:06.952 16:37:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:29:06.952 16:37:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:06.952 16:37:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:06.952 16:37:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:06.952 16:37:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:06.952 16:37:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjdiOWY2YWVmYjYzNGYzM2MwZmQ0Mjk3MDIzZDM4MjkjlWYU: 00:29:06.952 16:37:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWM1MjAwMDgzMmNhMWRiMjVkY2IwOTA2MWU5NjMzY2IyIUH4: 00:29:06.952 16:37:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:06.952 16:37:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:06.952 16:37:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjdiOWY2YWVmYjYzNGYzM2MwZmQ0Mjk3MDIzZDM4MjkjlWYU: 00:29:06.952 16:37:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWM1MjAwMDgzMmNhMWRiMjVkY2IwOTA2MWU5NjMzY2IyIUH4: ]] 00:29:06.952 16:37:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWM1MjAwMDgzMmNhMWRiMjVkY2IwOTA2MWU5NjMzY2IyIUH4: 00:29:06.952 16:37:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:29:06.952 16:37:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid 
ckey 00:29:06.952 16:37:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:06.952 16:37:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:06.953 16:37:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:06.953 16:37:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:06.953 16:37:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:29:06.953 16:37:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:06.953 16:37:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:06.953 16:37:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:06.953 16:37:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:06.953 16:37:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # local ip 00:29:06.953 16:37:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates=() 00:29:06.953 16:37:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A ip_candidates 00:29:06.953 16:37:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:06.953 16:37:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:06.953 16:37:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z tcp ]] 00:29:06.953 16:37:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:06.953 16:37:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@754 -- # ip=NVMF_INITIATOR_IP 00:29:06.953 16:37:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@756 -- # [[ -z 10.0.0.1 ]] 00:29:06.953 16:37:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@761 -- # echo 10.0.0.1 00:29:06.953 16:37:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:06.953 16:37:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:06.953 16:37:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:07.525 nvme0n1 00:29:07.525 16:37:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:07.525 16:37:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:07.525 16:37:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:07.525 16:37:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:07.525 16:37:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:07.525 16:37:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:07.525 16:37:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:07.525 16:37:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:07.525 16:37:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:07.525 16:37:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:07.525 16:37:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:07.525 16:37:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:07.525 16:37:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:29:07.525 16:37:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:07.525 16:37:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:07.525 16:37:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:07.525 16:37:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:07.525 16:37:34 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NWE5Y2E4ZDk1NjQzYTNlOGU1ZmQwZGRiNjYyNmQ2MDEzZTQxMWRiNjY3YzBmYWU58nv+fA==: 00:29:07.525 16:37:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MDUxMzQwM2NmZjcyMzNlNmI5ZTNkMzFhMWYxZjk2NDaFGvh7: 00:29:07.525 16:37:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:07.525 16:37:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:07.525 16:37:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NWE5Y2E4ZDk1NjQzYTNlOGU1ZmQwZGRiNjYyNmQ2MDEzZTQxMWRiNjY3YzBmYWU58nv+fA==: 00:29:07.525 16:37:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MDUxMzQwM2NmZjcyMzNlNmI5ZTNkMzFhMWYxZjk2NDaFGvh7: ]] 00:29:07.525 16:37:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MDUxMzQwM2NmZjcyMzNlNmI5ZTNkMzFhMWYxZjk2NDaFGvh7: 00:29:07.525 16:37:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:29:07.525 16:37:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:07.525 16:37:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:07.525 16:37:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:07.525 16:37:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:07.525 16:37:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:07.525 16:37:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:29:07.525 16:37:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:07.525 16:37:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:07.525 16:37:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:07.525 16:37:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 
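This stretch of the log walks sha512 across ffdhe4096, ffdhe6144, and ffdhe8192 for keyids 0 through 4, driven by the nested loops at host/auth.sh@101 (`for dhgroup`) and @102 (`for keyid`). A hypothetical reconstruction of the case matrix; the digest and dhgroup lists below are inferred from what appears in the log, not read from the actual script:

```python
# Reconstruction of the loop nest behind this log section (assumed lists).
digests = ["sha256", "sha384", "sha512"]
dhgroups = ["ffdhe2048", "ffdhe3072", "ffdhe4096", "ffdhe6144", "ffdhe8192"]
keyids = [0, 1, 2, 3, 4]

cases = [(d, g, k) for d in digests for g in dhgroups for k in keyids]

# Each case corresponds to one block of this log: set the key on the
# target (nvmet_auth_set_key), restrict the host to a single
# digest/dhgroup pair (bdev_nvme_set_options), attach the controller,
# verify it appears in bdev_nvme_get_controllers, then detach.
for digest, dhgroup, keyid in cases:
    pass  # the real test issues the RPC sequence described above
```

Keyids 0-3 exercise bidirectional authentication (host key plus controller key), while keyid 4 has no controller key and so tests the unidirectional path.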
00:29:07.525 16:37:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # local ip 00:29:07.525 16:37:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates=() 00:29:07.525 16:37:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A ip_candidates 00:29:07.525 16:37:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:07.525 16:37:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:07.525 16:37:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z tcp ]] 00:29:07.525 16:37:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:07.525 16:37:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@754 -- # ip=NVMF_INITIATOR_IP 00:29:07.525 16:37:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@756 -- # [[ -z 10.0.0.1 ]] 00:29:07.525 16:37:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@761 -- # echo 10.0.0.1 00:29:07.525 16:37:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:07.525 16:37:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:07.525 16:37:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:08.096 nvme0n1 00:29:08.096 16:37:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:08.096 16:37:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:08.096 16:37:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:08.096 16:37:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:08.096 16:37:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:08.096 16:37:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 
00:29:08.096 16:37:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:29:08.096 16:37:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:29:08.096 16:37:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable
00:29:08.096 16:37:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:08.096 16:37:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:29:08.096 16:37:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:29:08.096 16:37:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4
00:29:08.096 16:37:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:29:08.096 16:37:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:29:08.096 16:37:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:29:08.096 16:37:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:29:08.096 16:37:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODk3NjVjMDdiNWZjMzY5OTE4MmMyY2Q2NTc0NDkyZjdiYzEwOTE5YTVlNzM2YzE2ODQ0NDc0MDQwY2ExZDY4ZqAKpkU=:
00:29:08.096 16:37:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:29:08.096 16:37:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:29:08.096 16:37:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:29:08.096 16:37:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODk3NjVjMDdiNWZjMzY5OTE4MmMyY2Q2NTc0NDkyZjdiYzEwOTE5YTVlNzM2YzE2ODQ0NDc0MDQwY2ExZDY4ZqAKpkU=:
00:29:08.096 16:37:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:29:08.096 16:37:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4
00:29:08.096 16:37:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:29:08.096 16:37:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:29:08.096 16:37:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:29:08.096 16:37:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:29:08.096 16:37:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:29:08.096 16:37:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:29:08.096 16:37:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable
00:29:08.096 16:37:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:08.096 16:37:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:29:08.096 16:37:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:29:08.096 16:37:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # local ip
00:29:08.096 16:37:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates=()
00:29:08.096 16:37:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A ip_candidates
00:29:08.096 16:37:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:29:08.096 16:37:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:29:08.096 16:37:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z tcp ]]
00:29:08.096 16:37:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z NVMF_INITIATOR_IP ]]
00:29:08.096 16:37:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@754 -- # ip=NVMF_INITIATOR_IP
00:29:08.096 16:37:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@756 -- # [[ -z 10.0.0.1 ]]
00:29:08.096 16:37:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@761 -- # echo 10.0.0.1
00:29:08.096 16:37:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:29:08.096 16:37:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable
00:29:08.096 16:37:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:08.666 nvme0n1
00:29:08.666 16:37:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:29:08.666 16:37:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:29:08.666 16:37:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable
00:29:08.666 16:37:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:29:08.666 16:37:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:08.666 16:37:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:29:08.666 16:37:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:29:08.666 16:37:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:29:08.666 16:37:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable
00:29:08.666 16:37:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:08.666 16:37:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:29:08.666 16:37:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:29:08.666 16:37:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:29:08.666 16:37:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0
00:29:08.666 16:37:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:29:08.666 16:37:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:29:08.666 16:37:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:29:08.666 16:37:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:29:08.666 16:37:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGFiZjJkMmY4NGIxMjA1ODIxYzcwYzgxZDdlMTUzYzQhjRjH:
00:29:08.666 16:37:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzgxODBmMzUyNjI2M2ZmYzYzZDRlYTc4MTFkNDFiYjI1N2M1ZmIzM2EwODljZjc3NDBjNTBmNDJmOWU4YWVjNo6++vs=:
00:29:08.666 16:37:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:29:08.666 16:37:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:29:08.666 16:37:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGFiZjJkMmY4NGIxMjA1ODIxYzcwYzgxZDdlMTUzYzQhjRjH:
00:29:08.666 16:37:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzgxODBmMzUyNjI2M2ZmYzYzZDRlYTc4MTFkNDFiYjI1N2M1ZmIzM2EwODljZjc3NDBjNTBmNDJmOWU4YWVjNo6++vs=: ]]
00:29:08.666 16:37:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzgxODBmMzUyNjI2M2ZmYzYzZDRlYTc4MTFkNDFiYjI1N2M1ZmIzM2EwODljZjc3NDBjNTBmNDJmOWU4YWVjNo6++vs=:
00:29:08.666 16:37:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0
00:29:08.666 16:37:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:29:08.666 16:37:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:29:08.666 16:37:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:29:08.666 16:37:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:29:08.666 16:37:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:29:08.666 16:37:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:29:08.666 16:37:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable
00:29:08.666 16:37:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:08.666 16:37:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:29:08.666 16:37:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:29:08.666 16:37:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # local ip
00:29:08.666 16:37:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates=()
00:29:08.666 16:37:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A ip_candidates
00:29:08.666 16:37:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:29:08.666 16:37:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:29:08.666 16:37:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z tcp ]]
00:29:08.666 16:37:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z NVMF_INITIATOR_IP ]]
00:29:08.666 16:37:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@754 -- # ip=NVMF_INITIATOR_IP
00:29:08.666 16:37:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@756 -- # [[ -z 10.0.0.1 ]]
00:29:08.666 16:37:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@761 -- # echo 10.0.0.1
00:29:08.666 16:37:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:29:08.666 16:37:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable
00:29:08.666 16:37:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:09.236 nvme0n1
00:29:09.236 16:37:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:29:09.236 16:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:29:09.236 16:37:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable
00:29:09.236 16:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:29:09.236 16:37:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:09.236 16:37:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:29:09.496 16:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:29:09.496 16:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:29:09.496 16:37:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable
00:29:09.496 16:37:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:09.496 16:37:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:29:09.496 16:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:29:09.496 16:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1
00:29:09.496 16:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:29:09.496 16:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:29:09.496 16:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:29:09.496 16:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:29:09.496 16:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjI0OTU0OGQzZjgzZDgwYmUyMzFkOWI1ZTZiYjEzZWY2ZmI2MDZiZjRlYTI3MDU2RWRhzg==:
00:29:09.496 16:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NmNhZGU2ZTUwNmQ4ZjYyYzU1MjY3MGYyNDUyNzkyMDVkZGYzNTI2N2UyYzU1NjI4RXDmeA==:
00:29:09.496 16:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:29:09.496 16:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:29:09.496 16:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjI0OTU0OGQzZjgzZDgwYmUyMzFkOWI1ZTZiYjEzZWY2ZmI2MDZiZjRlYTI3MDU2RWRhzg==:
00:29:09.496 16:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NmNhZGU2ZTUwNmQ4ZjYyYzU1MjY3MGYyNDUyNzkyMDVkZGYzNTI2N2UyYzU1NjI4RXDmeA==: ]]
00:29:09.496 16:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NmNhZGU2ZTUwNmQ4ZjYyYzU1MjY3MGYyNDUyNzkyMDVkZGYzNTI2N2UyYzU1NjI4RXDmeA==:
00:29:09.496 16:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1
00:29:09.496 16:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:29:09.496 16:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:29:09.496 16:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:29:09.496 16:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:29:09.496 16:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:29:09.496 16:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:29:09.496 16:37:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable
00:29:09.496 16:37:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:09.496 16:37:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:29:09.496 16:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:29:09.496 16:37:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # local ip
00:29:09.496 16:37:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates=()
00:29:09.496 16:37:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A ip_candidates
00:29:09.496 16:37:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:29:09.496 16:37:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:29:09.496 16:37:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z tcp ]]
00:29:09.496 16:37:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z NVMF_INITIATOR_IP ]]
00:29:09.496 16:37:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@754 -- # ip=NVMF_INITIATOR_IP
00:29:09.496 16:37:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@756 -- # [[ -z 10.0.0.1 ]]
00:29:09.496 16:37:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@761 -- # echo 10.0.0.1
00:29:09.496 16:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:29:09.496 16:37:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable
00:29:09.496 16:37:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:10.068 nvme0n1
00:29:10.068 16:37:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:29:10.068 16:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:29:10.068 16:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:29:10.068 16:37:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable
00:29:10.068 16:37:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:10.068 16:37:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:29:10.068 16:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:29:10.068 16:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:29:10.068 16:37:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable
00:29:10.068 16:37:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:10.068 16:37:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:29:10.068 16:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:29:10.068 16:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2
00:29:10.068 16:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:29:10.068 16:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:29:10.068 16:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:29:10.068 16:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:29:10.068 16:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjdiOWY2YWVmYjYzNGYzM2MwZmQ0Mjk3MDIzZDM4MjkjlWYU:
00:29:10.068 16:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWM1MjAwMDgzMmNhMWRiMjVkY2IwOTA2MWU5NjMzY2IyIUH4:
00:29:10.068 16:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:29:10.068 16:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:29:10.068 16:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjdiOWY2YWVmYjYzNGYzM2MwZmQ0Mjk3MDIzZDM4MjkjlWYU:
00:29:10.068 16:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWM1MjAwMDgzMmNhMWRiMjVkY2IwOTA2MWU5NjMzY2IyIUH4: ]]
00:29:10.068 16:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWM1MjAwMDgzMmNhMWRiMjVkY2IwOTA2MWU5NjMzY2IyIUH4:
00:29:10.068 16:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2
00:29:10.068 16:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:29:10.068 16:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:29:10.068 16:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:29:10.068 16:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:29:10.068 16:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:29:10.068 16:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:29:10.068 16:37:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable
00:29:10.068 16:37:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:10.068 16:37:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:29:10.068 16:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:29:10.068 16:37:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # local ip
00:29:10.068 16:37:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates=()
00:29:10.068 16:37:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A ip_candidates
00:29:10.068 16:37:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:29:10.068 16:37:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:29:10.068 16:37:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z tcp ]]
00:29:10.068 16:37:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z NVMF_INITIATOR_IP ]]
00:29:10.068 16:37:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@754 -- # ip=NVMF_INITIATOR_IP
00:29:10.068 16:37:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@756 -- # [[ -z 10.0.0.1 ]]
00:29:10.068 16:37:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@761 -- # echo 10.0.0.1
00:29:10.068 16:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:29:10.068 16:37:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable
00:29:10.068 16:37:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:11.010 nvme0n1
00:29:11.010 16:37:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:29:11.010 16:37:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:29:11.010 16:37:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:29:11.010 16:37:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable
00:29:11.010 16:37:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:11.010 16:37:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:29:11.010 16:37:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:29:11.010 16:37:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:29:11.010 16:37:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable
00:29:11.010 16:37:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:11.010 16:37:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:29:11.010 16:37:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:29:11.010 16:37:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3
00:29:11.010 16:37:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:29:11.010 16:37:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:29:11.010 16:37:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:29:11.010 16:37:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:29:11.010 16:37:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NWE5Y2E4ZDk1NjQzYTNlOGU1ZmQwZGRiNjYyNmQ2MDEzZTQxMWRiNjY3YzBmYWU58nv+fA==:
00:29:11.010 16:37:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MDUxMzQwM2NmZjcyMzNlNmI5ZTNkMzFhMWYxZjk2NDaFGvh7:
00:29:11.010 16:37:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:29:11.010 16:37:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:29:11.010 16:37:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NWE5Y2E4ZDk1NjQzYTNlOGU1ZmQwZGRiNjYyNmQ2MDEzZTQxMWRiNjY3YzBmYWU58nv+fA==:
00:29:11.010 16:37:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MDUxMzQwM2NmZjcyMzNlNmI5ZTNkMzFhMWYxZjk2NDaFGvh7: ]]
00:29:11.010 16:37:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MDUxMzQwM2NmZjcyMzNlNmI5ZTNkMzFhMWYxZjk2NDaFGvh7:
00:29:11.010 16:37:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3
00:29:11.010 16:37:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:29:11.010 16:37:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:29:11.010 16:37:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:29:11.011 16:37:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:29:11.011 16:37:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:29:11.011 16:37:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:29:11.011 16:37:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable
00:29:11.011 16:37:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:11.011 16:37:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:29:11.011 16:37:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:29:11.011 16:37:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # local ip
00:29:11.011 16:37:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates=()
00:29:11.011 16:37:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A ip_candidates
00:29:11.011 16:37:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:29:11.011 16:37:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:29:11.011 16:37:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z tcp ]]
00:29:11.011 16:37:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z NVMF_INITIATOR_IP ]]
00:29:11.011 16:37:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@754 -- # ip=NVMF_INITIATOR_IP
00:29:11.011 16:37:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@756 -- # [[ -z 10.0.0.1 ]]
00:29:11.011 16:37:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@761 -- # echo 10.0.0.1
00:29:11.011 16:37:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:29:11.011 16:37:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable
00:29:11.011 16:37:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:11.950 nvme0n1
00:29:11.951 16:37:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:29:11.951 16:37:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:29:11.951 16:37:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:29:11.951 16:37:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable
00:29:11.951 16:37:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:11.951 16:37:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:29:11.951 16:37:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:29:11.951 16:37:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:29:11.951 16:37:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable
00:29:11.951 16:37:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:11.951 16:37:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:29:11.951 16:37:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:29:11.951 16:37:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4
00:29:11.951 16:37:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:29:11.951 16:37:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:29:11.951 16:37:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:29:11.951 16:37:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:29:11.951 16:37:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODk3NjVjMDdiNWZjMzY5OTE4MmMyY2Q2NTc0NDkyZjdiYzEwOTE5YTVlNzM2YzE2ODQ0NDc0MDQwY2ExZDY4ZqAKpkU=:
00:29:11.951 16:37:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:29:11.951 16:37:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:29:11.951 16:37:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:29:11.951 16:37:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODk3NjVjMDdiNWZjMzY5OTE4MmMyY2Q2NTc0NDkyZjdiYzEwOTE5YTVlNzM2YzE2ODQ0NDc0MDQwY2ExZDY4ZqAKpkU=:
00:29:11.951 16:37:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:29:11.951 16:37:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4
00:29:11.951 16:37:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:29:11.951 16:37:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:29:11.951 16:37:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:29:11.951 16:37:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:29:11.951 16:37:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:29:11.951 16:37:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:29:11.951 16:37:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable
00:29:11.951 16:37:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:11.951 16:37:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:29:11.951 16:37:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:29:11.951 16:37:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # local ip
00:29:11.951 16:37:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates=()
00:29:11.951 16:37:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A ip_candidates
00:29:11.951 16:37:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:29:11.951 16:37:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:29:11.951 16:37:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z tcp ]]
00:29:11.951 16:37:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z NVMF_INITIATOR_IP ]]
00:29:11.951 16:37:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@754 -- # ip=NVMF_INITIATOR_IP
00:29:11.951 16:37:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@756 -- # [[ -z 10.0.0.1 ]]
00:29:11.951 16:37:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@761 -- # echo 10.0.0.1
00:29:11.951 16:37:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:29:11.951 16:37:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable
00:29:11.951 16:37:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:12.522 nvme0n1
00:29:12.522 16:37:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:29:12.522 16:37:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:29:12.522 16:37:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable
00:29:12.522 16:37:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:29:12.522 16:37:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:12.522 16:37:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:29:12.783 16:37:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:29:12.783 16:37:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:29:12.783 16:37:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable
00:29:12.783 16:37:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:12.783 16:37:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:29:12.783 16:37:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1
00:29:12.783 16:37:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:29:12.783 16:37:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:29:12.783 16:37:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:29:12.783 16:37:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:29:12.783 16:37:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjI0OTU0OGQzZjgzZDgwYmUyMzFkOWI1ZTZiYjEzZWY2ZmI2MDZiZjRlYTI3MDU2RWRhzg==:
00:29:12.783 16:37:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NmNhZGU2ZTUwNmQ4ZjYyYzU1MjY3MGYyNDUyNzkyMDVkZGYzNTI2N2UyYzU1NjI4RXDmeA==:
00:29:12.783 16:37:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:29:12.783 16:37:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:29:12.783 16:37:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjI0OTU0OGQzZjgzZDgwYmUyMzFkOWI1ZTZiYjEzZWY2ZmI2MDZiZjRlYTI3MDU2RWRhzg==:
00:29:12.783 16:37:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NmNhZGU2ZTUwNmQ4ZjYyYzU1MjY3MGYyNDUyNzkyMDVkZGYzNTI2N2UyYzU1NjI4RXDmeA==: ]]
00:29:12.783 16:37:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NmNhZGU2ZTUwNmQ4ZjYyYzU1MjY3MGYyNDUyNzkyMDVkZGYzNTI2N2UyYzU1NjI4RXDmeA==:
00:29:12.783 16:37:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:29:12.783 16:37:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable
00:29:12.783 16:37:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:12.783 16:37:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:29:12.783 16:37:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip
00:29:12.783 16:37:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # local ip
00:29:12.783 16:37:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates=()
00:29:12.783 16:37:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A ip_candidates
00:29:12.783 16:37:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:29:12.783 16:37:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:29:12.783 16:37:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z tcp ]]
00:29:12.783 16:37:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z NVMF_INITIATOR_IP ]]
00:29:12.783 16:37:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@754 -- # ip=NVMF_INITIATOR_IP
00:29:12.783 16:37:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@756 -- # [[ -z 10.0.0.1 ]]
00:29:12.783 16:37:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@761 -- # echo 10.0.0.1
00:29:12.784 16:37:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0
00:29:12.784 16:37:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@649 -- # local es=0
00:29:12.784 16:37:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0
00:29:12.784 16:37:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@637 -- # local arg=rpc_cmd
00:29:12.784 16:37:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in
00:29:12.784 16:37:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@641 -- # type -t rpc_cmd
00:29:12.784 16:37:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in
00:29:12.784 16:37:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@652 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0
00:29:12.784 16:37:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable
00:29:12.784 16:37:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:12.784 request:
00:29:12.784 {
00:29:12.784 "name": "nvme0",
00:29:12.784 "trtype": "tcp",
00:29:12.784 "traddr": "10.0.0.1",
00:29:12.784 "hostnqn": "nqn.2024-02.io.spdk:host0",
00:29:12.784 "adrfam": "ipv4",
00:29:12.784 "trsvcid": "4420",
00:29:12.784 "subnqn": "nqn.2024-02.io.spdk:cnode0",
00:29:12.784 "method": "bdev_nvme_attach_controller",
00:29:12.784 "req_id": 1
00:29:12.784 }
00:29:12.784 Got JSON-RPC error response
00:29:12.784 response:
00:29:12.784 {
00:29:12.784 "code": -5,
00:29:12.784 "message": "Input/output error"
00:29:12.784 }
00:29:12.784 16:37:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]]
00:29:12.784 16:37:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@652 -- # es=1
00:29:12.784 16:37:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@660 -- # (( es > 128 ))
00:29:12.784 16:37:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@671 -- # [[ -n '' ]]
00:29:12.784 16:37:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@676 -- # (( !es == 0 ))
00:29:12.784 16:37:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers
00:29:12.784 16:37:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable
00:29:12.784 16:37:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # jq length
00:29:12.784 16:37:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:12.784 16:37:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:29:12.784 16:37:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 ))
00:29:12.784 16:37:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip
00:29:12.784 16:37:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # local ip
00:29:12.784 16:37:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates=()
00:29:12.784 16:37:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A ip_candidates
00:29:12.784 16:37:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:29:12.784 16:37:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:29:12.784 16:37:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z tcp ]]
00:29:12.784 16:37:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z NVMF_INITIATOR_IP ]]
00:29:12.784 16:37:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@754 -- # ip=NVMF_INITIATOR_IP
00:29:12.784 16:37:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@756 -- # [[ -z 10.0.0.1 ]]
00:29:12.784 16:37:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@761 -- # echo 10.0.0.1
00:29:12.784 16:37:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2
00:29:12.784 16:37:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@649 -- # local es=0
00:29:12.784 16:37:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2
00:29:12.784 16:37:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@637 -- # local arg=rpc_cmd
00:29:12.784 16:37:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in
00:29:12.784 16:37:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@641 -- # type -t rpc_cmd
00:29:12.784 16:37:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in
00:29:12.784 16:37:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@652 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2
00:29:12.784 16:37:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable
00:29:12.784 16:37:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:12.784 request:
00:29:12.784 {
00:29:12.784 "name": "nvme0",
00:29:12.784 "trtype": "tcp",
00:29:12.784 "traddr": "10.0.0.1",
00:29:12.784 "hostnqn": "nqn.2024-02.io.spdk:host0",
00:29:12.784 "adrfam": "ipv4",
00:29:12.784 "trsvcid": "4420",
00:29:12.784 "subnqn": "nqn.2024-02.io.spdk:cnode0",
00:29:12.784 "dhchap_key": "key2",
00:29:12.784 "method": "bdev_nvme_attach_controller",
00:29:12.784 "req_id": 1
00:29:12.784 }
00:29:12.784 Got JSON-RPC error response
00:29:12.784 response:
00:29:12.784 {
00:29:12.784 "code": -5,
00:29:12.784 "message": "Input/output error"
00:29:12.784 }
00:29:12.784 16:37:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]]
00:29:12.784 16:37:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@652 -- # es=1
00:29:12.784 16:37:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@660 -- # (( es > 128 ))
00:29:12.784 16:37:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@671 -- # [[ -n '' ]]
00:29:12.784 16:37:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@676 -- # (( !es == 0 ))
00:29:12.784 16:37:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers
00:29:12.784 16:37:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # jq length
00:29:12.784 16:37:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable
00:29:12.784 16:37:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:12.784 16:37:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:29:13.045 16:37:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 ))
00:29:13.045 16:37:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip
00:29:13.045 16:37:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # local ip
00:29:13.045 16:37:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates=()
00:29:13.045 16:37:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A ip_candidates
00:29:13.045 16:37:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:29:13.045 16:37:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@751 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:29:13.045 16:37:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z tcp ]]
00:29:13.045 16:37:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@753 -- # [[ -z NVMF_INITIATOR_IP ]]
00:29:13.045 16:37:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@754 -- # ip=NVMF_INITIATOR_IP
00:29:13.045 16:37:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@756 -- # [[ -z 10.0.0.1 ]]
00:29:13.045 16:37:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@761 -- # echo 10.0.0.1
00:29:13.045 16:37:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:29:13.045 16:37:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@649 -- # local es=0 00:29:13.045 16:37:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:29:13.045 16:37:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:29:13.045 16:37:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:29:13.045 16:37:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:29:13.045 16:37:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:29:13.045 16:37:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@652 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:29:13.045 16:37:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:13.045 16:37:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:13.045 request: 00:29:13.045 { 00:29:13.045 "name": "nvme0", 00:29:13.045 "trtype": "tcp", 00:29:13.045 "traddr": "10.0.0.1", 00:29:13.045 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:29:13.045 "adrfam": "ipv4", 00:29:13.045 "trsvcid": "4420", 00:29:13.045 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:29:13.045 "dhchap_key": "key1", 00:29:13.045 "dhchap_ctrlr_key": "ckey2", 00:29:13.045 "method": "bdev_nvme_attach_controller", 00:29:13.045 "req_id": 1 00:29:13.045 } 00:29:13.045 Got JSON-RPC error response 00:29:13.045 response: 00:29:13.045 { 00:29:13.045 
"code": -5, 00:29:13.045 "message": "Input/output error" 00:29:13.045 } 00:29:13.045 16:37:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:29:13.045 16:37:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@652 -- # es=1 00:29:13.045 16:37:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:29:13.045 16:37:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:29:13.045 16:37:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:29:13.045 16:37:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:29:13.045 16:37:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@128 -- # cleanup 00:29:13.045 16:37:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:29:13.045 16:37:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:13.045 16:37:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@117 -- # sync 00:29:13.045 16:37:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:13.045 16:37:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e 00:29:13.045 16:37:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:13.045 16:37:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:13.045 rmmod nvme_tcp 00:29:13.045 rmmod nvme_fabrics 00:29:13.045 16:37:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:13.045 16:37:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e 00:29:13.045 16:37:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@125 -- # return 0 00:29:13.045 16:37:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 3265445 ']' 00:29:13.045 16:37:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@490 -- # killprocess 3265445 00:29:13.045 16:37:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@949 -- # '[' -z 3265445 ']' 00:29:13.045 16:37:39 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@953 -- # kill -0 3265445 00:29:13.045 16:37:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # uname 00:29:13.045 16:37:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:29:13.045 16:37:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3265445 00:29:13.045 16:37:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:29:13.045 16:37:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:29:13.045 16:37:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3265445' 00:29:13.045 killing process with pid 3265445 00:29:13.045 16:37:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@968 -- # kill 3265445 00:29:13.045 16:37:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@973 -- # wait 3265445 00:29:13.306 16:37:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:29:13.306 16:37:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:29:13.306 16:37:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:13.306 16:37:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:13.306 16:37:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:13.306 16:37:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:13.306 16:37:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:13.306 16:37:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:15.216 16:37:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:15.216 16:37:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@25 -- # rm 
/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:29:15.216 16:37:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:29:15.216 16:37:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:29:15.216 16:37:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@689 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:29:15.216 16:37:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@691 -- # echo 0 00:29:15.477 16:37:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@693 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:29:15.477 16:37:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@694 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:29:15.477 16:37:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@695 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:29:15.477 16:37:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@696 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:29:15.477 16:37:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@698 -- # modules=(/sys/module/nvmet/holders/*) 00:29:15.477 16:37:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@700 -- # modprobe -r nvmet_tcp nvmet 00:29:15.477 16:37:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@701 -- # modprobe -r null_blk 00:29:15.477 16:37:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:29:18.779 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:29:18.779 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:29:18.779 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:29:18.779 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:29:18.779 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:29:18.779 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:29:18.779 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:29:18.779 0000:80:01.1 
(8086 0b00): ioatdma -> vfio-pci 00:29:18.779 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:29:18.779 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:29:19.038 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:29:19.038 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:29:19.038 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:29:19.038 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:29:19.038 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:29:19.038 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:29:19.038 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:29:19.297 16:37:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.IHE /tmp/spdk.key-null.bxV /tmp/spdk.key-sha256.6Yf /tmp/spdk.key-sha384.zQI /tmp/spdk.key-sha512.MpB /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:29:19.297 16:37:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:29:22.606 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:29:22.606 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:29:22.606 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:29:22.606 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:29:22.606 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:29:22.606 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:29:22.606 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:29:22.606 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:29:22.606 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:29:22.606 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:29:22.606 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:29:22.606 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:29:22.606 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:29:22.606 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 
00:29:22.606 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:29:22.606 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:29:22.606 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:29:22.867 00:29:22.867 real 0m57.928s 00:29:22.867 user 0m51.198s 00:29:22.867 sys 0m14.829s 00:29:22.867 16:37:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1125 -- # xtrace_disable 00:29:22.868 16:37:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:22.868 ************************************ 00:29:22.868 END TEST nvmf_auth_host 00:29:22.868 ************************************ 00:29:23.129 16:37:49 nvmf_tcp -- nvmf/nvmf.sh@108 -- # [[ tcp == \t\c\p ]] 00:29:23.129 16:37:49 nvmf_tcp -- nvmf/nvmf.sh@109 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:29:23.129 16:37:49 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:29:23.129 16:37:49 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:29:23.129 16:37:49 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:23.129 ************************************ 00:29:23.129 START TEST nvmf_digest 00:29:23.129 ************************************ 00:29:23.129 16:37:49 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:29:23.129 * Looking for test storage... 
00:29:23.129 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:23.129 16:37:49 nvmf_tcp.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:23.129 16:37:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:29:23.129 16:37:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:23.129 16:37:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:23.129 16:37:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:23.129 16:37:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:23.129 16:37:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:23.129 16:37:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:23.129 16:37:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:23.129 16:37:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:23.129 16:37:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:23.129 16:37:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:23.129 16:37:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:23.129 16:37:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:23.129 16:37:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:23.129 16:37:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:23.129 16:37:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:23.129 16:37:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:23.129 16:37:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@45 -- # 
source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:23.129 16:37:49 nvmf_tcp.nvmf_digest -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:23.129 16:37:49 nvmf_tcp.nvmf_digest -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:23.129 16:37:49 nvmf_tcp.nvmf_digest -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:23.129 16:37:49 nvmf_tcp.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:23.129 16:37:49 nvmf_tcp.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:23.129 16:37:49 nvmf_tcp.nvmf_digest -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:23.129 16:37:49 nvmf_tcp.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:29:23.129 16:37:49 nvmf_tcp.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:23.129 16:37:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@47 -- # : 0 00:29:23.129 16:37:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:23.129 16:37:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:23.129 16:37:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:23.129 16:37:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:23.129 16:37:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:23.129 16:37:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:23.129 16:37:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:23.129 16:37:49 nvmf_tcp.nvmf_digest -- 
nvmf/common.sh@51 -- # have_pci_nics=0 00:29:23.129 16:37:49 nvmf_tcp.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:29:23.129 16:37:49 nvmf_tcp.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:29:23.129 16:37:49 nvmf_tcp.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:29:23.129 16:37:49 nvmf_tcp.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:29:23.129 16:37:49 nvmf_tcp.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:29:23.129 16:37:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:23.129 16:37:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:23.129 16:37:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@448 -- # prepare_net_devs 00:29:23.129 16:37:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:23.129 16:37:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:23.129 16:37:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:23.129 16:37:49 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:23.129 16:37:49 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:23.129 16:37:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:29:23.129 16:37:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:29:23.129 16:37:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@285 -- # xtrace_disable 00:29:23.129 16:37:49 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:29:31.312 16:37:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:31.312 16:37:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # pci_devs=() 00:29:31.312 16:37:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:31.312 16:37:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # pci_net_devs=() 
00:29:31.312 16:37:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:31.312 16:37:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:31.312 16:37:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:31.312 16:37:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # net_devs=() 00:29:31.312 16:37:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:31.312 16:37:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # e810=() 00:29:31.312 16:37:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # local -ga e810 00:29:31.312 16:37:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # x722=() 00:29:31.312 16:37:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # local -ga x722 00:29:31.313 16:37:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # mlx=() 00:29:31.313 16:37:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # local -ga mlx 00:29:31.313 16:37:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:31.313 16:37:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:31.313 16:37:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:31.313 16:37:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:31.313 16:37:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:31.313 16:37:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:31.313 16:37:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:31.313 16:37:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:31.313 16:37:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:31.313 16:37:56 nvmf_tcp.nvmf_digest -- 
nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:31.313 16:37:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:31.313 16:37:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:31.313 16:37:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:29:31.313 16:37:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:29:31.313 16:37:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:29:31.313 16:37:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:29:31.313 16:37:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:31.313 16:37:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:31.313 16:37:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:29:31.313 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:29:31.313 16:37:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:31.313 16:37:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:31.313 16:37:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:31.313 16:37:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:31.313 16:37:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:31.313 16:37:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:31.313 16:37:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:29:31.313 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:29:31.313 16:37:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:31.313 16:37:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:31.313 16:37:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 
00:29:31.313 16:37:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:31.313 16:37:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:31.313 16:37:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:31.313 16:37:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:29:31.313 16:37:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:29:31.313 16:37:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:31.313 16:37:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:31.313 16:37:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:31.313 16:37:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:31.313 16:37:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:31.313 16:37:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:31.313 16:37:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:31.313 16:37:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:29:31.313 Found net devices under 0000:4b:00.0: cvl_0_0 00:29:31.313 16:37:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:31.313 16:37:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:31.313 16:37:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:31.313 16:37:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:31.313 16:37:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:31.313 16:37:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:31.313 16:37:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 
00:29:31.313 16:37:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:31.313 16:37:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:29:31.313 Found net devices under 0000:4b:00.1: cvl_0_1 00:29:31.313 16:37:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:31.313 16:37:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:29:31.313 16:37:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # is_hw=yes 00:29:31.313 16:37:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:29:31.313 16:37:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:29:31.313 16:37:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:29:31.313 16:37:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:31.313 16:37:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:31.313 16:37:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:31.313 16:37:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:29:31.313 16:37:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:31.313 16:37:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:31.313 16:37:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:29:31.313 16:37:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:31.313 16:37:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:31.313 16:37:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:29:31.313 16:37:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:29:31.313 16:37:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@248 -- # ip 
netns add cvl_0_0_ns_spdk 00:29:31.313 16:37:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:31.313 16:37:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:31.313 16:37:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:31.313 16:37:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:29:31.313 16:37:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:31.313 16:37:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:31.313 16:37:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:31.313 16:37:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:29:31.313 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:31.313 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.627 ms 00:29:31.313 00:29:31.313 --- 10.0.0.2 ping statistics --- 00:29:31.313 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:31.313 rtt min/avg/max/mdev = 0.627/0.627/0.627/0.000 ms 00:29:31.313 16:37:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:31.314 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:31.314 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.326 ms
00:29:31.314
00:29:31.314 --- 10.0.0.1 ping statistics ---
00:29:31.314 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:29:31.314 rtt min/avg/max/mdev = 0.326/0.326/0.326/0.000 ms
00:29:31.314 16:37:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:29:31.314 16:37:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@422 -- # return 0
00:29:31.314 16:37:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:29:31.314 16:37:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:29:31.314 16:37:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:29:31.314 16:37:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:29:31.314 16:37:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:29:31.314 16:37:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:29:31.314 16:37:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:29:31.314 16:37:57 nvmf_tcp.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT
00:29:31.314 16:37:57 nvmf_tcp.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]]
00:29:31.314 16:37:57 nvmf_tcp.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest
00:29:31.314 16:37:57 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']'
00:29:31.314 16:37:57 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1106 -- # xtrace_disable
00:29:31.314 16:37:57 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x
00:29:31.314 ************************************
00:29:31.314 START TEST nvmf_digest_clean
00:29:31.314 ************************************
00:29:31.314 16:37:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1124 -- # run_digest
00:29:31.314 16:37:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator
00:29:31.314 16:37:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]]
00:29:31.314 16:37:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false
00:29:31.314 16:37:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc")
00:29:31.314 16:37:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc
00:29:31.314 16:37:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:29:31.314 16:37:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@723 -- # xtrace_disable
00:29:31.314 16:37:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x
00:29:31.314 16:37:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@481 -- # nvmfpid=3282065
00:29:31.314 16:37:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@482 -- # waitforlisten 3282065
00:29:31.314 16:37:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc
00:29:31.314 16:37:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@830 -- # '[' -z 3282065 ']'
00:29:31.314 16:37:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock
00:29:31.314 16:37:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local max_retries=100
00:29:31.314 16:37:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:29:31.314 16:37:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # xtrace_disable
00:29:31.314 16:37:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x
00:29:31.314 [2024-06-07 16:37:57.127797] Starting SPDK v24.09-pre git sha1 5a57befde / DPDK 24.03.0 initialization...
00:29:31.314 [2024-06-07 16:37:57.127850] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:29:31.314 EAL: No free 2048 kB hugepages reported on node 1
00:29:31.314 [2024-06-07 16:37:57.196946] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:29:31.314 [2024-06-07 16:37:57.260388] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:29:31.314 [2024-06-07 16:37:57.260430] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:29:31.314 [2024-06-07 16:37:57.260438] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:29:31.314 [2024-06-07 16:37:57.260444] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running.
00:29:31.314 [2024-06-07 16:37:57.260450] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:29:31.314 [2024-06-07 16:37:57.260467] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0
00:29:31.314 16:37:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@859 -- # (( i == 0 ))
00:29:31.314 16:37:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@863 -- # return 0
00:29:31.314 16:37:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:29:31.314 16:37:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@729 -- # xtrace_disable
00:29:31.314 16:37:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x
00:29:31.314 16:37:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:29:31.314 16:37:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]]
00:29:31.314 16:37:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config
00:29:31.314 16:37:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd
00:29:31.314 16:37:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@560 -- # xtrace_disable
00:29:31.314 16:37:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x
00:29:31.314 null0
00:29:31.314 [2024-06-07 16:37:58.039156] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:29:31.314 [2024-06-07 16:37:58.063337] tcp.c: 982:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:29:31.314 16:37:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:29:31.314 16:37:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false
00:29:31.314 16:37:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa
00:29:31.314 16:37:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module
00:29:31.314 16:37:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread
00:29:31.314 16:37:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096
00:29:31.314 16:37:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128
00:29:31.314 16:37:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false
00:29:31.314 16:37:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3282095
00:29:31.314 16:37:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3282095 /var/tmp/bperf.sock
00:29:31.314 16:37:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@830 -- # '[' -z 3282095 ']'
00:29:31.314 16:37:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc
00:29:31.314 16:37:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bperf.sock
00:29:31.314 16:37:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local max_retries=100
00:29:31.314 16:37:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:29:31.314 16:37:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # xtrace_disable
00:29:31.314 16:37:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x
00:29:31.314 [2024-06-07 16:37:58.119494] Starting SPDK v24.09-pre git sha1 5a57befde / DPDK 24.03.0 initialization...
00:29:31.314 [2024-06-07 16:37:58.119542] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3282095 ]
00:29:31.314 EAL: No free 2048 kB hugepages reported on node 1
00:29:31.574 [2024-06-07 16:37:58.196846] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:29:31.574 [2024-06-07 16:37:58.260862] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1
00:29:32.145 16:37:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@859 -- # (( i == 0 ))
00:29:32.145 16:37:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@863 -- # return 0
00:29:32.145 16:37:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false
00:29:32.145 16:37:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init
00:29:32.145 16:37:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
00:29:32.405 16:37:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:29:32.405 16:37:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:29:32.665 nvme0n1
00:29:32.665 16:37:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests
00:29:32.665 16:37:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:29:32.925 Running I/O for 2 seconds...
00:29:34.836
00:29:34.836 Latency(us)
00:29:34.836 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:34.836 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:29:34.836 nvme0n1 : 2.00 20638.79 80.62 0.00 0.00 6194.05 2949.12 15291.73
00:29:34.836 ===================================================================================================================
00:29:34.836 Total : 20638.79 80.62 0.00 0.00 6194.05 2949.12 15291.73
00:29:34.836 0
00:29:34.836 16:38:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed
00:29:34.836 16:38:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats
00:29:34.836 16:38:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats
00:29:34.836 16:38:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[]
00:29:34.836 | select(.opcode=="crc32c")
00:29:34.836 | "\(.module_name) \(.executed)"'
00:29:34.836 16:38:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats
00:29:35.097 16:38:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false
00:29:35.097 16:38:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software
00:29:35.097 16:38:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 ))
00:29:35.097 16:38:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:29:35.097 16:38:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3282095
00:29:35.097 16:38:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@949 -- # '[' -z 3282095 ']'
00:29:35.097 16:38:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # kill -0 3282095
00:29:35.097 16:38:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # uname
00:29:35.097 16:38:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']'
00:29:35.097 16:38:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3282095
00:29:35.097 16:38:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # process_name=reactor_1
00:29:35.097 16:38:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']'
00:29:35.097 16:38:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3282095'
killing process with pid 3282095
16:38:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # kill 3282095
Received shutdown signal, test time was about 2.000000 seconds
00:29:35.097
00:29:35.097 Latency(us)
00:29:35.097 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:35.097 ===================================================================================================================
00:29:35.097 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:29:35.097 16:38:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # wait 3282095
00:29:35.097 16:38:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false
00:29:35.097 16:38:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa
00:29:35.097 16:38:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072
00:29:35.097 16:38:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16
00:29:35.098 16:38:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false
00:29:35.098 16:38:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3282897
00:29:35.098 16:38:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3282897 /var/tmp/bperf.sock
00:29:35.098 16:38:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@830 -- # '[' -z 3282897 ']'
00:29:35.098 16:38:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc
00:29:35.098 16:38:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bperf.sock
00:29:35.098 16:38:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local max_retries=100
00:29:35.098 16:38:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:29:35.098 16:38:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # xtrace_disable
00:29:35.098 16:38:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x
00:29:35.358 [2024-06-07 16:38:01.970055] Starting SPDK v24.09-pre git sha1 5a57befde / DPDK 24.03.0 initialization...
00:29:35.358 [2024-06-07 16:38:01.970121] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3282897 ]
00:29:35.358 I/O size of 131072 is greater than zero copy threshold (65536).
00:29:35.358 Zero copy mechanism will not be used.
00:29:35.358 EAL: No free 2048 kB hugepages reported on node 1
00:29:35.358 [2024-06-07 16:38:02.046723] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:29:35.358 [2024-06-07 16:38:02.110304] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1
00:29:35.929 16:38:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@859 -- # (( i == 0 ))
00:29:35.929 16:38:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@863 -- # return 0
00:29:35.929 16:38:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false
00:29:35.929 16:38:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init
00:29:35.929 16:38:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
00:29:36.188 16:38:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:29:36.188 16:38:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:29:36.447 nvme0n1
00:29:36.707 16:38:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests
00:29:36.707 16:38:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:29:36.707 I/O size of 131072 is greater than zero copy threshold (65536).
00:29:36.707 Zero copy mechanism will not be used.
00:29:36.707 Running I/O for 2 seconds...
00:29:38.619
00:29:38.619 Latency(us)
00:29:38.619 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:38.619 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:29:38.619 nvme0n1 : 2.00 2604.01 325.50 0.00 0.00 6141.63 1631.57 8519.68
00:29:38.619 ===================================================================================================================
00:29:38.619 Total : 2604.01 325.50 0.00 0.00 6141.63 1631.57 8519.68
00:29:38.619 0
00:29:38.619 16:38:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed
00:29:38.619 16:38:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats
00:29:38.619 16:38:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats
00:29:38.619 16:38:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[]
00:29:38.619 | select(.opcode=="crc32c")
00:29:38.619 | "\(.module_name) \(.executed)"'
00:29:38.619 16:38:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats
00:29:38.879 16:38:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false
00:29:38.879 16:38:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software
00:29:38.879 16:38:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 ))
00:29:38.879 16:38:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:29:38.879 16:38:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3282897
00:29:38.879 16:38:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@949 -- # '[' -z 3282897 ']'
00:29:38.879 16:38:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # kill -0 3282897
00:29:38.879 16:38:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # uname
00:29:38.879 16:38:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']'
00:29:38.879 16:38:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3282897
00:29:38.880 16:38:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # process_name=reactor_1
00:29:38.880 16:38:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']'
00:29:38.880 16:38:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3282897'
killing process with pid 3282897
16:38:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # kill 3282897
Received shutdown signal, test time was about 2.000000 seconds
00:29:38.880
00:29:38.880 Latency(us)
00:29:38.880 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:38.880 ===================================================================================================================
00:29:38.880 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:29:38.880 16:38:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # wait 3282897
00:29:38.880 16:38:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false
00:29:38.880 16:38:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa
00:29:38.880 16:38:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module
00:29:38.880 16:38:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite
00:29:38.880 16:38:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096
00:29:38.880 16:38:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128
00:29:38.880 16:38:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false
00:29:38.880 16:38:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3283703
00:29:38.880 16:38:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3283703 /var/tmp/bperf.sock
00:29:38.880 16:38:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@830 -- # '[' -z 3283703 ']'
00:29:38.880 16:38:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc
00:29:38.880 16:38:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bperf.sock
00:29:38.880 16:38:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local max_retries=100
00:29:38.880 16:38:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:29:38.880 16:38:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # xtrace_disable
00:29:38.880 16:38:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x
00:29:39.139 [2024-06-07 16:38:05.769709] Starting SPDK v24.09-pre git sha1 5a57befde / DPDK 24.03.0 initialization...
00:29:39.139 [2024-06-07 16:38:05.769764] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3283703 ]
00:29:39.139 EAL: No free 2048 kB hugepages reported on node 1
00:29:39.139 [2024-06-07 16:38:05.844830] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:29:39.139 [2024-06-07 16:38:05.898084] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1
00:29:39.710 16:38:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@859 -- # (( i == 0 ))
00:29:39.710 16:38:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@863 -- # return 0
00:29:39.710 16:38:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false
00:29:39.710 16:38:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init
00:29:39.710 16:38:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
00:29:39.971 16:38:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:29:39.971 16:38:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:29:40.231 nvme0n1
00:29:40.231 16:38:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests
00:29:40.231 16:38:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:29:40.491 Running I/O for 2 seconds...
00:29:42.400
00:29:42.400 Latency(us)
00:29:42.400 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:42.400 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:29:42.400 nvme0n1 : 2.01 21429.88 83.71 0.00 0.00 5961.55 5051.73 11195.73
00:29:42.400 ===================================================================================================================
00:29:42.400 Total : 21429.88 83.71 0.00 0.00 5961.55 5051.73 11195.73
00:29:42.400 0
00:29:42.400 16:38:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed
00:29:42.400 16:38:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats
00:29:42.400 16:38:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats
00:29:42.400 16:38:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[]
00:29:42.400 | select(.opcode=="crc32c")
00:29:42.400 | "\(.module_name) \(.executed)"'
00:29:42.400 16:38:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats
00:29:42.661 16:38:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false
00:29:42.661 16:38:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software
00:29:42.661 16:38:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 ))
00:29:42.661 16:38:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:29:42.661 16:38:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3283703
00:29:42.662 16:38:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@949 -- # '[' -z 3283703 ']'
00:29:42.662 16:38:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # kill -0 3283703
00:29:42.662 16:38:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # uname
00:29:42.662 16:38:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']'
00:29:42.662 16:38:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3283703
00:29:42.662 16:38:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # process_name=reactor_1
00:29:42.662 16:38:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']'
00:29:42.662 16:38:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3283703'
killing process with pid 3283703
16:38:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # kill 3283703
Received shutdown signal, test time was about 2.000000 seconds
00:29:42.662
00:29:42.662 Latency(us)
00:29:42.662 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:42.662 ===================================================================================================================
00:29:42.662 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:29:42.662 16:38:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # wait 3283703
00:29:42.662 16:38:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false
00:29:42.662 16:38:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa
00:29:42.662 16:38:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module
00:29:42.662 16:38:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite
00:29:42.662 16:38:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072
00:29:42.662 16:38:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16
00:29:42.662 16:38:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false
00:29:42.662 16:38:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3284462
00:29:42.662 16:38:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3284462 /var/tmp/bperf.sock
00:29:42.662 16:38:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@830 -- # '[' -z 3284462 ']'
00:29:42.662 16:38:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc
00:29:42.662 16:38:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bperf.sock
00:29:42.662 16:38:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local max_retries=100
00:29:42.662 16:38:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:29:42.662 16:38:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # xtrace_disable
00:29:42.662 16:38:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x
00:29:42.922 [2024-06-07 16:38:09.530184] Starting SPDK v24.09-pre git sha1 5a57befde / DPDK 24.03.0 initialization...
00:29:42.922 [2024-06-07 16:38:09.530240] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3284462 ] 00:29:42.922 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:42.923 Zero copy mechanism will not be used. 00:29:42.923 EAL: No free 2048 kB hugepages reported on node 1 00:29:42.923 [2024-06-07 16:38:09.604772] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:42.923 [2024-06-07 16:38:09.657502] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:29:43.492 16:38:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:29:43.492 16:38:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@863 -- # return 0 00:29:43.492 16:38:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:29:43.492 16:38:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:29:43.492 16:38:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:29:43.751 16:38:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:43.751 16:38:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:44.011 nvme0n1 00:29:44.011 16:38:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:29:44.011 16:38:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:44.270 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:44.270 Zero copy mechanism will not be used. 00:29:44.270 Running I/O for 2 seconds... 00:29:46.183 00:29:46.183 Latency(us) 00:29:46.183 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:46.183 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:29:46.183 nvme0n1 : 2.00 3210.53 401.32 0.00 0.00 4978.28 2170.88 18568.53 00:29:46.183 =================================================================================================================== 00:29:46.183 Total : 3210.53 401.32 0.00 0.00 4978.28 2170.88 18568.53 00:29:46.183 0 00:29:46.183 16:38:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:29:46.183 16:38:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:29:46.183 16:38:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:29:46.183 16:38:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:29:46.183 | select(.opcode=="crc32c") 00:29:46.183 | "\(.module_name) \(.executed)"' 00:29:46.183 16:38:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:29:46.442 16:38:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:29:46.442 16:38:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:29:46.442 16:38:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:29:46.442 16:38:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:29:46.442 16:38:13 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3284462 00:29:46.442 16:38:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@949 -- # '[' -z 3284462 ']' 00:29:46.442 16:38:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # kill -0 3284462 00:29:46.442 16:38:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # uname 00:29:46.442 16:38:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:29:46.442 16:38:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3284462 00:29:46.442 16:38:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:29:46.442 16:38:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:29:46.442 16:38:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3284462' 00:29:46.442 killing process with pid 3284462 00:29:46.442 16:38:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # kill 3284462 00:29:46.442 Received shutdown signal, test time was about 2.000000 seconds 00:29:46.442 00:29:46.442 Latency(us) 00:29:46.442 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:46.442 =================================================================================================================== 00:29:46.442 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:46.442 16:38:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # wait 3284462 00:29:46.442 16:38:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 3282065 00:29:46.442 16:38:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@949 -- # '[' -z 3282065 ']' 00:29:46.442 16:38:13 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # kill -0 3282065 00:29:46.442 16:38:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # uname 00:29:46.702 16:38:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:29:46.702 16:38:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3282065 00:29:46.702 16:38:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:29:46.702 16:38:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:29:46.702 16:38:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3282065' 00:29:46.702 killing process with pid 3282065 00:29:46.702 16:38:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # kill 3282065 00:29:46.702 16:38:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # wait 3282065 00:29:46.702 00:29:46.702 real 0m16.414s 00:29:46.702 user 0m32.255s 00:29:46.702 sys 0m3.212s 00:29:46.702 16:38:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1125 -- # xtrace_disable 00:29:46.702 16:38:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:46.702 ************************************ 00:29:46.702 END TEST nvmf_digest_clean 00:29:46.702 ************************************ 00:29:46.702 16:38:13 nvmf_tcp.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:29:46.702 16:38:13 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:29:46.702 16:38:13 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1106 -- # xtrace_disable 00:29:46.702 16:38:13 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:29:46.962 
************************************ 00:29:46.962 START TEST nvmf_digest_error 00:29:46.962 ************************************ 00:29:46.962 16:38:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1124 -- # run_digest_error 00:29:46.962 16:38:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:29:46.962 16:38:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:29:46.962 16:38:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@723 -- # xtrace_disable 00:29:46.962 16:38:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:46.962 16:38:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@481 -- # nvmfpid=3285175 00:29:46.962 16:38:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@482 -- # waitforlisten 3285175 00:29:46.962 16:38:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:29:46.962 16:38:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@830 -- # '[' -z 3285175 ']' 00:29:46.962 16:38:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:46.962 16:38:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local max_retries=100 00:29:46.962 16:38:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:46.962 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:29:46.962 16:38:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # xtrace_disable 00:29:46.962 16:38:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:46.962 [2024-06-07 16:38:13.622543] Starting SPDK v24.09-pre git sha1 5a57befde / DPDK 24.03.0 initialization... 00:29:46.962 [2024-06-07 16:38:13.622594] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:46.962 EAL: No free 2048 kB hugepages reported on node 1 00:29:46.962 [2024-06-07 16:38:13.688889] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:46.962 [2024-06-07 16:38:13.757367] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:46.962 [2024-06-07 16:38:13.757409] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:46.962 [2024-06-07 16:38:13.757417] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:46.962 [2024-06-07 16:38:13.757423] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:46.962 [2024-06-07 16:38:13.757429] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:29:46.962 [2024-06-07 16:38:13.757447] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:29:47.532 16:38:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:29:47.532 16:38:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@863 -- # return 0 00:29:47.532 16:38:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:47.532 16:38:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@729 -- # xtrace_disable 00:29:47.532 16:38:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:47.793 16:38:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:47.793 16:38:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:29:47.793 16:38:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:47.793 16:38:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:47.793 [2024-06-07 16:38:14.427382] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:29:47.793 16:38:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:47.793 16:38:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:29:47.793 16:38:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:29:47.793 16:38:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:47.793 16:38:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:47.793 null0 00:29:47.793 [2024-06-07 16:38:14.508106] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:47.793 
[2024-06-07 16:38:14.532290] tcp.c: 982:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:47.793 16:38:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:47.793 16:38:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:29:47.793 16:38:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:29:47.793 16:38:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:29:47.793 16:38:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:29:47.793 16:38:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:29:47.793 16:38:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3285427 00:29:47.793 16:38:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3285427 /var/tmp/bperf.sock 00:29:47.793 16:38:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@830 -- # '[' -z 3285427 ']' 00:29:47.793 16:38:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:29:47.793 16:38:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:47.793 16:38:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local max_retries=100 00:29:47.793 16:38:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:47.793 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:29:47.793 16:38:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # xtrace_disable 00:29:47.793 16:38:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:47.793 [2024-06-07 16:38:14.586433] Starting SPDK v24.09-pre git sha1 5a57befde / DPDK 24.03.0 initialization... 00:29:47.793 [2024-06-07 16:38:14.586481] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3285427 ] 00:29:47.793 EAL: No free 2048 kB hugepages reported on node 1 00:29:48.054 [2024-06-07 16:38:14.660436] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:48.054 [2024-06-07 16:38:14.714425] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:29:48.624 16:38:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:29:48.624 16:38:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@863 -- # return 0 00:29:48.624 16:38:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:29:48.624 16:38:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:29:48.884 16:38:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:29:48.884 16:38:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:48.884 16:38:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:48.884 16:38:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:48.884 16:38:15 
nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:48.884 16:38:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:49.147 nvme0n1 00:29:49.147 16:38:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:29:49.147 16:38:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:49.147 16:38:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:49.147 16:38:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:49.147 16:38:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:29:49.147 16:38:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:49.147 Running I/O for 2 seconds... 
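Once a run like this completes, the suite confirms which accel module actually executed the crc32c digests by filtering `accel_get_stats` RPC output (the jq filter appears verbatim earlier in this log). A standalone demo of that filter against a made-up stats payload (the module_name and executed values are illustrative, not from this run):

```shell
#!/usr/bin/env bash
# Same jq filter the digest tests use on accel_get_stats output; the JSON
# payload below is illustrative, not captured from this run.
stats='{"operations":[
  {"opcode":"crc32c","module_name":"software","executed":6421},
  {"opcode":"copy","module_name":"software","executed":12}]}'
echo "$stats" | jq -rc '.operations[]
  | select(.opcode=="crc32c")
  | "\(.module_name) \(.executed)"'
# -> software 6421
```

The test then compares the reported module against the expected one (`software` here, since `scan_dsa=false`) and asserts `executed > 0`, which is the `[[ software == software ]]` / `(( acc_executed > 0 ))` pair visible in the trace.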
00:29:49.147 [2024-06-07 16:38:15.888811] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c504a0) 00:29:49.147 [2024-06-07 16:38:15.888846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4491 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.147 [2024-06-07 16:38:15.888855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:49.147 [2024-06-07 16:38:15.902147] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c504a0) 00:29:49.147 [2024-06-07 16:38:15.902168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:562 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.147 [2024-06-07 16:38:15.902175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:49.147 [2024-06-07 16:38:15.914837] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c504a0) 00:29:49.147 [2024-06-07 16:38:15.914856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:6999 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.147 [2024-06-07 16:38:15.914863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:49.147 [2024-06-07 16:38:15.926944] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c504a0) 00:29:49.147 [2024-06-07 16:38:15.926963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:7017 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.147 [2024-06-07 16:38:15.926970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:49.147 [2024-06-07 16:38:15.940366] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c504a0) 00:29:49.147 [2024-06-07 16:38:15.940385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:18792 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.147 [2024-06-07 16:38:15.940392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:49.147 [2024-06-07 16:38:15.952164] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c504a0) 00:29:49.147 [2024-06-07 16:38:15.952182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:11039 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.147 [2024-06-07 16:38:15.952189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:49.147 [2024-06-07 16:38:15.965191] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c504a0) 00:29:49.147 [2024-06-07 16:38:15.965209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:17465 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.147 [2024-06-07 16:38:15.965216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:49.147 [2024-06-07 16:38:15.976714] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c504a0) 00:29:49.147 [2024-06-07 16:38:15.976733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:2225 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.147 [2024-06-07 16:38:15.976740] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:49.147 [2024-06-07 16:38:15.988655] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c504a0) 00:29:49.147 [2024-06-07 16:38:15.988673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:9185 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.147 [2024-06-07 16:38:15.988679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:49.434 [2024-06-07 16:38:16.001822] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c504a0) 00:29:49.434 [2024-06-07 16:38:16.001840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:22701 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.434 [2024-06-07 16:38:16.001847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:49.434 [2024-06-07 16:38:16.013163] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c504a0) 00:29:49.434 [2024-06-07 16:38:16.013181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:12803 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.434 [2024-06-07 16:38:16.013187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:49.434 [2024-06-07 16:38:16.025869] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c504a0) 00:29:49.434 [2024-06-07 16:38:16.025886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:11845 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:29:49.434 [2024-06-07 16:38:16.025893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:49.434 [2024-06-07 16:38:16.038125] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c504a0) 00:29:49.434 [2024-06-07 16:38:16.038143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:9917 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.434 [2024-06-07 16:38:16.038149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:49.434 [2024-06-07 16:38:16.049755] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c504a0) 00:29:49.434 [2024-06-07 16:38:16.049772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:25534 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.434 [2024-06-07 16:38:16.049779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:49.434 [2024-06-07 16:38:16.062276] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c504a0) 00:29:49.434 [2024-06-07 16:38:16.062293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:5271 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.434 [2024-06-07 16:38:16.062300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:49.434 [2024-06-07 16:38:16.074781] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c504a0) 00:29:49.434 [2024-06-07 16:38:16.074798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:88 nsid:1 lba:23008 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.434 [2024-06-07 16:38:16.074805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:49.434 [2024-06-07 16:38:16.085571] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c504a0) 00:29:49.434 [2024-06-07 16:38:16.085588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:1907 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.434 [2024-06-07 16:38:16.085595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:49.434 [2024-06-07 16:38:16.098361] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c504a0) 00:29:49.434 [2024-06-07 16:38:16.098379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:615 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.434 [2024-06-07 16:38:16.098390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:49.434 [2024-06-07 16:38:16.110528] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c504a0) 00:29:49.434 [2024-06-07 16:38:16.110545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:19074 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.434 [2024-06-07 16:38:16.110552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:49.434 [2024-06-07 16:38:16.123507] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c504a0) 00:29:49.434 [2024-06-07 16:38:16.123525] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:24944 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.434 [2024-06-07 16:38:16.123531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:49.434 [2024-06-07 16:38:16.135782] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c504a0) 00:29:49.434 [2024-06-07 16:38:16.135799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:24875 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.434 [2024-06-07 16:38:16.135806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:49.434 [2024-06-07 16:38:16.147805] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c504a0) 00:29:49.434 [2024-06-07 16:38:16.147822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:4588 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.434 [2024-06-07 16:38:16.147829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:49.434 [2024-06-07 16:38:16.161272] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c504a0) 00:29:49.434 [2024-06-07 16:38:16.161289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:18181 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.434 [2024-06-07 16:38:16.161296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:49.434 [2024-06-07 16:38:16.172205] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error 
on tqpair=(0x1c504a0) 00:29:49.434 [2024-06-07 16:38:16.172222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:11535 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.434 [2024-06-07 16:38:16.172229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:49.434 [2024-06-07 16:38:16.185287] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c504a0) 00:29:49.434 [2024-06-07 16:38:16.185305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:16061 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.434 [2024-06-07 16:38:16.185312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:49.434 [2024-06-07 16:38:16.196671] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c504a0) 00:29:49.435 [2024-06-07 16:38:16.196689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:22719 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.435 [2024-06-07 16:38:16.196696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:49.435 [2024-06-07 16:38:16.208650] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c504a0) 00:29:49.435 [2024-06-07 16:38:16.208672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20495 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.435 [2024-06-07 16:38:16.208679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:49.435 [2024-06-07 16:38:16.220872] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c504a0) 00:29:49.435 [2024-06-07 16:38:16.220891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:264 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.435 [2024-06-07 16:38:16.220897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:49.435 [2024-06-07 16:38:16.233875] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c504a0) 00:29:49.435 [2024-06-07 16:38:16.233894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:10415 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.435 [2024-06-07 16:38:16.233900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:49.435 [2024-06-07 16:38:16.245442] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c504a0) 00:29:49.435 [2024-06-07 16:38:16.245460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24709 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.435 [2024-06-07 16:38:16.245467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:49.435 [2024-06-07 16:38:16.258368] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c504a0) 00:29:49.435 [2024-06-07 16:38:16.258387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:2111 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.435 [2024-06-07 16:38:16.258393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:29:49.435 [2024-06-07 16:38:16.270685] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c504a0) 00:29:49.435 [2024-06-07 16:38:16.270704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:4054 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.435 [2024-06-07 16:38:16.270710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:49.706 [2024-06-07 16:38:16.283081] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c504a0) 00:29:49.706 [2024-06-07 16:38:16.283099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:21306 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.706 [2024-06-07 16:38:16.283106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:49.706 [2024-06-07 16:38:16.294140] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c504a0) 00:29:49.706 [2024-06-07 16:38:16.294157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:12303 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.706 [2024-06-07 16:38:16.294164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:49.706 [2024-06-07 16:38:16.306910] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c504a0) 00:29:49.706 [2024-06-07 16:38:16.306927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:13542 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.706 [2024-06-07 16:38:16.306934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:49.706 [2024-06-07 16:38:16.319854] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c504a0) 00:29:49.706 [2024-06-07 16:38:16.319872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:7206 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.706 [2024-06-07 16:38:16.319878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:49.706 [2024-06-07 16:38:16.330616] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c504a0) 00:29:49.706 [2024-06-07 16:38:16.330634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:16452 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.706 [2024-06-07 16:38:16.330640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:49.706 [2024-06-07 16:38:16.343562] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c504a0) 00:29:49.706 [2024-06-07 16:38:16.343580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:16740 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.706 [2024-06-07 16:38:16.343586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:49.706 [2024-06-07 16:38:16.356255] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c504a0) 00:29:49.706 [2024-06-07 16:38:16.356272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:2050 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.706 [2024-06-07 
16:38:16.356279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:49.706 [2024-06-07 16:38:16.368378] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c504a0) 00:29:49.706 [2024-06-07 16:38:16.368395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:10964 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.706 [2024-06-07 16:38:16.368406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:49.706 [2024-06-07 16:38:16.379802] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c504a0) 00:29:49.706 [2024-06-07 16:38:16.379820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:11621 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.706 [2024-06-07 16:38:16.379826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:49.706 [2024-06-07 16:38:16.392950] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c504a0) 00:29:49.706 [2024-06-07 16:38:16.392968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:9141 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.706 [2024-06-07 16:38:16.392974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:49.706 [2024-06-07 16:38:16.404100] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c504a0) 00:29:49.706 [2024-06-07 16:38:16.404117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:9795 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.706 [2024-06-07 16:38:16.404124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:49.706 [2024-06-07 16:38:16.416640] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c504a0) 00:29:49.706 [2024-06-07 16:38:16.416660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20714 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.706 [2024-06-07 16:38:16.416667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:49.706 [2024-06-07 16:38:16.429362] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c504a0) 00:29:49.706 [2024-06-07 16:38:16.429379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:7654 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.706 [2024-06-07 16:38:16.429386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:49.706 [2024-06-07 16:38:16.441493] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c504a0) 00:29:49.706 [2024-06-07 16:38:16.441511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:14694 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.706 [2024-06-07 16:38:16.441517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:49.706 [2024-06-07 16:38:16.453837] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c504a0) 00:29:49.706 [2024-06-07 16:38:16.453854] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:11528 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.706 [2024-06-07 16:38:16.453861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:49.706 [2024-06-07 16:38:16.465016] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c504a0) 00:29:49.706 [2024-06-07 16:38:16.465034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:22947 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.706 [2024-06-07 16:38:16.465041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:49.706 [2024-06-07 16:38:16.477485] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c504a0) 00:29:49.706 [2024-06-07 16:38:16.477502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:13501 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.706 [2024-06-07 16:38:16.477509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:49.706 [2024-06-07 16:38:16.490326] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c504a0) 00:29:49.706 [2024-06-07 16:38:16.490344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:12811 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.706 [2024-06-07 16:38:16.490350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:49.706 [2024-06-07 16:38:16.502962] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1c504a0) 00:29:49.706 [2024-06-07 16:38:16.502980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:24674 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.706 [2024-06-07 16:38:16.502987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:49.706 [2024-06-07 16:38:16.515806] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c504a0) 00:29:49.706 [2024-06-07 16:38:16.515823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:17516 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.706 [2024-06-07 16:38:16.515830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:49.706 [2024-06-07 16:38:16.527345] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c504a0) 00:29:49.707 [2024-06-07 16:38:16.527363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:2705 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.707 [2024-06-07 16:38:16.527369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:49.707 [2024-06-07 16:38:16.539820] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c504a0) 00:29:49.707 [2024-06-07 16:38:16.539837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:2151 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.707 [2024-06-07 16:38:16.539844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:49.707 [2024-06-07 16:38:16.551414] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c504a0) 00:29:49.707 [2024-06-07 16:38:16.551431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:23823 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.707 [2024-06-07 16:38:16.551438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:49.967 [2024-06-07 16:38:16.563752] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c504a0) 00:29:49.967 [2024-06-07 16:38:16.563770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6880 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.967 [2024-06-07 16:38:16.563777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:49.967 [2024-06-07 16:38:16.575719] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c504a0) 00:29:49.967 [2024-06-07 16:38:16.575736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:6074 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.967 [2024-06-07 16:38:16.575743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:49.967 [2024-06-07 16:38:16.587372] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c504a0) 00:29:49.967 [2024-06-07 16:38:16.587389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:11373 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.967 [2024-06-07 16:38:16.587396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:29:49.967 [2024-06-07 16:38:16.601585] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c504a0) 00:29:49.967 [2024-06-07 16:38:16.601602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:639 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.967 [2024-06-07 16:38:16.601609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:49.967 [2024-06-07 16:38:16.614206] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c504a0) 00:29:49.967 [2024-06-07 16:38:16.614222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:13002 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.967 [2024-06-07 16:38:16.614229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:49.967 [2024-06-07 16:38:16.625936] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c504a0) 00:29:49.968 [2024-06-07 16:38:16.625953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:9793 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.968 [2024-06-07 16:38:16.625963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:49.968 [2024-06-07 16:38:16.638677] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c504a0) 00:29:49.968 [2024-06-07 16:38:16.638694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:3395 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.968 [2024-06-07 16:38:16.638700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:49.968 [2024-06-07 16:38:16.649821] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c504a0) 00:29:49.968 [2024-06-07 16:38:16.649838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:18641 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.968 [2024-06-07 16:38:16.649845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:49.968 [2024-06-07 16:38:16.662396] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c504a0) 00:29:49.968 [2024-06-07 16:38:16.662415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:15398 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.968 [2024-06-07 16:38:16.662422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:49.968 [2024-06-07 16:38:16.674052] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c504a0) 00:29:49.968 [2024-06-07 16:38:16.674069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:3036 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.968 [2024-06-07 16:38:16.674076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:49.968 [2024-06-07 16:38:16.686699] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c504a0) 00:29:49.968 [2024-06-07 16:38:16.686716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:7992 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.968 [2024-06-07 
16:38:16.686723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:49.968 [2024-06-07 16:38:16.699435] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c504a0) 00:29:49.968 [2024-06-07 16:38:16.699452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:5846 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.968 [2024-06-07 16:38:16.699459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:49.968 [2024-06-07 16:38:16.710877] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c504a0) 00:29:49.968 [2024-06-07 16:38:16.710893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:15135 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.968 [2024-06-07 16:38:16.710900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:49.968 [2024-06-07 16:38:16.723342] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c504a0) 00:29:49.968 [2024-06-07 16:38:16.723359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:7155 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.968 [2024-06-07 16:38:16.723366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:49.968 [2024-06-07 16:38:16.735048] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c504a0) 00:29:49.968 [2024-06-07 16:38:16.735068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:12756 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.968 [2024-06-07 16:38:16.735075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:49.968 [2024-06-07 16:38:16.747898] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c504a0) 00:29:49.968 [2024-06-07 16:38:16.747915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:17420 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.968 [2024-06-07 16:38:16.747922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:49.968 [2024-06-07 16:38:16.760526] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c504a0) 00:29:49.968 [2024-06-07 16:38:16.760543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:18962 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.968 [2024-06-07 16:38:16.760550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:49.968 [2024-06-07 16:38:16.773278] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c504a0) 00:29:49.968 [2024-06-07 16:38:16.773296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:8605 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.968 [2024-06-07 16:38:16.773303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:49.968 [2024-06-07 16:38:16.784963] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c504a0) 00:29:49.968 [2024-06-07 16:38:16.784982] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:10697 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.968 [2024-06-07 16:38:16.784988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:49.968 [2024-06-07 16:38:16.797186] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c504a0) 00:29:49.968 [2024-06-07 16:38:16.797203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:8869 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.968 [2024-06-07 16:38:16.797209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:49.968 [2024-06-07 16:38:16.809407] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c504a0) 00:29:49.968 [2024-06-07 16:38:16.809424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:17289 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.968 [2024-06-07 16:38:16.809430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:50.229 [2024-06-07 16:38:16.820941] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c504a0) 00:29:50.229 [2024-06-07 16:38:16.820959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:20897 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.229 [2024-06-07 16:38:16.820965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:50.229 [2024-06-07 16:38:16.833880] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1c504a0) 00:29:50.229 [2024-06-07 16:38:16.833897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:4857 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.229 [2024-06-07 16:38:16.833903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:50.229 [2024-06-07 16:38:16.845891] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c504a0) 00:29:50.229 [2024-06-07 16:38:16.845908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:24956 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.229 [2024-06-07 16:38:16.845915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:50.229 [2024-06-07 16:38:16.857373] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c504a0) 00:29:50.229 [2024-06-07 16:38:16.857389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:24640 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.229 [2024-06-07 16:38:16.857396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:50.229 [2024-06-07 16:38:16.870420] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c504a0) 00:29:50.229 [2024-06-07 16:38:16.870437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:21624 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.229 [2024-06-07 16:38:16.870443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:50.229 [2024-06-07 16:38:16.883440] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c504a0) 00:29:50.229 [2024-06-07 16:38:16.883457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:14969 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.229 [2024-06-07 16:38:16.883464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:50.229 [2024-06-07 16:38:16.895227] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c504a0) 00:29:50.229 [2024-06-07 16:38:16.895244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:25124 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.229 [2024-06-07 16:38:16.895250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:50.229 [2024-06-07 16:38:16.907677] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c504a0) 00:29:50.229 [2024-06-07 16:38:16.907695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:10530 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.229 [2024-06-07 16:38:16.907701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:50.229 [2024-06-07 16:38:16.920234] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c504a0) 00:29:50.229 [2024-06-07 16:38:16.920251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:14409 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.229 [2024-06-07 16:38:16.920258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0
00:29:50.229 [2024-06-07 16:38:16.931566] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c504a0)
00:29:50.229 [2024-06-07 16:38:16.931583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:16524 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:50.229 [2024-06-07 16:38:16.931590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
[... similar injected data digest error entries omitted: each is a READ on qid:1 completed with COMMAND TRANSIENT TRANSPORT ERROR (00/22), timestamps 16:38:16.943982 through 16:38:17.815046 ...]
00:29:51.017 [2024-06-07 16:38:17.824958] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c504a0)
00:29:51.017 [2024-06-07 16:38:17.824975] nvme_qpair.c:
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:923 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.017 [2024-06-07 16:38:17.824982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:51.017 [2024-06-07 16:38:17.836726] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c504a0) 00:29:51.017 [2024-06-07 16:38:17.836744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:5943 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.017 [2024-06-07 16:38:17.836750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:51.017 [2024-06-07 16:38:17.849214] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c504a0) 00:29:51.017 [2024-06-07 16:38:17.849231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23925 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.017 [2024-06-07 16:38:17.849238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:51.017 [2024-06-07 16:38:17.862883] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c504a0) 00:29:51.017 [2024-06-07 16:38:17.862901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:14965 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.017 [2024-06-07 16:38:17.862908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:51.277 00:29:51.277 Latency(us) 00:29:51.277 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:51.277 Job: nvme0n1 
(Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:29:51.277 nvme0n1 : 2.00 20737.78 81.01 0.00 0.00 6165.77 3181.23 19988.48 00:29:51.277 =================================================================================================================== 00:29:51.277 Total : 20737.78 81.01 0.00 0.00 6165.77 3181.23 19988.48 00:29:51.277 0 00:29:51.277 16:38:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:29:51.277 16:38:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:29:51.278 16:38:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:29:51.278 16:38:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:29:51.278 | .driver_specific 00:29:51.278 | .nvme_error 00:29:51.278 | .status_code 00:29:51.278 | .command_transient_transport_error' 00:29:51.278 16:38:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 162 > 0 )) 00:29:51.278 16:38:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3285427 00:29:51.278 16:38:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@949 -- # '[' -z 3285427 ']' 00:29:51.278 16:38:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # kill -0 3285427 00:29:51.278 16:38:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # uname 00:29:51.278 16:38:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:29:51.278 16:38:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3285427 00:29:51.278 16:38:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:29:51.278 16:38:18 
nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:29:51.278 16:38:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3285427' 00:29:51.278 killing process with pid 3285427 00:29:51.278 16:38:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # kill 3285427 00:29:51.278 Received shutdown signal, test time was about 2.000000 seconds 00:29:51.278 00:29:51.278 Latency(us) 00:29:51.278 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:51.278 =================================================================================================================== 00:29:51.278 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:51.278 16:38:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # wait 3285427 00:29:51.538 16:38:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:29:51.538 16:38:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:29:51.538 16:38:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:29:51.538 16:38:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:29:51.538 16:38:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:29:51.538 16:38:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3286178 00:29:51.538 16:38:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3286178 /var/tmp/bperf.sock 00:29:51.538 16:38:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@830 -- # '[' -z 3286178 ']' 00:29:51.538 16:38:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:29:51.538 
16:38:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:51.538 16:38:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local max_retries=100 00:29:51.538 16:38:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:51.538 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:51.538 16:38:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # xtrace_disable 00:29:51.538 16:38:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:51.538 [2024-06-07 16:38:18.271633] Starting SPDK v24.09-pre git sha1 5a57befde / DPDK 24.03.0 initialization... 00:29:51.538 [2024-06-07 16:38:18.271689] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3286178 ] 00:29:51.538 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:51.538 Zero copy mechanism will not be used. 
00:29:51.538 EAL: No free 2048 kB hugepages reported on node 1 00:29:51.538 [2024-06-07 16:38:18.346605] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:51.798 [2024-06-07 16:38:18.400304] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:29:52.367 16:38:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:29:52.367 16:38:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@863 -- # return 0 00:29:52.367 16:38:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:29:52.367 16:38:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:29:52.367 16:38:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:29:52.367 16:38:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:52.367 16:38:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:52.367 16:38:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:52.367 16:38:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:52.367 16:38:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:52.939 nvme0n1 00:29:52.939 16:38:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o 
crc32c -t corrupt -i 32 00:29:52.939 16:38:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:52.939 16:38:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:52.939 16:38:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:52.939 16:38:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:29:52.939 16:38:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:52.939 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:52.939 Zero copy mechanism will not be used. 00:29:52.939 Running I/O for 2 seconds... 00:29:52.939 [2024-06-07 16:38:19.636494] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1722400) 00:29:52.939 [2024-06-07 16:38:19.636526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.939 [2024-06-07 16:38:19.636534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:52.939 [2024-06-07 16:38:19.649031] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1722400) 00:29:52.939 [2024-06-07 16:38:19.649052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.939 [2024-06-07 16:38:19.649059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:52.939 [2024-06-07 16:38:19.661128] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x1722400) 00:29:52.939 [2024-06-07 16:38:19.661148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.939 [2024-06-07 16:38:19.661160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:52.939 [2024-06-07 16:38:19.674250] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1722400) 00:29:52.939 [2024-06-07 16:38:19.674270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.939 [2024-06-07 16:38:19.674277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:52.939 [2024-06-07 16:38:19.684287] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1722400) 00:29:52.939 [2024-06-07 16:38:19.684306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.939 [2024-06-07 16:38:19.684312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:52.939 [2024-06-07 16:38:19.696610] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1722400) 00:29:52.939 [2024-06-07 16:38:19.696628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.939 [2024-06-07 16:38:19.696635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:52.939 [2024-06-07 16:38:19.706926] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1722400) 00:29:52.939 [2024-06-07 16:38:19.706945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.939 [2024-06-07 16:38:19.706952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:52.939 [2024-06-07 16:38:19.718148] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1722400) 00:29:52.939 [2024-06-07 16:38:19.718167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.939 [2024-06-07 16:38:19.718174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:52.939 [2024-06-07 16:38:19.729886] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1722400) 00:29:52.939 [2024-06-07 16:38:19.729905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.939 [2024-06-07 16:38:19.729911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:52.939 [2024-06-07 16:38:19.743132] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1722400) 00:29:52.939 [2024-06-07 16:38:19.743150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.940 [2024-06-07 16:38:19.743156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 
sqhd:0041 p:0 m:0 dnr:0 00:29:52.940 [2024-06-07 16:38:19.757721] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1722400) 00:29:52.940 [2024-06-07 16:38:19.757739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.940 [2024-06-07 16:38:19.757746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:52.940 [2024-06-07 16:38:19.771841] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1722400) 00:29:52.940 [2024-06-07 16:38:19.771864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.940 [2024-06-07 16:38:19.771870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:52.940 [2024-06-07 16:38:19.784411] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1722400) 00:29:52.940 [2024-06-07 16:38:19.784429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.940 [2024-06-07 16:38:19.784436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:53.201 [2024-06-07 16:38:19.798647] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1722400) 00:29:53.201 [2024-06-07 16:38:19.798666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.201 [2024-06-07 16:38:19.798672] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:53.201 [2024-06-07 16:38:19.811394] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1722400) 00:29:53.201 [2024-06-07 16:38:19.811418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.201 [2024-06-07 16:38:19.811425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:53.201 [2024-06-07 16:38:19.823270] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1722400) 00:29:53.201 [2024-06-07 16:38:19.823289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.201 [2024-06-07 16:38:19.823295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:53.201 [2024-06-07 16:38:19.837326] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1722400) 00:29:53.201 [2024-06-07 16:38:19.837345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.201 [2024-06-07 16:38:19.837351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:53.201 [2024-06-07 16:38:19.849982] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1722400) 00:29:53.201 [2024-06-07 16:38:19.850001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.201 [2024-06-07 
16:38:19.850007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:53.201 [2024-06-07 16:38:19.860410] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1722400) 00:29:53.201 [2024-06-07 16:38:19.860429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.201 [2024-06-07 16:38:19.860435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:53.201 [2024-06-07 16:38:19.872862] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1722400) 00:29:53.201 [2024-06-07 16:38:19.872881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.202 [2024-06-07 16:38:19.872887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:53.202 [2024-06-07 16:38:19.885941] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1722400) 00:29:53.202 [2024-06-07 16:38:19.885959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.202 [2024-06-07 16:38:19.885966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:53.202 [2024-06-07 16:38:19.899981] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1722400) 00:29:53.202 [2024-06-07 16:38:19.900000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17536 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.202 [2024-06-07 16:38:19.900006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:53.202 [2024-06-07 16:38:19.913291] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1722400) 00:29:53.202 [2024-06-07 16:38:19.913309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.202 [2024-06-07 16:38:19.913315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:53.202 [2024-06-07 16:38:19.925591] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1722400) 00:29:53.202 [2024-06-07 16:38:19.925608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.202 [2024-06-07 16:38:19.925615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:53.202 [2024-06-07 16:38:19.937545] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1722400) 00:29:53.202 [2024-06-07 16:38:19.937564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.202 [2024-06-07 16:38:19.937570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:53.202 [2024-06-07 16:38:19.948782] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1722400) 00:29:53.202 [2024-06-07 16:38:19.948801] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.202 [2024-06-07 16:38:19.948807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:53.202 [2024-06-07 16:38:19.960956] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1722400) 00:29:53.202 [2024-06-07 16:38:19.960974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.202 [2024-06-07 16:38:19.960980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:53.202 [2024-06-07 16:38:19.972841] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1722400) 00:29:53.202 [2024-06-07 16:38:19.972859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.202 [2024-06-07 16:38:19.972866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:53.202 [2024-06-07 16:38:19.985908] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1722400) 00:29:53.202 [2024-06-07 16:38:19.985927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.202 [2024-06-07 16:38:19.985937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:53.202 [2024-06-07 16:38:19.998969] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1722400) 00:29:53.202 [2024-06-07 16:38:19.998988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.202 [2024-06-07 16:38:19.998994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:53.202 [2024-06-07 16:38:20.012217] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1722400) 00:29:53.202 [2024-06-07 16:38:20.012237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.202 [2024-06-07 16:38:20.012244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:53.202 [2024-06-07 16:38:20.025103] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1722400) 00:29:53.202 [2024-06-07 16:38:20.025123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.202 [2024-06-07 16:38:20.025130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:53.202 [2024-06-07 16:38:20.040399] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1722400) 00:29:53.202 [2024-06-07 16:38:20.040423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.202 [2024-06-07 16:38:20.040430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:53.202 [2024-06-07 16:38:20.053490] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1722400) 00:29:53.202 [2024-06-07 16:38:20.053508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.202 [2024-06-07 16:38:20.053514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:53.464 [2024-06-07 16:38:20.064846] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1722400) 00:29:53.464 [2024-06-07 16:38:20.064864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.464 [2024-06-07 16:38:20.064871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:53.464 [2024-06-07 16:38:20.074914] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1722400) 00:29:53.464 [2024-06-07 16:38:20.074937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.464 [2024-06-07 16:38:20.074947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:53.464 [2024-06-07 16:38:20.082624] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1722400) 00:29:53.464 [2024-06-07 16:38:20.082643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.464 [2024-06-07 16:38:20.082651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 
sqhd:0021 p:0 m:0 dnr:0 00:29:53.464 [2024-06-07 16:38:20.090271] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1722400) 00:29:53.464 [2024-06-07 16:38:20.090294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.464 [2024-06-07 16:38:20.090300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:53.464 [2024-06-07 16:38:20.099030] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1722400) 00:29:53.464 [2024-06-07 16:38:20.099047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.464 [2024-06-07 16:38:20.099053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:53.464 [2024-06-07 16:38:20.109756] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1722400) 00:29:53.464 [2024-06-07 16:38:20.109774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.464 [2024-06-07 16:38:20.109780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:53.464 [2024-06-07 16:38:20.119457] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1722400) 00:29:53.464 [2024-06-07 16:38:20.119474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.464 [2024-06-07 16:38:20.119480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:53.464 [2024-06-07 16:38:20.129000] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1722400) 00:29:53.464 [2024-06-07 16:38:20.129018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.464 [2024-06-07 16:38:20.129024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:53.464 [2024-06-07 16:38:20.138637] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1722400) 00:29:53.464 [2024-06-07 16:38:20.138655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.464 [2024-06-07 16:38:20.138661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:53.464 [2024-06-07 16:38:20.150126] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1722400) 00:29:53.464 [2024-06-07 16:38:20.150144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.464 [2024-06-07 16:38:20.150150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:53.464 [2024-06-07 16:38:20.161472] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1722400) 00:29:53.465 [2024-06-07 16:38:20.161490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.465 [2024-06-07 
16:38:20.161497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:53.465 [2024-06-07 16:38:20.171081] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1722400) 00:29:53.465 [2024-06-07 16:38:20.171099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.465 [2024-06-07 16:38:20.171109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:53.465 [2024-06-07 16:38:20.184390] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1722400) 00:29:53.465 [2024-06-07 16:38:20.184412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.465 [2024-06-07 16:38:20.184419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:53.465 [2024-06-07 16:38:20.197381] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1722400) 00:29:53.465 [2024-06-07 16:38:20.197399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.465 [2024-06-07 16:38:20.197410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:53.465 [2024-06-07 16:38:20.210729] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1722400) 00:29:53.465 [2024-06-07 16:38:20.210746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1344 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.465 [2024-06-07 16:38:20.210752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:53.465 [2024-06-07 16:38:20.222675] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1722400) 00:29:53.465 [2024-06-07 16:38:20.222692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.465 [2024-06-07 16:38:20.222698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:53.465 [2024-06-07 16:38:20.234817] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1722400) 00:29:53.465 [2024-06-07 16:38:20.234835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.465 [2024-06-07 16:38:20.234841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:53.465 [2024-06-07 16:38:20.247340] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1722400) 00:29:53.465 [2024-06-07 16:38:20.247358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.465 [2024-06-07 16:38:20.247365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:53.465 [2024-06-07 16:38:20.260559] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1722400) 00:29:53.465 [2024-06-07 16:38:20.260576] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.465 [2024-06-07 16:38:20.260583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:53.465 [2024-06-07 16:38:20.271413] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1722400) 00:29:53.465 [2024-06-07 16:38:20.271431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.465 [2024-06-07 16:38:20.271437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:53.465 [2024-06-07 16:38:20.285774] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1722400) 00:29:53.465 [2024-06-07 16:38:20.285795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.465 [2024-06-07 16:38:20.285802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:53.465 [2024-06-07 16:38:20.301592] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1722400) 00:29:53.465 [2024-06-07 16:38:20.301609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.465 [2024-06-07 16:38:20.301616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:53.726 [2024-06-07 16:38:20.319500] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1722400) 00:29:53.726 [2024-06-07 16:38:20.319519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.726 [2024-06-07 16:38:20.319525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:53.726 [2024-06-07 16:38:20.327922] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1722400) 00:29:53.726 [2024-06-07 16:38:20.327940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.726 [2024-06-07 16:38:20.327947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:53.726 [2024-06-07 16:38:20.341724] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1722400) 00:29:53.726 [2024-06-07 16:38:20.341743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.726 [2024-06-07 16:38:20.341749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:53.726 [2024-06-07 16:38:20.356725] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1722400) 00:29:53.726 [2024-06-07 16:38:20.356743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.726 [2024-06-07 16:38:20.356750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:53.726 [2024-06-07 16:38:20.371008] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1722400) 00:29:53.726 [2024-06-07 16:38:20.371026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.726 [2024-06-07 16:38:20.371032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:53.726 [2024-06-07 16:38:20.386178] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1722400) 00:29:53.726 [2024-06-07 16:38:20.386196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.726 [2024-06-07 16:38:20.386203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:53.726 [2024-06-07 16:38:20.399538] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1722400) 00:29:53.726 [2024-06-07 16:38:20.399557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.726 [2024-06-07 16:38:20.399563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:53.726 [2024-06-07 16:38:20.412669] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1722400) 00:29:53.726 [2024-06-07 16:38:20.412688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.726 [2024-06-07 16:38:20.412694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:29:53.726 [2024-06-07 16:38:20.424392] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1722400) 00:29:53.726 [2024-06-07 16:38:20.424414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.726 [2024-06-07 16:38:20.424421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:53.726 [2024-06-07 16:38:20.435779] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1722400) 00:29:53.726 [2024-06-07 16:38:20.435799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.726 [2024-06-07 16:38:20.435807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:53.726 [2024-06-07 16:38:20.445969] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1722400) 00:29:53.727 [2024-06-07 16:38:20.445987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.727 [2024-06-07 16:38:20.445993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:53.727 [2024-06-07 16:38:20.454836] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1722400) 00:29:53.727 [2024-06-07 16:38:20.454855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.727 [2024-06-07 16:38:20.454861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:53.727 [2024-06-07 16:38:20.465471] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1722400) 00:29:53.727 [2024-06-07 16:38:20.465488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.727 [2024-06-07 16:38:20.465495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:53.727 [2024-06-07 16:38:20.474615] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1722400) 00:29:53.727 [2024-06-07 16:38:20.474633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.727 [2024-06-07 16:38:20.474640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:53.727 [2024-06-07 16:38:20.485216] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1722400) 00:29:53.727 [2024-06-07 16:38:20.485235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.727 [2024-06-07 16:38:20.485241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:53.727 [2024-06-07 16:38:20.497260] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1722400) 00:29:53.727 [2024-06-07 16:38:20.497278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.727 [2024-06-07 16:38:20.497288] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:53.727 [2024-06-07 16:38:20.510464] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1722400) 00:29:53.727 [2024-06-07 16:38:20.510482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.727 [2024-06-07 16:38:20.510489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:53.727 [2024-06-07 16:38:20.521301] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1722400) 00:29:53.727 [2024-06-07 16:38:20.521320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.727 [2024-06-07 16:38:20.521326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:53.727 [2024-06-07 16:38:20.534310] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1722400) 00:29:53.727 [2024-06-07 16:38:20.534328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.727 [2024-06-07 16:38:20.534334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:53.727 [2024-06-07 16:38:20.544648] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1722400) 00:29:53.727 [2024-06-07 16:38:20.544667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:29:53.727 [2024-06-07 16:38:20.544674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:53.727 [2024-06-07 16:38:20.556442] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1722400) 00:29:53.727 [2024-06-07 16:38:20.556460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.727 [2024-06-07 16:38:20.556466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:53.727 [2024-06-07 16:38:20.567693] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1722400) 00:29:53.727 [2024-06-07 16:38:20.567712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.727 [2024-06-07 16:38:20.567718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:53.987 [2024-06-07 16:38:20.580572] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1722400) 00:29:53.987 [2024-06-07 16:38:20.580591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.987 [2024-06-07 16:38:20.580598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:53.987 [2024-06-07 16:38:20.592853] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1722400) 00:29:53.988 [2024-06-07 16:38:20.592871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:15 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.988 [2024-06-07 16:38:20.592877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:53.988 [2024-06-07 16:38:20.604425] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1722400) 00:29:53.988 [2024-06-07 16:38:20.604447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.988 [2024-06-07 16:38:20.604453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:53.988 [2024-06-07 16:38:20.615123] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1722400) 00:29:53.988 [2024-06-07 16:38:20.615141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.988 [2024-06-07 16:38:20.615147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:53.988 [2024-06-07 16:38:20.626471] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1722400) 00:29:53.988 [2024-06-07 16:38:20.626490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.988 [2024-06-07 16:38:20.626496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:53.988 [2024-06-07 16:38:20.637675] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1722400) 00:29:53.988 [2024-06-07 16:38:20.637694] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.988 [2024-06-07 16:38:20.637700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:53.988 [2024-06-07 16:38:20.648712] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1722400) 00:29:53.988 [2024-06-07 16:38:20.648731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.988 [2024-06-07 16:38:20.648738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:53.988 [2024-06-07 16:38:20.660186] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1722400) 00:29:53.988 [2024-06-07 16:38:20.660205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.988 [2024-06-07 16:38:20.660211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:53.988 [2024-06-07 16:38:20.672623] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1722400) 00:29:53.988 [2024-06-07 16:38:20.672642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.988 [2024-06-07 16:38:20.672648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:53.988 [2024-06-07 16:38:20.686379] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1722400) 
00:29:53.988 [2024-06-07 16:38:20.686398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.988 [2024-06-07 16:38:20.686407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:53.988 [2024-06-07 16:38:20.701282] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1722400) 00:29:53.988 [2024-06-07 16:38:20.701300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.988 [2024-06-07 16:38:20.701307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:53.988 [2024-06-07 16:38:20.715538] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1722400) 00:29:53.988 [2024-06-07 16:38:20.715556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.988 [2024-06-07 16:38:20.715562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:53.988 [2024-06-07 16:38:20.729527] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1722400) 00:29:53.988 [2024-06-07 16:38:20.729546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.988 [2024-06-07 16:38:20.729552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:53.988 [2024-06-07 16:38:20.742733] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1722400) 00:29:53.988 [2024-06-07 16:38:20.742751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.988 [2024-06-07 16:38:20.742757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:53.988 [2024-06-07 16:38:20.755038] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1722400) 00:29:53.988 [2024-06-07 16:38:20.755056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.988 [2024-06-07 16:38:20.755062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:53.988 [2024-06-07 16:38:20.767765] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1722400) 00:29:53.988 [2024-06-07 16:38:20.767782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.988 [2024-06-07 16:38:20.767788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:53.988 [2024-06-07 16:38:20.781293] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1722400) 00:29:53.988 [2024-06-07 16:38:20.781312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.988 [2024-06-07 16:38:20.781318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 
sqhd:0061 p:0 m:0 dnr:0 00:29:53.988 [2024-06-07 16:38:20.795312] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1722400) 00:29:53.988 [2024-06-07 16:38:20.795330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.988 [2024-06-07 16:38:20.795337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:53.988 [2024-06-07 16:38:20.809870] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1722400) 00:29:53.988 [2024-06-07 16:38:20.809888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.988 [2024-06-07 16:38:20.809895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:53.988 [2024-06-07 16:38:20.824092] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1722400) 00:29:53.988 [2024-06-07 16:38:20.824110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.988 [2024-06-07 16:38:20.824120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:53.988 [2024-06-07 16:38:20.834816] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1722400) 00:29:53.988 [2024-06-07 16:38:20.834835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.988 [2024-06-07 16:38:20.834841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:54.248 [2024-06-07 16:38:20.845587] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1722400) 00:29:54.248 [2024-06-07 16:38:20.845606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.248 [2024-06-07 16:38:20.845612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:54.248 [2024-06-07 16:38:20.857686] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1722400) 00:29:54.248 [2024-06-07 16:38:20.857705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.248 [2024-06-07 16:38:20.857712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:54.248 [2024-06-07 16:38:20.869343] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1722400) 00:29:54.248 [2024-06-07 16:38:20.869362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.248 [2024-06-07 16:38:20.869369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:54.248 [2024-06-07 16:38:20.880945] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1722400) 00:29:54.249 [2024-06-07 16:38:20.880964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.249 [2024-06-07 16:38:20.880971] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:54.249 [2024-06-07 16:38:20.892213] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1722400) 00:29:54.249 [2024-06-07 16:38:20.892231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.249 [2024-06-07 16:38:20.892238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:54.249 [2024-06-07 16:38:20.903574] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1722400) 00:29:54.249 [2024-06-07 16:38:20.903593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.249 [2024-06-07 16:38:20.903599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:54.249 [2024-06-07 16:38:20.915658] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1722400) 00:29:54.249 [2024-06-07 16:38:20.915677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.249 [2024-06-07 16:38:20.915683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:54.249 [2024-06-07 16:38:20.926590] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1722400) 00:29:54.249 [2024-06-07 16:38:20.926608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:29:54.249 [2024-06-07 16:38:20.926615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:54.249 [2024-06-07 16:38:20.936295] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1722400) 00:29:54.249 [2024-06-07 16:38:20.936313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.249 [2024-06-07 16:38:20.936319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:54.249 [2024-06-07 16:38:20.948089] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1722400) 00:29:54.249 [2024-06-07 16:38:20.948107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.249 [2024-06-07 16:38:20.948113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:54.249 [2024-06-07 16:38:20.960831] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1722400) 00:29:54.249 [2024-06-07 16:38:20.960849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.249 [2024-06-07 16:38:20.960856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:54.249 [2024-06-07 16:38:20.973658] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1722400) 00:29:54.249 [2024-06-07 16:38:20.973677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:3 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.249 [2024-06-07 16:38:20.973683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:54.249 [2024-06-07 16:38:20.987694] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1722400) 00:29:54.249 [2024-06-07 16:38:20.987713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.249 [2024-06-07 16:38:20.987719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:54.249 [2024-06-07 16:38:20.999914] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1722400) 00:29:54.249 [2024-06-07 16:38:20.999931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.249 [2024-06-07 16:38:20.999938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:54.249 [2024-06-07 16:38:21.013218] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1722400) 00:29:54.249 [2024-06-07 16:38:21.013236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.249 [2024-06-07 16:38:21.013243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:54.249 [2024-06-07 16:38:21.027199] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1722400) 00:29:54.249 [2024-06-07 16:38:21.027218] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.249 [2024-06-07 16:38:21.027227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:54.249 [2024-06-07 16:38:21.040936] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1722400) 00:29:54.249 [2024-06-07 16:38:21.040955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.249 [2024-06-07 16:38:21.040961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:54.249 [2024-06-07 16:38:21.054251] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1722400) 00:29:54.249 [2024-06-07 16:38:21.054269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.249 [2024-06-07 16:38:21.054276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:54.249 [2024-06-07 16:38:21.068564] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1722400) 00:29:54.249 [2024-06-07 16:38:21.068583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.249 [2024-06-07 16:38:21.068589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:54.249 [2024-06-07 16:38:21.080617] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1722400) 
00:29:54.249 [2024-06-07 16:38:21.080635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.249 [2024-06-07 16:38:21.080642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:54.249 [2024-06-07 16:38:21.092637] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1722400) 00:29:54.249 [2024-06-07 16:38:21.092656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.249 [2024-06-07 16:38:21.092662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:54.509 [2024-06-07 16:38:21.102931] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1722400) 00:29:54.509 [2024-06-07 16:38:21.102949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.509 [2024-06-07 16:38:21.102956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:54.509 [2024-06-07 16:38:21.113250] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1722400) 00:29:54.509 [2024-06-07 16:38:21.113269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.509 [2024-06-07 16:38:21.113277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:54.509 [2024-06-07 16:38:21.123741] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1722400) 00:29:54.509 [2024-06-07 16:38:21.123760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.509 [2024-06-07 16:38:21.123769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:54.509 [2024-06-07 16:38:21.136155] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1722400) 00:29:54.509 [2024-06-07 16:38:21.136177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.509 [2024-06-07 16:38:21.136184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:54.509 [2024-06-07 16:38:21.147408] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1722400) 00:29:54.509 [2024-06-07 16:38:21.147427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.509 [2024-06-07 16:38:21.147434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:54.509 [2024-06-07 16:38:21.159909] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1722400) 00:29:54.509 [2024-06-07 16:38:21.159928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.509 [2024-06-07 16:38:21.159934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 
sqhd:0041 p:0 m:0 dnr:0 00:29:54.509 [2024-06-07 16:38:21.170867] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1722400) 00:29:54.509 [2024-06-07 16:38:21.170885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.509 [2024-06-07 16:38:21.170892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:54.509 [2024-06-07 16:38:21.183045] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1722400) 00:29:54.509 [2024-06-07 16:38:21.183064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.509 [2024-06-07 16:38:21.183070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:54.509 [2024-06-07 16:38:21.195117] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1722400) 00:29:54.509 [2024-06-07 16:38:21.195136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.509 [2024-06-07 16:38:21.195142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:54.509 [2024-06-07 16:38:21.206973] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1722400) 00:29:54.509 [2024-06-07 16:38:21.206991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.509 [2024-06-07 16:38:21.206998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:54.509 [2024-06-07 16:38:21.221246] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1722400) 00:29:54.509 [2024-06-07 16:38:21.221265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.509 [2024-06-07 16:38:21.221272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:54.509 [2024-06-07 16:38:21.234890] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1722400) 00:29:54.509 [2024-06-07 16:38:21.234909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.509 [2024-06-07 16:38:21.234915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:54.509 [2024-06-07 16:38:21.247410] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1722400) 00:29:54.510 [2024-06-07 16:38:21.247429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.510 [2024-06-07 16:38:21.247435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:54.510 [2024-06-07 16:38:21.259354] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1722400) 00:29:54.510 [2024-06-07 16:38:21.259373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.510 [2024-06-07 16:38:21.259380] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:54.510 [2024-06-07 16:38:21.269597] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1722400) 00:29:54.510 [2024-06-07 16:38:21.269616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.510 [2024-06-07 16:38:21.269622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:54.510 [2024-06-07 16:38:21.281287] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1722400) 00:29:54.510 [2024-06-07 16:38:21.281305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.510 [2024-06-07 16:38:21.281312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:54.510 [2024-06-07 16:38:21.292576] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1722400) 00:29:54.510 [2024-06-07 16:38:21.292595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.510 [2024-06-07 16:38:21.292602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:54.510 [2024-06-07 16:38:21.303841] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1722400) 00:29:54.510 [2024-06-07 16:38:21.303860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:29:54.510 [2024-06-07 16:38:21.303867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:54.510 [2024-06-07 16:38:21.315991] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1722400) 00:29:54.510 [2024-06-07 16:38:21.316010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.510 [2024-06-07 16:38:21.316016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:54.510 [2024-06-07 16:38:21.327645] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1722400) 00:29:54.510 [2024-06-07 16:38:21.327664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.510 [2024-06-07 16:38:21.327670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:54.510 [2024-06-07 16:38:21.340413] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1722400) 00:29:54.510 [2024-06-07 16:38:21.340431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.510 [2024-06-07 16:38:21.340441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:54.510 [2024-06-07 16:38:21.351581] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1722400) 00:29:54.510 [2024-06-07 16:38:21.351599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:15 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.510 [2024-06-07 16:38:21.351605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:54.770 [2024-06-07 16:38:21.364001] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1722400) 00:29:54.770 [2024-06-07 16:38:21.364022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.770 [2024-06-07 16:38:21.364028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:54.770 [2024-06-07 16:38:21.374981] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1722400) 00:29:54.770 [2024-06-07 16:38:21.375000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.770 [2024-06-07 16:38:21.375006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:54.770 [2024-06-07 16:38:21.387816] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1722400) 00:29:54.770 [2024-06-07 16:38:21.387834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.770 [2024-06-07 16:38:21.387841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:54.770 [2024-06-07 16:38:21.397974] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1722400) 00:29:54.770 [2024-06-07 
16:38:21.397993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.770 [2024-06-07 16:38:21.397999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:54.770 [2024-06-07 16:38:21.408869] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1722400) 00:29:54.770 [2024-06-07 16:38:21.408888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.770 [2024-06-07 16:38:21.408894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:54.770 [2024-06-07 16:38:21.420172] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1722400) 00:29:54.770 [2024-06-07 16:38:21.420190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.770 [2024-06-07 16:38:21.420197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:54.770 [2024-06-07 16:38:21.434001] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1722400) 00:29:54.770 [2024-06-07 16:38:21.434020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.770 [2024-06-07 16:38:21.434026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:54.770 [2024-06-07 16:38:21.445243] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x1722400) 00:29:54.770 [2024-06-07 16:38:21.445264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.770 [2024-06-07 16:38:21.445271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:54.770 [2024-06-07 16:38:21.457987] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1722400) 00:29:54.770 [2024-06-07 16:38:21.458005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.770 [2024-06-07 16:38:21.458012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:54.770 [2024-06-07 16:38:21.470094] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1722400) 00:29:54.770 [2024-06-07 16:38:21.470113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.770 [2024-06-07 16:38:21.470119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:54.770 [2024-06-07 16:38:21.481789] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1722400) 00:29:54.770 [2024-06-07 16:38:21.481807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.770 [2024-06-07 16:38:21.481815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:54.770 [2024-06-07 16:38:21.493992] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1722400) 00:29:54.770 [2024-06-07 16:38:21.494010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.770 [2024-06-07 16:38:21.494016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:54.770 [2024-06-07 16:38:21.505452] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1722400) 00:29:54.770 [2024-06-07 16:38:21.505471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.770 [2024-06-07 16:38:21.505477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:54.770 [2024-06-07 16:38:21.515983] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1722400) 00:29:54.770 [2024-06-07 16:38:21.516002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.770 [2024-06-07 16:38:21.516008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:54.771 [2024-06-07 16:38:21.526845] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1722400) 00:29:54.771 [2024-06-07 16:38:21.526864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.771 [2024-06-07 16:38:21.526871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 
sqhd:0021 p:0 m:0 dnr:0 00:29:54.771 [2024-06-07 16:38:21.538468] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1722400) 00:29:54.771 [2024-06-07 16:38:21.538486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.771 [2024-06-07 16:38:21.538492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:54.771 [2024-06-07 16:38:21.549116] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1722400) 00:29:54.771 [2024-06-07 16:38:21.549135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.771 [2024-06-07 16:38:21.549141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:54.771 [2024-06-07 16:38:21.561018] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1722400) 00:29:54.771 [2024-06-07 16:38:21.561037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.771 [2024-06-07 16:38:21.561043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:54.771 [2024-06-07 16:38:21.571914] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1722400) 00:29:54.771 [2024-06-07 16:38:21.571933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.771 [2024-06-07 16:38:21.571940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:54.771 [2024-06-07 16:38:21.584157] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1722400) 00:29:54.771 [2024-06-07 16:38:21.584176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.771 [2024-06-07 16:38:21.584183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:54.771 [2024-06-07 16:38:21.597625] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1722400) 00:29:54.771 [2024-06-07 16:38:21.597643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.771 [2024-06-07 16:38:21.597650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:54.771 [2024-06-07 16:38:21.609886] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1722400) 00:29:54.771 [2024-06-07 16:38:21.609905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.771 [2024-06-07 16:38:21.609911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:54.771 [2024-06-07 16:38:21.620456] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1722400) 00:29:54.771 [2024-06-07 16:38:21.620475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.771 [2024-06-07 16:38:21.620482] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:55.031 00:29:55.031 Latency(us) 00:29:55.031 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:55.031 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:29:55.031 nvme0n1 : 2.00 2571.51 321.44 0.00 0.00 6219.65 1303.89 18022.40 00:29:55.031 =================================================================================================================== 00:29:55.031 Total : 2571.51 321.44 0.00 0.00 6219.65 1303.89 18022.40 00:29:55.031 0 00:29:55.031 16:38:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:29:55.031 16:38:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:29:55.031 16:38:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:29:55.031 | .driver_specific 00:29:55.031 | .nvme_error 00:29:55.031 | .status_code 00:29:55.031 | .command_transient_transport_error' 00:29:55.031 16:38:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:29:55.031 16:38:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 165 > 0 )) 00:29:55.031 16:38:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3286178 00:29:55.031 16:38:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@949 -- # '[' -z 3286178 ']' 00:29:55.031 16:38:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # kill -0 3286178 00:29:55.031 16:38:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # uname 00:29:55.031 16:38:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 
00:29:55.031 16:38:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3286178
00:29:55.031 16:38:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # process_name=reactor_1
00:29:55.031 16:38:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']'
00:29:55.031 16:38:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3286178'
00:29:55.031 killing process with pid 3286178
00:29:55.031 16:38:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # kill 3286178
00:29:55.031 Received shutdown signal, test time was about 2.000000 seconds
00:29:55.031
00:29:55.031 Latency(us)
00:29:55.031 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:55.031 ===================================================================================================================
00:29:55.031 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:29:55.031 16:38:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # wait 3286178
00:29:55.292 16:38:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128
00:29:55.292 16:38:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:29:55.292 16:38:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:29:55.292 16:38:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
00:29:55.292 16:38:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
00:29:55.292 16:38:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3286894
00:29:55.292 16:38:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3286894 /var/tmp/bperf.sock
00:29:55.292 16:38:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@830 -- # '[' -z 3286894 ']'
00:29:55.292 16:38:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
00:29:55.292 16:38:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bperf.sock
00:29:55.292 16:38:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local max_retries=100
00:29:55.292 16:38:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:29:55.292 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:29:55.292 16:38:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # xtrace_disable
00:29:55.292 16:38:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:29:55.292 [2024-06-07 16:38:22.022994] Starting SPDK v24.09-pre git sha1 5a57befde / DPDK 24.03.0 initialization...
00:29:55.292 [2024-06-07 16:38:22.023049] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3286894 ]
00:29:55.292 EAL: No free 2048 kB hugepages reported on node 1
00:29:55.292 [2024-06-07 16:38:22.098518] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:29:55.552 [2024-06-07 16:38:22.151310] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1
00:29:56.122 16:38:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@859 -- # (( i == 0 ))
00:29:56.122 16:38:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@863 -- # return 0
00:29:56.122 16:38:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:29:56.122 16:38:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:29:56.122 16:38:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:29:56.122 16:38:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@560 -- # xtrace_disable
00:29:56.122 16:38:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:29:56.382 16:38:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:29:56.382 16:38:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:29:56.382 16:38:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:29:56.382 nvme0n1
00:29:56.642 16:38:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:29:56.642 16:38:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@560 -- # xtrace_disable
00:29:56.642 16:38:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:29:56.642 16:38:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:29:56.642 16:38:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:29:56.642 16:38:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:29:56.642 Running I/O for 2 seconds...
00:29:56.642 [2024-06-07 16:38:23.357991] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672c10) with pdu=0x2000190de038
00:29:56.642 [2024-06-07 16:38:23.358756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:20055 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:56.642 [2024-06-07 16:38:23.358781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0003 p:0 m:0 dnr:0
00:29:56.642 [2024-06-07 16:38:23.374006] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672c10) with pdu=0x2000190e6300
00:29:56.642 [2024-06-07 16:38:23.376005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:21754 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:56.642 [2024-06-07 16:38:23.376025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0075 p:0 m:0 dnr:0
00:29:56.642 [2024-06-07 16:38:23.385652] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672c10) with pdu=0x2000190fa7d8
00:29:56.642 [2024-06-07 16:38:23.387475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:15193 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:56.642 [2024-06-07 16:38:23.387497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0074 p:0 m:0 dnr:0
00:29:56.642 [2024-06-07 16:38:23.396975] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672c10) with pdu=0x2000190f2948
00:29:56.642 [2024-06-07 16:38:23.398606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:63 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:56.642 [2024-06-07 16:38:23.398625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0067 p:0 m:0 dnr:0
00:29:56.642 [2024-06-07 16:38:23.408378] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672c10) with pdu=0x2000190ff3c8
00:29:56.642 [2024-06-07 16:38:23.409864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:9160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:56.642 [2024-06-07 16:38:23.409881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0055 p:0 m:0 dnr:0
00:29:56.642 [2024-06-07 16:38:23.419697] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672c10) with pdu=0x2000190eff18
00:29:56.642 [2024-06-07 16:38:23.420983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:15103 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:56.642 [2024-06-07 16:38:23.420998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0048 p:0 m:0 dnr:0
00:29:56.642 [2024-06-07 16:38:23.431075] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672c10) with pdu=0x2000190f0ff8
00:29:56.642 [2024-06-07 16:38:23.432228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:2331 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:56.642 [2024-06-07 16:38:23.432244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0036 p:0 m:0 dnr:0
00:29:56.642 [2024-06-07 16:38:23.442395] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672c10) with pdu=0x2000190e6fa8
00:29:56.642 [2024-06-07 16:38:23.443352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:21909 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:56.642 [2024-06-07 16:38:23.443368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0029 p:0 m:0 dnr:0
00:29:56.642 [2024-06-07 16:38:23.453787] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672c10) with pdu=0x2000190e99d8
00:29:56.642 [2024-06-07 16:38:23.454600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:3473 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:56.642 [2024-06-07 16:38:23.454616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0017 p:0 m:0 dnr:0
00:29:56.642 [2024-06-07 16:38:23.467816] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672c10) with pdu=0x2000190fbcf0
00:29:56.642 [2024-06-07 16:38:23.468873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:9720 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:56.642 [2024-06-07 16:38:23.468890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0078 p:0 m:0 dnr:0
00:29:56.642 [2024-06-07 16:38:23.479140] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672c10) with pdu=0x2000190e7818
00:29:56.642 [2024-06-07 16:38:23.479936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:12951 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:56.642 [2024-06-07 16:38:23.479953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:006b p:0 m:0 dnr:0
00:29:56.642 [2024-06-07 16:38:23.490498] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672c10) with pdu=0x2000190f0788
00:29:56.642 [2024-06-07 16:38:23.491213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:11691 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:56.642 [2024-06-07 16:38:23.491232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0059 p:0 m:0 dnr:0
00:29:56.902 [2024-06-07 16:38:23.504090] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672c10) with pdu=0x2000190e8088
00:29:56.902 [2024-06-07 16:38:23.505617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:575 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:56.902 [2024-06-07 16:38:23.505633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:29:56.902 [2024-06-07 16:38:23.515610] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672c10) with pdu=0x2000190f20d8
00:29:56.902 [2024-06-07 16:38:23.516955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:22188 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:56.902 [2024-06-07 16:38:23.516971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:006d p:0 m:0 dnr:0
00:29:56.902 [2024-06-07 16:38:23.526977] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672c10) with pdu=0x2000190ea680
00:29:56.903 [2024-06-07 16:38:23.528164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:8110 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:56.903 [2024-06-07 16:38:23.528179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:005b p:0 m:0 dnr:0
00:29:56.903 [2024-06-07 16:38:23.538284] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672c10) with pdu=0x2000190fda78
00:29:56.903 [2024-06-07 16:38:23.539295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:8035 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:56.903 [2024-06-07 16:38:23.539311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:004e p:0 m:0 dnr:0
00:29:56.903 [2024-06-07 16:38:23.549659] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672c10) with pdu=0x2000190e1b48
00:29:56.903 [2024-06-07 16:38:23.550514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:1677 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:56.903 [2024-06-07 16:38:23.550529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:003c p:0 m:0 dnr:0
00:29:56.903 [2024-06-07 16:38:23.564625] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672c10) with pdu=0x2000190f0ff8
00:29:56.903 [2024-06-07 16:38:23.566547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:20881 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:56.903 [2024-06-07 16:38:23.566563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:007c p:0 m:0 dnr:0
00:29:56.903 [2024-06-07 16:38:23.575938] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672c10) with pdu=0x2000190fb8b8
00:29:56.903 [2024-06-07 16:38:23.577653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:8565 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:56.903 [2024-06-07 16:38:23.577669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:006f p:0 m:0 dnr:0
00:29:56.903 [2024-06-07 16:38:23.587301] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672c10) with pdu=0x2000190efae0
00:29:56.903 [2024-06-07 16:38:23.588877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:14410 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:56.903 [2024-06-07 16:38:23.588893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:005d p:0 m:0 dnr:0
00:29:56.903 [2024-06-07 16:38:23.598600] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672c10) with pdu=0x2000190f8e88
00:29:56.903 [2024-06-07 16:38:23.599973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:16231 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:56.903 [2024-06-07 16:38:23.599989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0050 p:0 m:0 dnr:0
00:29:56.903 [2024-06-07 16:38:23.609959] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672c10) with pdu=0x2000190fac10
00:29:56.903 [2024-06-07 16:38:23.611192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:1234 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:56.903 [2024-06-07 16:38:23.611208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:003e p:0 m:0 dnr:0
00:29:56.903 [2024-06-07 16:38:23.621255] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672c10) with pdu=0x2000190f46d0
00:29:56.903 [2024-06-07 16:38:23.622295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:2096 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:56.903 [2024-06-07 16:38:23.622311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0031 p:0 m:0 dnr:0
00:29:56.903 [2024-06-07 16:38:23.632761] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672c10) with pdu=0x2000190de8a8
00:29:56.903 [2024-06-07 16:38:23.633659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:23200 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:56.903 [2024-06-07 16:38:23.633674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:001f p:0 m:0 dnr:0
00:29:56.903 [2024-06-07 16:38:23.646763] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672c10) with pdu=0x2000190e0ea0
00:29:56.903 [2024-06-07 16:38:23.647893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:12263 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:56.903 [2024-06-07 16:38:23.647909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:56.903 [2024-06-07 16:38:23.658582] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672c10) with pdu=0x2000190fa3a0
00:29:56.903 [2024-06-07 16:38:23.659976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:19011 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:56.903 [2024-06-07 16:38:23.659992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0073 p:0 m:0 dnr:0
00:29:56.903 [2024-06-07 16:38:23.669930] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672c10) with pdu=0x2000190f7970
00:29:56.903 [2024-06-07 16:38:23.671180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:9553 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:56.903 [2024-06-07 16:38:23.671195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:29:56.903 [2024-06-07 16:38:23.681223] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672c10) with pdu=0x2000190f0bc0
00:29:56.903 [2024-06-07 16:38:23.682281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:1877 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:56.903 [2024-06-07 16:38:23.682296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0054 p:0 m:0 dnr:0
00:29:56.903 [2024-06-07 16:38:23.692005] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672c10) with pdu=0x2000190e1f80
00:29:56.903 [2024-06-07 16:38:23.692928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:7001 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:56.903 [2024-06-07 16:38:23.692943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0014 p:0 m:0 dnr:0
00:29:56.903 [2024-06-07 16:38:23.703695] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672c10) with pdu=0x2000190fc128
00:29:56.903 [2024-06-07 16:38:23.704464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:12653 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:56.903 [2024-06-07 16:38:23.704479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0013 p:0 m:0 dnr:0
00:29:56.903 [2024-06-07 16:38:23.717720] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672c10) with pdu=0x2000190ddc00
00:29:56.903 [2024-06-07 16:38:23.718712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:17788 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:56.903 [2024-06-07 16:38:23.718728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0074 p:0 m:0 dnr:0
00:29:56.903 [2024-06-07 16:38:23.729029] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672c10) with pdu=0x2000190e0a68
00:29:56.903 [2024-06-07 16:38:23.729778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:19466 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:56.903 [2024-06-07 16:38:23.729794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0067 p:0 m:0 dnr:0
00:29:56.903 [2024-06-07 16:38:23.740366] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672c10) with pdu=0x2000190e1b48
00:29:56.903 [2024-06-07 16:38:23.741025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:12090 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:56.903 [2024-06-07 16:38:23.741040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0055 p:0 m:0 dnr:0
00:29:56.903 [2024-06-07 16:38:23.753939] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672c10) with pdu=0x2000190f0350
00:29:56.903 [2024-06-07 16:38:23.755419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:8583 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:56.903 [2024-06-07 16:38:23.755434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:29:57.163 [2024-06-07 16:38:23.765258] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672c10) with pdu=0x2000190f4b08
00:29:57.163 [2024-06-07 16:38:23.766560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:23173 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:57.163 [2024-06-07 16:38:23.766576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0069 p:0 m:0 dnr:0
00:29:57.163 [2024-06-07 16:38:23.776630] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672c10) with pdu=0x2000190e6fa8
00:29:57.163 [2024-06-07 16:38:23.777771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:13580 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:57.163 [2024-06-07 16:38:23.777787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0057 p:0 m:0 dnr:0
00:29:57.163 [2024-06-07 16:38:23.787914] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672c10) with pdu=0x2000190fb048
00:29:57.163 [2024-06-07 16:38:23.788880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:11572 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:57.163 [2024-06-07 16:38:23.788895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:004a p:0 m:0 dnr:0
00:29:57.163 [2024-06-07 16:38:23.798693] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672c10) with pdu=0x2000190fd640
00:29:57.163 [2024-06-07 16:38:23.799503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:18296 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:57.163 [2024-06-07 16:38:23.799521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:000a p:0 m:0 dnr:0
00:29:57.163 [2024-06-07 16:38:23.811275] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672c10) with pdu=0x2000190f0bc0
00:29:57.163 [2024-06-07 16:38:23.812069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15727 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:57.163 [2024-06-07 16:38:23.812084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0048 p:0 m:0 dnr:0
00:29:57.163 [2024-06-07 16:38:23.824631] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672c10) with pdu=0x2000190e3d08
00:29:57.163 [2024-06-07 16:38:23.826087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:15196 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:57.163 [2024-06-07 16:38:23.826102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0047 p:0 m:0 dnr:0
00:29:57.163 [2024-06-07 16:38:23.834178] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672c10) with pdu=0x2000190e88f8
00:29:57.163 [2024-06-07 16:38:23.834951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:12578 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:57.163 [2024-06-07 16:38:23.834966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0006 p:0 m:0 dnr:0
00:29:57.163 [2024-06-07 16:38:23.848537] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672c10) with pdu=0x2000190fbcf0
00:29:57.163 [2024-06-07 16:38:23.849483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:4059 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:57.163 [2024-06-07 16:38:23.849499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0078 p:0 m:0 dnr:0
00:29:57.163 [2024-06-07 16:38:23.859920] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672c10) with pdu=0x2000190fcdd0
00:29:57.164 [2024-06-07 16:38:23.860767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:23662 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:57.164 [2024-06-07 16:38:23.860782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0066 p:0 m:0 dnr:0
00:29:57.164 [2024-06-07 16:38:23.871235] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672c10) with pdu=0x2000190f0350
00:29:57.164 [2024-06-07 16:38:23.871835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:5800 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:57.164 [2024-06-07 16:38:23.871850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0059 p:0 m:0 dnr:0
00:29:57.164 [2024-06-07 16:38:23.884786] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672c10) with pdu=0x2000190f6020
00:29:57.164 [2024-06-07 16:38:23.886255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:25362 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:57.164 [2024-06-07 16:38:23.886271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:29:57.164 [2024-06-07 16:38:23.895560] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672c10) with pdu=0x2000190e0630
00:29:57.164 [2024-06-07 16:38:23.896892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:18674 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:57.164 [2024-06-07 16:38:23.896907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:003a p:0 m:0 dnr:0
00:29:57.164 [2024-06-07 16:38:23.907147] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672c10) with pdu=0x2000190fc128
00:29:57.164 [2024-06-07 16:38:23.908350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23780 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:57.164 [2024-06-07 16:38:23.908365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0039 p:0 m:0 dnr:0
00:29:57.164 [2024-06-07 16:38:23.918459] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672c10) with pdu=0x2000190e99d8
00:29:57.164 [2024-06-07 16:38:23.919444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21330 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:57.164 [2024-06-07 16:38:23.919459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:002c p:0 m:0 dnr:0
00:29:57.164 [2024-06-07 16:38:23.929850] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672c10) with pdu=0x2000190edd58
00:29:57.164 [2024-06-07 16:38:23.930698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:15677 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:57.164 [2024-06-07 16:38:23.930714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:001a p:0 m:0 dnr:0
00:29:57.164 [2024-06-07 16:38:23.943898] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672c10) with pdu=0x2000190e2c28
00:29:57.164 [2024-06-07 16:38:23.944970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:14520 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:57.164 [2024-06-07 16:38:23.944986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:007b p:0 m:0 dnr:0
00:29:57.164 [2024-06-07 16:38:23.955215] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672c10) with pdu=0x2000190e5220
00:29:57.164 [2024-06-07 16:38:23.956041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:4455 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:57.164 [2024-06-07 16:38:23.956056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:006e p:0 m:0 dnr:0
00:29:57.164 [2024-06-07 16:38:23.966598] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672c10) with pdu=0x2000190fb048
00:29:57.164 [2024-06-07 16:38:23.967329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:4199 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:57.164 [2024-06-07 16:38:23.967344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:005c p:0 m:0 dnr:0
00:29:57.164 [2024-06-07 16:38:23.980184] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672c10) with pdu=0x2000190fb8b8
00:29:57.164 [2024-06-07 16:38:23.981747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:8720 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:57.164 [2024-06-07 16:38:23.981762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:29:57.164 [2024-06-07 16:38:23.991470] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672c10) with pdu=0x2000190dece0
00:29:57.164 [2024-06-07 16:38:23.992836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:25301 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:57.164 [2024-06-07 16:38:23.992851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0070 p:0 m:0 dnr:0
00:29:57.164 [2024-06-07 16:38:24.002817] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672c10) with pdu=0x2000190f0ff8
00:29:57.164 [2024-06-07 16:38:24.004039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:16312 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:57.164 [2024-06-07 16:38:24.004054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:005e p:0 m:0 dnr:0
00:29:57.164 [2024-06-07 16:38:24.014100] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672c10) with pdu=0x2000190f7100
00:29:57.164 [2024-06-07 16:38:24.015143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:22952 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:57.164 [2024-06-07 16:38:24.015159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0051 p:0 m:0 dnr:0
00:29:57.424 [2024-06-07 16:38:24.025473] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672c10) with pdu=0x2000190e38d0
00:29:57.424 [2024-06-07 16:38:24.026355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:22053 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:57.424 [2024-06-07 16:38:24.026370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:003f p:0 m:0 dnr:0
00:29:57.424 [2024-06-07 16:38:24.040420] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672c10) with pdu=0x2000190f1430
00:29:57.424 [2024-06-07 16:38:24.042356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:17055 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:57.424 [2024-06-07 16:38:24.042371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:29:57.424 [2024-06-07 16:38:24.051717] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672c10) with pdu=0x2000190ebb98
00:29:57.424 [2024-06-07 16:38:24.053456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:1741 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:57.424 [2024-06-07 16:38:24.053471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0072 p:0 m:0 dnr:0
00:29:57.424 [2024-06-07 16:38:24.063068] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672c10) with pdu=0x2000190e01f8
00:29:57.424 [2024-06-07 16:38:24.064669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:18820 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:57.424 [2024-06-07 16:38:24.064685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:29:57.424 [2024-06-07 16:38:24.074361] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672c10) with pdu=0x2000190fd208
00:29:57.424 [2024-06-07 16:38:24.075774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:1610 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:57.424 [2024-06-07 16:38:24.075789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0053 p:0 m:0 dnr:0
00:29:57.424 [2024-06-07 16:38:24.085749] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672c10) with pdu=0x2000190e27f0
00:29:57.424 [2024-06-07 16:38:24.087012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:4381 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:57.424 [2024-06-07 16:38:24.087027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:29:57.424 [2024-06-07 16:38:24.097033] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672c10) with pdu=0x2000190ef6a8
00:29:57.424 [2024-06-07 16:38:24.098113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:10440 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:57.424 [2024-06-07 16:38:24.098128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0034 p:0 m:0 dnr:0
00:29:57.424 [2024-06-07 16:38:24.108408] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672c10) with pdu=0x2000190fda78
00:29:57.424 [2024-06-07 16:38:24.109335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:22298 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:57.424 [2024-06-07 16:38:24.109353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:57.424 [2024-06-07 16:38:24.119718] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672c10) with pdu=0x2000190e6738
00:29:57.424 [2024-06-07 16:38:24.120449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:19831 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:57.424 [2024-06-07 16:38:24.120465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0015 p:0 m:0 dnr:0
00:29:57.424 [2024-06-07 16:38:24.133781] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672c10) with pdu=0x2000190e3498
00:29:57.424 [2024-06-07 16:38:24.134690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:18183 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:57.424 [2024-06-07 16:38:24.134706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:29:57.424 [2024-06-07 16:38:24.145120] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672c10) with pdu=0x2000190fc560
00:29:57.424 [2024-06-07 16:38:24.145941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:8444 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:57.424 [2024-06-07 16:38:24.145956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:29:57.424 [2024-06-07 16:38:24.156431] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672c10) with pdu=0x2000190e6fa8
00:29:57.424 [2024-06-07 16:38:24.157020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:21562 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:57.424 [2024-06-07 16:38:24.157035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0057 p:0 m:0 dnr:0
00:29:57.424 [2024-06-07 16:38:24.169959] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672c10) with pdu=0x2000190edd58
00:29:57.424 [2024-06-07 16:38:24.171425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:6874 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.424 [2024-06-07 16:38:24.171440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:29:57.424 [2024-06-07 16:38:24.181391] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672c10) with pdu=0x2000190dece0 00:29:57.424 [2024-06-07 16:38:24.182701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:22214 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.424 [2024-06-07 16:38:24.182716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:29:57.424 [2024-06-07 16:38:24.192681] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672c10) with pdu=0x2000190eff18 00:29:57.424 [2024-06-07 16:38:24.193798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:11757 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.424 [2024-06-07 16:38:24.193813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:57.424 [2024-06-07 16:38:24.206221] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672c10) with pdu=0x2000190fb048 00:29:57.424 [2024-06-07 16:38:24.208008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:25445 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.424 [2024-06-07 16:38:24.208023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:29:57.424 [2024-06-07 16:38:24.218014] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1672c10) with pdu=0x2000190e49b0 00:29:57.424 [2024-06-07 16:38:24.219816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:4047 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.424 [2024-06-07 16:38:24.219832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:57.424 [2024-06-07 16:38:24.229751] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672c10) with pdu=0x2000190e23b8 00:29:57.424 [2024-06-07 16:38:24.231396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:14960 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.424 [2024-06-07 16:38:24.231421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:29:57.424 [2024-06-07 16:38:24.241060] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672c10) with pdu=0x2000190fc128 00:29:57.424 [2024-06-07 16:38:24.242506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:12111 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.424 [2024-06-07 16:38:24.242521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:29:57.424 [2024-06-07 16:38:24.252411] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672c10) with pdu=0x2000190f7538 00:29:57.424 [2024-06-07 16:38:24.253707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:4552 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.424 [2024-06-07 16:38:24.253722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:29:57.424 [2024-06-07 16:38:24.263734] 
tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672c10) with pdu=0x2000190fc560 00:29:57.424 [2024-06-07 16:38:24.264835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:1504 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.424 [2024-06-07 16:38:24.264850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:29:57.425 [2024-06-07 16:38:24.275109] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672c10) with pdu=0x2000190e23b8 00:29:57.425 [2024-06-07 16:38:24.276073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:19916 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.425 [2024-06-07 16:38:24.276088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:29:57.685 [2024-06-07 16:38:24.286405] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672c10) with pdu=0x2000190fac10 00:29:57.685 [2024-06-07 16:38:24.287170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:15366 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.685 [2024-06-07 16:38:24.287185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:29:57.685 [2024-06-07 16:38:24.300454] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672c10) with pdu=0x2000190ecc78 00:29:57.685 [2024-06-07 16:38:24.301393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:5444 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.685 [2024-06-07 16:38:24.301412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0079 p:0 m:0 
dnr:0 00:29:57.685 [2024-06-07 16:38:24.311795] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672c10) with pdu=0x2000190fac10 00:29:57.685 [2024-06-07 16:38:24.312666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:16584 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.685 [2024-06-07 16:38:24.312681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:29:57.685 [2024-06-07 16:38:24.324797] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672c10) with pdu=0x2000190f3e60 00:29:57.685 [2024-06-07 16:38:24.326462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:6226 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.685 [2024-06-07 16:38:24.326477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:29:57.685 [2024-06-07 16:38:24.334364] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672c10) with pdu=0x2000190fbcf0 00:29:57.685 [2024-06-07 16:38:24.335323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:8227 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.685 [2024-06-07 16:38:24.335338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:29:57.685 [2024-06-07 16:38:24.346956] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672c10) with pdu=0x2000190f0ff8 00:29:57.685 [2024-06-07 16:38:24.347641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:11879 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.685 [2024-06-07 16:38:24.347657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:13 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:29:57.685 [2024-06-07 16:38:24.360511] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672c10) with pdu=0x2000190ea680 00:29:57.685 [2024-06-07 16:38:24.362019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:12404 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.685 [2024-06-07 16:38:24.362035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:29:57.685 [2024-06-07 16:38:24.371800] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672c10) with pdu=0x2000190f4f40 00:29:57.685 [2024-06-07 16:38:24.373124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:7979 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.685 [2024-06-07 16:38:24.373138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:29:57.685 [2024-06-07 16:38:24.383154] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672c10) with pdu=0x2000190df550 00:29:57.685 [2024-06-07 16:38:24.384319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:17132 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.685 [2024-06-07 16:38:24.384334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:57.685 [2024-06-07 16:38:24.394438] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672c10) with pdu=0x2000190e6300 00:29:57.685 [2024-06-07 16:38:24.395422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:20082 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.685 [2024-06-07 16:38:24.395437] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:29:57.685 [2024-06-07 16:38:24.405784] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672c10) with pdu=0x2000190e3d08 00:29:57.685 [2024-06-07 16:38:24.406614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:9983 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.685 [2024-06-07 16:38:24.406629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:29:57.685 [2024-06-07 16:38:24.420729] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672c10) with pdu=0x2000190e9168 00:29:57.685 [2024-06-07 16:38:24.422609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:18221 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.685 [2024-06-07 16:38:24.422627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:29:57.685 [2024-06-07 16:38:24.432022] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672c10) with pdu=0x2000190f96f8 00:29:57.685 [2024-06-07 16:38:24.433712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:14548 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.685 [2024-06-07 16:38:24.433727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:29:57.685 [2024-06-07 16:38:24.443361] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672c10) with pdu=0x2000190f35f0 00:29:57.685 [2024-06-07 16:38:24.444913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:7509 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.685 [2024-06-07 16:38:24.444928] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:29:57.685 [2024-06-07 16:38:24.454667] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672c10) with pdu=0x2000190f9f68 00:29:57.685 [2024-06-07 16:38:24.456018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:6837 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.685 [2024-06-07 16:38:24.456033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:29:57.685 [2024-06-07 16:38:24.466034] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672c10) with pdu=0x2000190f1ca0 00:29:57.685 [2024-06-07 16:38:24.467256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:6880 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.686 [2024-06-07 16:38:24.467271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:29:57.686 [2024-06-07 16:38:24.477342] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672c10) with pdu=0x2000190f1ca0 00:29:57.686 [2024-06-07 16:38:24.478356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:10622 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.686 [2024-06-07 16:38:24.478371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:29:57.686 [2024-06-07 16:38:24.488700] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672c10) with pdu=0x2000190eb328 00:29:57.686 [2024-06-07 16:38:24.489579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:942 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:29:57.686 [2024-06-07 16:38:24.489594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:29:57.686 [2024-06-07 16:38:24.502724] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672c10) with pdu=0x2000190feb58 00:29:57.686 [2024-06-07 16:38:24.503840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:21520 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.686 [2024-06-07 16:38:24.503856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:57.686 [2024-06-07 16:38:24.514736] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672c10) with pdu=0x2000190fc560 00:29:57.686 [2024-06-07 16:38:24.516109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:24699 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.686 [2024-06-07 16:38:24.516125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:29:57.686 [2024-06-07 16:38:24.526088] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672c10) with pdu=0x2000190e6fa8 00:29:57.686 [2024-06-07 16:38:24.527320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:16829 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.686 [2024-06-07 16:38:24.527334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:29:57.686 [2024-06-07 16:38:24.537408] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672c10) with pdu=0x2000190fb480 00:29:57.947 [2024-06-07 16:38:24.538458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 
lba:1228 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.947 [2024-06-07 16:38:24.538473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:29:57.947 [2024-06-07 16:38:24.548767] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672c10) with pdu=0x2000190f7da8 00:29:57.947 [2024-06-07 16:38:24.549662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:19388 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.947 [2024-06-07 16:38:24.549677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:29:57.947 [2024-06-07 16:38:24.563737] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672c10) with pdu=0x2000190f2d80 00:29:57.947 [2024-06-07 16:38:24.565681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3046 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.947 [2024-06-07 16:38:24.565696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.947 [2024-06-07 16:38:24.575022] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672c10) with pdu=0x2000190f2948 00:29:57.947 [2024-06-07 16:38:24.576776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:22934 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.947 [2024-06-07 16:38:24.576791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:29:57.947 [2024-06-07 16:38:24.586389] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672c10) with pdu=0x2000190f0bc0 00:29:57.947 [2024-06-07 16:38:24.588016] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:3249 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.947 [2024-06-07 16:38:24.588031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:57.947 [2024-06-07 16:38:24.597691] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672c10) with pdu=0x2000190f8a50 00:29:57.947 [2024-06-07 16:38:24.599107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:12438 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.947 [2024-06-07 16:38:24.599122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:29:57.947 [2024-06-07 16:38:24.609072] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672c10) with pdu=0x2000190fd208 00:29:57.947 [2024-06-07 16:38:24.610345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:184 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.947 [2024-06-07 16:38:24.610360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:57.947 [2024-06-07 16:38:24.620373] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672c10) with pdu=0x2000190ecc78 00:29:57.947 [2024-06-07 16:38:24.621463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:24123 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.947 [2024-06-07 16:38:24.621477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:29:57.947 [2024-06-07 16:38:24.631726] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672c10) with pdu=0x2000190ddc00 00:29:57.947 
[2024-06-07 16:38:24.632745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:6666 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.947 [2024-06-07 16:38:24.632760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:29:57.947 [2024-06-07 16:38:24.643108] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672c10) with pdu=0x2000190f9b30 00:29:57.947 [2024-06-07 16:38:24.643856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:17832 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.947 [2024-06-07 16:38:24.643871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:29:57.947 [2024-06-07 16:38:24.657159] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672c10) with pdu=0x2000190eb328 00:29:57.947 [2024-06-07 16:38:24.658082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:57 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.947 [2024-06-07 16:38:24.658098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:29:57.947 [2024-06-07 16:38:24.668520] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672c10) with pdu=0x2000190f2510 00:29:57.947 [2024-06-07 16:38:24.669363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:4113 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.947 [2024-06-07 16:38:24.669379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:29:57.947 [2024-06-07 16:38:24.679902] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1672c10) with pdu=0x2000190dece0 00:29:57.947 [2024-06-07 16:38:24.680867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:17464 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.947 [2024-06-07 16:38:24.680882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:29:57.947 [2024-06-07 16:38:24.691255] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672c10) with pdu=0x2000190f81e0 00:29:57.948 [2024-06-07 16:38:24.692068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:46 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.948 [2024-06-07 16:38:24.692083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:29:57.948 [2024-06-07 16:38:24.706210] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672c10) with pdu=0x2000190f7538 00:29:57.948 [2024-06-07 16:38:24.708070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:16224 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.948 [2024-06-07 16:38:24.708086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:29:57.948 [2024-06-07 16:38:24.717518] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672c10) with pdu=0x2000190f3e60 00:29:57.948 [2024-06-07 16:38:24.719182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:22698 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.948 [2024-06-07 16:38:24.719196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:29:57.948 [2024-06-07 16:38:24.728910] 
tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672c10) with pdu=0x2000190ebb98 00:29:57.948 [2024-06-07 16:38:24.730437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:1355 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.948 [2024-06-07 16:38:24.730455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:57.948 [2024-06-07 16:38:24.740219] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672c10) with pdu=0x2000190e01f8 00:29:57.948 [2024-06-07 16:38:24.741553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:16787 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.948 [2024-06-07 16:38:24.741568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:29:57.948 [2024-06-07 16:38:24.751587] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672c10) with pdu=0x2000190e49b0 00:29:57.948 [2024-06-07 16:38:24.752790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:8405 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.948 [2024-06-07 16:38:24.752805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:29:57.948 [2024-06-07 16:38:24.762925] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672c10) with pdu=0x2000190e7818 00:29:57.948 [2024-06-07 16:38:24.763925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:24779 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.948 [2024-06-07 16:38:24.763940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:002d p:0 m:0 
dnr:0 00:29:57.948 [2024-06-07 16:38:24.774295] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672c10) with pdu=0x2000190e0a68 00:29:57.948 [2024-06-07 16:38:24.775152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:24577 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.948 [2024-06-07 16:38:24.775167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:29:57.948 [2024-06-07 16:38:24.788326] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672c10) with pdu=0x2000190f2510 00:29:57.948 [2024-06-07 16:38:24.789412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:5492 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.948 [2024-06-07 16:38:24.789428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:29:57.948 [2024-06-07 16:38:24.799657] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672c10) with pdu=0x2000190efae0 00:29:58.210 [2024-06-07 16:38:24.800495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:18158 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.210 [2024-06-07 16:38:24.800510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:29:58.210 [2024-06-07 16:38:24.811008] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672c10) with pdu=0x2000190f7538 00:29:58.210 [2024-06-07 16:38:24.811769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:6758 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.210 [2024-06-07 16:38:24.811784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:41 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:29:58.210 [2024-06-07 16:38:24.824024] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672c10) with pdu=0x2000190f6cc8 00:29:58.210 [2024-06-07 16:38:24.825582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:1481 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.210 [2024-06-07 16:38:24.825597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:29:58.210 [2024-06-07 16:38:24.833602] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672c10) with pdu=0x2000190f57b0 00:29:58.210 [2024-06-07 16:38:24.834459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:12187 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.210 [2024-06-07 16:38:24.834475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:29:58.210 [2024-06-07 16:38:24.846192] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672c10) with pdu=0x2000190de038 00:29:58.210 [2024-06-07 16:38:24.847035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:11702 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.210 [2024-06-07 16:38:24.847051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:29:58.210 [2024-06-07 16:38:24.859559] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672c10) with pdu=0x2000190f7538 00:29:58.210 [2024-06-07 16:38:24.861072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:19103 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.210 [2024-06-07 16:38:24.861088] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:29:58.210 [2024-06-07 16:38:24.869131] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672c10) with pdu=0x2000190f9f68 00:29:58.210 [2024-06-07 16:38:24.869942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:19653 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.210 [2024-06-07 16:38:24.869958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:29:58.210 [2024-06-07 16:38:24.881630] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672c10) with pdu=0x2000190e1b48 00:29:58.210 [2024-06-07 16:38:24.882446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:20996 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.210 [2024-06-07 16:38:24.882461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:29:58.210 [2024-06-07 16:38:24.896568] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672c10) with pdu=0x2000190e1710 00:29:58.210 [2024-06-07 16:38:24.898442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:2273 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.210 [2024-06-07 16:38:24.898457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:29:58.210 [2024-06-07 16:38:24.907944] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672c10) with pdu=0x2000190fe2e8 00:29:58.210 [2024-06-07 16:38:24.909658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:4042 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.210 [2024-06-07 
16:38:24.909674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:29:58.210 [2024-06-07 16:38:24.919253] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672c10) with pdu=0x2000190e6fa8 00:29:58.210 [2024-06-07 16:38:24.920773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:22978 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.210 [2024-06-07 16:38:24.920788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:29:58.210 [2024-06-07 16:38:24.930627] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672c10) with pdu=0x2000190e49b0 00:29:58.210 [2024-06-07 16:38:24.932000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:23258 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.210 [2024-06-07 16:38:24.932015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:29:58.210 [2024-06-07 16:38:24.941959] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672c10) with pdu=0x2000190e0ea0 00:29:58.210 [2024-06-07 16:38:24.943152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:11736 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.210 [2024-06-07 16:38:24.943168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:29:58.210 [2024-06-07 16:38:24.953323] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672c10) with pdu=0x2000190e7818 00:29:58.210 [2024-06-07 16:38:24.954363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:15211 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:29:58.210 [2024-06-07 16:38:24.954379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:29:58.210 [2024-06-07 16:38:24.964608] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672c10) with pdu=0x2000190f1430 00:29:58.210 [2024-06-07 16:38:24.965448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:10817 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.210 [2024-06-07 16:38:24.965463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:29:58.210 [2024-06-07 16:38:24.978642] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672c10) with pdu=0x2000190fb8b8 00:29:58.210 [2024-06-07 16:38:24.979677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:24185 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.210 [2024-06-07 16:38:24.979692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:58.210 [2024-06-07 16:38:24.989980] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672c10) with pdu=0x2000190e88f8 00:29:58.210 [2024-06-07 16:38:24.990909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:16818 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.210 [2024-06-07 16:38:24.990924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:58.210 [2024-06-07 16:38:25.001299] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672c10) with pdu=0x2000190f0bc0 00:29:58.210 [2024-06-07 16:38:25.001983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:3 nsid:1 lba:51 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.210 [2024-06-07 16:38:25.001999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:58.210 [2024-06-07 16:38:25.012622] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672c10) with pdu=0x2000190e2c28 00:29:58.210 [2024-06-07 16:38:25.013213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:2672 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.210 [2024-06-07 16:38:25.013228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:29:58.210 [2024-06-07 16:38:25.026190] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672c10) with pdu=0x2000190e6300 00:29:58.210 [2024-06-07 16:38:25.027627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:2412 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.210 [2024-06-07 16:38:25.027642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:29:58.210 [2024-06-07 16:38:25.037493] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672c10) with pdu=0x2000190f4298 00:29:58.210 [2024-06-07 16:38:25.038716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:22328 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.210 [2024-06-07 16:38:25.038733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:29:58.210 [2024-06-07 16:38:25.048854] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672c10) with pdu=0x2000190fda78 00:29:58.210 [2024-06-07 16:38:25.049932] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:8200 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.210 [2024-06-07 16:38:25.049948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:29:58.210 [2024-06-07 16:38:25.060146] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672c10) with pdu=0x2000190e23b8 00:29:58.210 [2024-06-07 16:38:25.061056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:24369 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.210 [2024-06-07 16:38:25.061071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:29:58.471 [2024-06-07 16:38:25.071501] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672c10) with pdu=0x2000190fe2e8 00:29:58.471 [2024-06-07 16:38:25.072242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:623 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.471 [2024-06-07 16:38:25.072257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:29:58.471 [2024-06-07 16:38:25.086462] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672c10) with pdu=0x2000190ee5c8 00:29:58.471 [2024-06-07 16:38:25.088258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:8415 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.471 [2024-06-07 16:38:25.088273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:29:58.471 [2024-06-07 16:38:25.097755] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672c10) with pdu=0x2000190edd58 00:29:58.471 
[2024-06-07 16:38:25.099354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:22013 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.471 [2024-06-07 16:38:25.099370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:29:58.471 [2024-06-07 16:38:25.109134] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672c10) with pdu=0x2000190ec408 00:29:58.472 [2024-06-07 16:38:25.110599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:12305 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.472 [2024-06-07 16:38:25.110615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:29:58.472 [2024-06-07 16:38:25.120440] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672c10) with pdu=0x2000190f0350 00:29:58.472 [2024-06-07 16:38:25.121708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:16494 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.472 [2024-06-07 16:38:25.121723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:29:58.472 [2024-06-07 16:38:25.131850] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672c10) with pdu=0x2000190f0350 00:29:58.472 [2024-06-07 16:38:25.132978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:19153 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.472 [2024-06-07 16:38:25.132993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:29:58.472 [2024-06-07 16:38:25.143150] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1672c10) with pdu=0x2000190eb760 00:29:58.472 [2024-06-07 16:38:25.144083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:17492 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.472 [2024-06-07 16:38:25.144098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:29:58.472 [2024-06-07 16:38:25.154506] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672c10) with pdu=0x2000190e3498 00:29:58.472 [2024-06-07 16:38:25.155293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:2386 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.472 [2024-06-07 16:38:25.155308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:29:58.472 [2024-06-07 16:38:25.168535] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672c10) with pdu=0x2000190eff18 00:29:58.472 [2024-06-07 16:38:25.169549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:13443 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.472 [2024-06-07 16:38:25.169565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:29:58.472 [2024-06-07 16:38:25.179853] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672c10) with pdu=0x2000190ed0b0 00:29:58.472 [2024-06-07 16:38:25.180627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:20707 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.472 [2024-06-07 16:38:25.180642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:29:58.472 [2024-06-07 16:38:25.191220] 
tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672c10) with pdu=0x2000190df550 00:29:58.472 [2024-06-07 16:38:25.191898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:7811 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.472 [2024-06-07 16:38:25.191914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:29:58.472 [2024-06-07 16:38:25.203080] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672c10) with pdu=0x2000190e8088 00:29:58.472 [2024-06-07 16:38:25.204016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:6625 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.472 [2024-06-07 16:38:25.204032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:29:58.472 [2024-06-07 16:38:25.214306] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672c10) with pdu=0x2000190e7818 00:29:58.472 [2024-06-07 16:38:25.215220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:25596 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.472 [2024-06-07 16:38:25.215235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:29:58.472 [2024-06-07 16:38:25.226102] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672c10) with pdu=0x2000190e6300 00:29:58.472 [2024-06-07 16:38:25.226994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:16543 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.472 [2024-06-07 16:38:25.227009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0014 p:0 m:0 
dnr:0 00:29:58.472 [2024-06-07 16:38:25.237965] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672c10) with pdu=0x2000190eb328 00:29:58.472 [2024-06-07 16:38:25.238844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:5931 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.472 [2024-06-07 16:38:25.238859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:29:58.472 [2024-06-07 16:38:25.250553] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672c10) with pdu=0x2000190ef6a8 00:29:58.472 [2024-06-07 16:38:25.251420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:14687 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.472 [2024-06-07 16:38:25.251436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:29:58.472 [2024-06-07 16:38:25.263916] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672c10) with pdu=0x2000190f57b0 00:29:58.472 [2024-06-07 16:38:25.265458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:3666 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.472 [2024-06-07 16:38:25.265473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:29:58.472 [2024-06-07 16:38:25.275774] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672c10) with pdu=0x2000190e8d30 00:29:58.472 [2024-06-07 16:38:25.277303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:16187 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.472 [2024-06-07 16:38:25.277318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:16 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:29:58.472 [2024-06-07 16:38:25.286534] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672c10) with pdu=0x2000190f5378 00:29:58.472 [2024-06-07 16:38:25.287556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:21026 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.472 [2024-06-07 16:38:25.287571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:29:58.472 [2024-06-07 16:38:25.297787] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672c10) with pdu=0x2000190f7538 00:29:58.472 [2024-06-07 16:38:25.298786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:11802 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.472 [2024-06-07 16:38:25.298801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:29:58.472 [2024-06-07 16:38:25.309654] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672c10) with pdu=0x2000190e6738 00:29:58.472 [2024-06-07 16:38:25.310642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:15153 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.472 [2024-06-07 16:38:25.310657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:29:58.472 [2024-06-07 16:38:25.324388] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672c10) with pdu=0x2000190fb048 00:29:58.733 [2024-06-07 16:38:25.326245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:11166 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.733 [2024-06-07 16:38:25.326261] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:29:58.733 [2024-06-07 16:38:25.336183] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672c10) with pdu=0x2000190f6cc8 00:29:58.733 [2024-06-07 16:38:25.338035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:14953 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.733 [2024-06-07 16:38:25.338051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:29:58.733 [2024-06-07 16:38:25.346689] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672c10) with pdu=0x2000190de470 00:29:58.733 [2024-06-07 16:38:25.347822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:3859 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.733 [2024-06-07 16:38:25.347840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:29:58.733 00:29:58.733 Latency(us) 00:29:58.733 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:58.733 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:58.733 nvme0n1 : 2.01 21441.17 83.75 0.00 0.00 5961.38 2512.21 16274.77 00:29:58.733 =================================================================================================================== 00:29:58.733 Total : 21441.17 83.75 0.00 0.00 5961.38 2512.21 16274.77 00:29:58.733 0 00:29:58.733 16:38:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:29:58.733 16:38:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:29:58.733 16:38:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:29:58.733 16:38:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:29:58.733 | .driver_specific 00:29:58.733 | .nvme_error 00:29:58.733 | .status_code 00:29:58.733 | .command_transient_transport_error' 00:29:58.733 16:38:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 168 > 0 )) 00:29:58.733 16:38:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3286894 00:29:58.733 16:38:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@949 -- # '[' -z 3286894 ']' 00:29:58.733 16:38:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # kill -0 3286894 00:29:58.733 16:38:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # uname 00:29:58.733 16:38:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:29:58.733 16:38:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3286894 00:29:58.994 16:38:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:29:58.994 16:38:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:29:58.994 16:38:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3286894' 00:29:58.994 killing process with pid 3286894 00:29:58.994 16:38:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # kill 3286894 00:29:58.994 Received shutdown signal, test time was about 2.000000 seconds 00:29:58.994 00:29:58.994 Latency(us) 00:29:58.994 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:58.994 
=================================================================================================================== 00:29:58.994 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:58.994 16:38:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # wait 3286894 00:29:58.994 16:38:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:29:58.994 16:38:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:29:58.994 16:38:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:29:58.994 16:38:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:29:58.994 16:38:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:29:58.994 16:38:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3287590 00:29:58.994 16:38:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3287590 /var/tmp/bperf.sock 00:29:58.994 16:38:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@830 -- # '[' -z 3287590 ']' 00:29:58.994 16:38:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:29:58.994 16:38:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:58.994 16:38:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local max_retries=100 00:29:58.994 16:38:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:58.994 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:29:58.994 16:38:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # xtrace_disable 00:29:58.994 16:38:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:58.994 [2024-06-07 16:38:25.766160] Starting SPDK v24.09-pre git sha1 5a57befde / DPDK 24.03.0 initialization... 00:29:58.994 [2024-06-07 16:38:25.766210] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3287590 ] 00:29:58.994 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:58.994 Zero copy mechanism will not be used. 00:29:58.994 EAL: No free 2048 kB hugepages reported on node 1 00:29:58.994 [2024-06-07 16:38:25.840137] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:59.254 [2024-06-07 16:38:25.892312] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:29:59.825 16:38:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:29:59.825 16:38:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@863 -- # return 0 00:29:59.825 16:38:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:29:59.825 16:38:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:29:59.825 16:38:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:29:59.825 16:38:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:59.825 16:38:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 
00:30:00.085 16:38:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:00.085 16:38:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:00.085 16:38:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:00.347 nvme0n1 00:30:00.347 16:38:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:30:00.347 16:38:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:00.347 16:38:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:30:00.347 16:38:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:00.347 16:38:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:30:00.347 16:38:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:30:00.347 I/O size of 131072 is greater than zero copy threshold (65536). 00:30:00.347 Zero copy mechanism will not be used. 00:30:00.347 Running I/O for 2 seconds... 
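The trace above shows how the test harness decides pass/fail: after bdevperf finishes, `get_transient_errcount` runs `rpc.py bdev_get_iostat -b nvme0n1` and pipes the JSON through a `jq` filter (`.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error`), then checks the count is greater than zero. A minimal Python sketch of that same extraction follows; the JSON path is taken from the `jq` filter visible in the trace, but the sample payload and function name are illustrative assumptions, not output from a real run:

```python
import json

def get_transient_errcount(iostat_json: str) -> int:
    """Extract the transient transport error count from bdev_get_iostat output.

    Mirrors the jq filter used in the trace above:
      .bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error
    """
    stats = json.loads(iostat_json)
    return (stats["bdevs"][0]
                 ["driver_specific"]["nvme_error"]
                 ["status_code"]["command_transient_transport_error"])

# Illustrative payload only -- a real bdev_get_iostat response carries many
# more fields; 168 matches the count checked in the trace's `(( 168 > 0 ))`.
sample = json.dumps({
    "bdevs": [{
        "name": "nvme0n1",
        "driver_specific": {
            "nvme_error": {
                "status_code": {"command_transient_transport_error": 168}
            }
        }
    }]
})

if __name__ == "__main__":
    print(get_transient_errcount(sample))
```

With data digest (`--ddgst`) enabled and CRC32C corruption injected via `accel_error_inject_error`, every corrupted WRITE surfaces as a TRANSIENT TRANSPORT ERROR completion, so a nonzero count here is the expected (passing) outcome.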
00:30:00.347 [2024-06-07 16:38:27.196353] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672f50) with pdu=0x2000190fef90 00:30:00.347 [2024-06-07 16:38:27.196633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.347 [2024-06-07 16:38:27.196667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:00.608 [2024-06-07 16:38:27.210048] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672f50) with pdu=0x2000190fef90 00:30:00.608 [2024-06-07 16:38:27.210464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.608 [2024-06-07 16:38:27.210483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:00.608 [2024-06-07 16:38:27.221734] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672f50) with pdu=0x2000190fef90 00:30:00.608 [2024-06-07 16:38:27.222090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.608 [2024-06-07 16:38:27.222107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:00.608 [2024-06-07 16:38:27.233360] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672f50) with pdu=0x2000190fef90 00:30:00.608 [2024-06-07 16:38:27.233765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.608 [2024-06-07 16:38:27.233783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:00.608 [2024-06-07 16:38:27.243872] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672f50) with pdu=0x2000190fef90 00:30:00.608 [2024-06-07 16:38:27.244202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.608 [2024-06-07 16:38:27.244219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:00.608 [2024-06-07 16:38:27.252441] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672f50) with pdu=0x2000190fef90 00:30:00.608 [2024-06-07 16:38:27.252712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.608 [2024-06-07 16:38:27.252729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:00.608 [2024-06-07 16:38:27.260939] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672f50) with pdu=0x2000190fef90 00:30:00.608 [2024-06-07 16:38:27.261266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.608 [2024-06-07 16:38:27.261283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:00.608 [2024-06-07 16:38:27.270067] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672f50) with pdu=0x2000190fef90 00:30:00.608 [2024-06-07 16:38:27.270392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.608 [2024-06-07 16:38:27.270414] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:00.608 [2024-06-07 16:38:27.278998] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672f50) with pdu=0x2000190fef90 00:30:00.608 [2024-06-07 16:38:27.279328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.608 [2024-06-07 16:38:27.279345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:00.608 [2024-06-07 16:38:27.287943] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672f50) with pdu=0x2000190fef90 00:30:00.608 [2024-06-07 16:38:27.288282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.608 [2024-06-07 16:38:27.288299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:00.608 [2024-06-07 16:38:27.296454] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672f50) with pdu=0x2000190fef90 00:30:00.608 [2024-06-07 16:38:27.296769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.608 [2024-06-07 16:38:27.296785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:00.608 [2024-06-07 16:38:27.303398] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672f50) with pdu=0x2000190fef90 00:30:00.608 [2024-06-07 16:38:27.303625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:30:00.608 [2024-06-07 16:38:27.303642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:00.608 [2024-06-07 16:38:27.311436] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672f50) with pdu=0x2000190fef90 00:30:00.608 [2024-06-07 16:38:27.311796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.608 [2024-06-07 16:38:27.311812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:00.608 [2024-06-07 16:38:27.318529] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672f50) with pdu=0x2000190fef90 00:30:00.608 [2024-06-07 16:38:27.318855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.608 [2024-06-07 16:38:27.318872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:00.608 [2024-06-07 16:38:27.326953] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672f50) with pdu=0x2000190fef90 00:30:00.608 [2024-06-07 16:38:27.327268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.608 [2024-06-07 16:38:27.327285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:00.608 [2024-06-07 16:38:27.336439] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672f50) with pdu=0x2000190fef90 00:30:00.608 [2024-06-07 16:38:27.336788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.608 [2024-06-07 16:38:27.336805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:00.608 [2024-06-07 16:38:27.346191] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672f50) with pdu=0x2000190fef90 00:30:00.608 [2024-06-07 16:38:27.346508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.608 [2024-06-07 16:38:27.346525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:00.608 [2024-06-07 16:38:27.352566] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672f50) with pdu=0x2000190fef90 00:30:00.608 [2024-06-07 16:38:27.352899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.608 [2024-06-07 16:38:27.352916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:00.608 [2024-06-07 16:38:27.360469] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672f50) with pdu=0x2000190fef90 00:30:00.608 [2024-06-07 16:38:27.360805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.608 [2024-06-07 16:38:27.360822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:00.608 [2024-06-07 16:38:27.369538] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672f50) with pdu=0x2000190fef90 00:30:00.608 [2024-06-07 16:38:27.369899] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.608 [2024-06-07 16:38:27.369915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:00.608 [2024-06-07 16:38:27.378178] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672f50) with pdu=0x2000190fef90 00:30:00.608 [2024-06-07 16:38:27.378480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.608 [2024-06-07 16:38:27.378497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:00.608 [2024-06-07 16:38:27.388960] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672f50) with pdu=0x2000190fef90 00:30:00.608 [2024-06-07 16:38:27.389107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.608 [2024-06-07 16:38:27.389123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:00.608 [2024-06-07 16:38:27.397442] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672f50) with pdu=0x2000190fef90 00:30:00.608 [2024-06-07 16:38:27.397788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.608 [2024-06-07 16:38:27.397805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:00.608 [2024-06-07 16:38:27.407165] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672f50) with pdu=0x2000190fef90 
00:30:00.608 [2024-06-07 16:38:27.407485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.608 [2024-06-07 16:38:27.407502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:00.608 [2024-06-07 16:38:27.414035] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672f50) with pdu=0x2000190fef90 00:30:00.608 [2024-06-07 16:38:27.414371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.608 [2024-06-07 16:38:27.414388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:00.608 [2024-06-07 16:38:27.422101] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672f50) with pdu=0x2000190fef90 00:30:00.609 [2024-06-07 16:38:27.422321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.609 [2024-06-07 16:38:27.422337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:00.609 [2024-06-07 16:38:27.427812] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672f50) with pdu=0x2000190fef90 00:30:00.609 [2024-06-07 16:38:27.428023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.609 [2024-06-07 16:38:27.428042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:00.609 [2024-06-07 16:38:27.435366] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x1672f50) with pdu=0x2000190fef90 00:30:00.609 [2024-06-07 16:38:27.435704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.609 [2024-06-07 16:38:27.435721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:00.609 [2024-06-07 16:38:27.443062] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672f50) with pdu=0x2000190fef90 00:30:00.609 [2024-06-07 16:38:27.443176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.609 [2024-06-07 16:38:27.443191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:00.609 [2024-06-07 16:38:27.454317] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672f50) with pdu=0x2000190fef90 00:30:00.609 [2024-06-07 16:38:27.454663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.609 [2024-06-07 16:38:27.454680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:00.869 [2024-06-07 16:38:27.462150] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672f50) with pdu=0x2000190fef90 00:30:00.869 [2024-06-07 16:38:27.462473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.869 [2024-06-07 16:38:27.462490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:00.869 [2024-06-07 
16:38:27.470016] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672f50) with pdu=0x2000190fef90 00:30:00.869 [2024-06-07 16:38:27.470358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.869 [2024-06-07 16:38:27.470374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:00.869 [2024-06-07 16:38:27.477234] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672f50) with pdu=0x2000190fef90 00:30:00.869 [2024-06-07 16:38:27.477326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.869 [2024-06-07 16:38:27.477342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:00.869 [2024-06-07 16:38:27.483069] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672f50) with pdu=0x2000190fef90 00:30:00.869 [2024-06-07 16:38:27.483280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.869 [2024-06-07 16:38:27.483296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:00.869 [2024-06-07 16:38:27.489068] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672f50) with pdu=0x2000190fef90 00:30:00.869 [2024-06-07 16:38:27.489387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.869 [2024-06-07 16:38:27.489408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:00.869 [2024-06-07 16:38:27.495317] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672f50) with pdu=0x2000190fef90 00:30:00.869 [2024-06-07 16:38:27.495660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.869 [2024-06-07 16:38:27.495677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:00.869 [2024-06-07 16:38:27.501234] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672f50) with pdu=0x2000190fef90 00:30:00.869 [2024-06-07 16:38:27.501569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.869 [2024-06-07 16:38:27.501585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:00.869 [2024-06-07 16:38:27.508620] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672f50) with pdu=0x2000190fef90 00:30:00.869 [2024-06-07 16:38:27.508841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.869 [2024-06-07 16:38:27.508859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:00.869 [2024-06-07 16:38:27.517999] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672f50) with pdu=0x2000190fef90 00:30:00.869 [2024-06-07 16:38:27.518304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.869 [2024-06-07 16:38:27.518321] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:00.869 [2024-06-07 16:38:27.526834] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672f50) with pdu=0x2000190fef90 00:30:00.870 [2024-06-07 16:38:27.527178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.870 [2024-06-07 16:38:27.527195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:00.870 [2024-06-07 16:38:27.536901] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672f50) with pdu=0x2000190fef90 00:30:00.870 [2024-06-07 16:38:27.537210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.870 [2024-06-07 16:38:27.537227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:00.870 [2024-06-07 16:38:27.546788] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672f50) with pdu=0x2000190fef90 00:30:00.870 [2024-06-07 16:38:27.547098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.870 [2024-06-07 16:38:27.547114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:00.870 [2024-06-07 16:38:27.556255] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672f50) with pdu=0x2000190fef90 00:30:00.870 [2024-06-07 16:38:27.556583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.870 [2024-06-07 
16:38:27.556599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:00.870 [2024-06-07 16:38:27.565984] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672f50) with pdu=0x2000190fef90 00:30:00.870 [2024-06-07 16:38:27.566336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.870 [2024-06-07 16:38:27.566356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:00.870 [2024-06-07 16:38:27.575264] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672f50) with pdu=0x2000190fef90 00:30:00.870 [2024-06-07 16:38:27.575612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.870 [2024-06-07 16:38:27.575629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:00.870 [2024-06-07 16:38:27.582933] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672f50) with pdu=0x2000190fef90 00:30:00.870 [2024-06-07 16:38:27.583242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.870 [2024-06-07 16:38:27.583258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:00.870 [2024-06-07 16:38:27.592052] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672f50) with pdu=0x2000190fef90 00:30:00.870 [2024-06-07 16:38:27.592375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13504 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.870 [2024-06-07 16:38:27.592392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:00.870 [2024-06-07 16:38:27.601149] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672f50) with pdu=0x2000190fef90 00:30:00.870 [2024-06-07 16:38:27.601498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.870 [2024-06-07 16:38:27.601516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:00.870 [2024-06-07 16:38:27.608737] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672f50) with pdu=0x2000190fef90 00:30:00.870 [2024-06-07 16:38:27.608813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.870 [2024-06-07 16:38:27.608828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:00.870 [2024-06-07 16:38:27.617734] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672f50) with pdu=0x2000190fef90 00:30:00.870 [2024-06-07 16:38:27.618054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.870 [2024-06-07 16:38:27.618071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:00.870 [2024-06-07 16:38:27.627641] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672f50) with pdu=0x2000190fef90 00:30:00.870 [2024-06-07 16:38:27.627969] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.870 [2024-06-07 16:38:27.627986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:00.870 [2024-06-07 16:38:27.634877] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672f50) with pdu=0x2000190fef90 00:30:00.870 [2024-06-07 16:38:27.635179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.870 [2024-06-07 16:38:27.635196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:00.870 [2024-06-07 16:38:27.646309] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672f50) with pdu=0x2000190fef90 00:30:00.870 [2024-06-07 16:38:27.646668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.870 [2024-06-07 16:38:27.646684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:00.870 [2024-06-07 16:38:27.658346] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672f50) with pdu=0x2000190fef90 00:30:00.870 [2024-06-07 16:38:27.658463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.870 [2024-06-07 16:38:27.658479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:00.870 [2024-06-07 16:38:27.669768] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672f50) with pdu=0x2000190fef90 00:30:00.870 [2024-06-07 
16:38:27.670077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.870 [2024-06-07 16:38:27.670094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:00.870 [2024-06-07 16:38:27.681830] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672f50) with pdu=0x2000190fef90 00:30:00.870 [2024-06-07 16:38:27.682158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.870 [2024-06-07 16:38:27.682175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:00.870 [2024-06-07 16:38:27.693192] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672f50) with pdu=0x2000190fef90 00:30:00.870 [2024-06-07 16:38:27.693549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.870 [2024-06-07 16:38:27.693566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:00.870 [2024-06-07 16:38:27.704819] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672f50) with pdu=0x2000190fef90 00:30:00.870 [2024-06-07 16:38:27.705166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.870 [2024-06-07 16:38:27.705182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:00.870 [2024-06-07 16:38:27.713311] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1672f50) with pdu=0x2000190fef90 00:30:00.870 [2024-06-07 16:38:27.713655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.870 [2024-06-07 16:38:27.713672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:00.870 [2024-06-07 16:38:27.721151] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672f50) with pdu=0x2000190fef90 00:30:00.870 [2024-06-07 16:38:27.721485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.870 [2024-06-07 16:38:27.721502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:01.131 [2024-06-07 16:38:27.728846] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672f50) with pdu=0x2000190fef90 00:30:01.131 [2024-06-07 16:38:27.728920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.131 [2024-06-07 16:38:27.728935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:01.131 [2024-06-07 16:38:27.739760] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672f50) with pdu=0x2000190fef90 00:30:01.131 [2024-06-07 16:38:27.740072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.131 [2024-06-07 16:38:27.740089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:01.131 [2024-06-07 16:38:27.750745] 
tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672f50) with pdu=0x2000190fef90 00:30:01.131 [2024-06-07 16:38:27.751078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.131 [2024-06-07 16:38:27.751094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:01.131 [2024-06-07 16:38:27.760848] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672f50) with pdu=0x2000190fef90 00:30:01.131 [2024-06-07 16:38:27.760927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.131 [2024-06-07 16:38:27.760942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:01.131 [2024-06-07 16:38:27.771340] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672f50) with pdu=0x2000190fef90 00:30:01.131 [2024-06-07 16:38:27.771652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.131 [2024-06-07 16:38:27.771668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:01.131 [2024-06-07 16:38:27.781661] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672f50) with pdu=0x2000190fef90 00:30:01.131 [2024-06-07 16:38:27.782005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.131 [2024-06-07 16:38:27.782021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 
p:0 m:0 dnr:0
00:30:01.131 [2024-06-07 16:38:27.793243] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672f50) with pdu=0x2000190fef90
00:30:01.131 [2024-06-07 16:38:27.793560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:01.131 [2024-06-07 16:38:27.793577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:30:01.131 [2024-06-07 16:38:27.804591] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672f50) with pdu=0x2000190fef90
00:30:01.131 [2024-06-07 16:38:27.804931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:01.131 [2024-06-07 16:38:27.804947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:01.131 [2024-06-07 16:38:27.816074] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672f50) with pdu=0x2000190fef90
00:30:01.131 [2024-06-07 16:38:27.816397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:01.131 [2024-06-07 16:38:27.816419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:30:01.131 [2024-06-07 16:38:27.827724] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672f50) with pdu=0x2000190fef90
00:30:01.131 [2024-06-07 16:38:27.828066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:01.131 [2024-06-07 16:38:27.828089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:30:01.131 [2024-06-07 16:38:27.839941] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672f50) with pdu=0x2000190fef90
00:30:01.131 [2024-06-07 16:38:27.840265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:01.131 [2024-06-07 16:38:27.840281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:30:01.131 [2024-06-07 16:38:27.851778] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672f50) with pdu=0x2000190fef90
00:30:01.131 [2024-06-07 16:38:27.852131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:01.131 [2024-06-07 16:38:27.852148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:01.131 [2024-06-07 16:38:27.863132] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672f50) with pdu=0x2000190fef90
00:30:01.131 [2024-06-07 16:38:27.863461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:01.131 [2024-06-07 16:38:27.863478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:30:01.131 [2024-06-07 16:38:27.872744] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672f50) with pdu=0x2000190fef90
00:30:01.131 [2024-06-07 16:38:27.873085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:01.131 [2024-06-07 16:38:27.873101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:30:01.132 [2024-06-07 16:38:27.882941] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672f50) with pdu=0x2000190fef90
00:30:01.132 [2024-06-07 16:38:27.883286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:01.132 [2024-06-07 16:38:27.883304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:30:01.132 [2024-06-07 16:38:27.893916] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672f50) with pdu=0x2000190fef90
00:30:01.132 [2024-06-07 16:38:27.894047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:01.132 [2024-06-07 16:38:27.894063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:01.132 [2024-06-07 16:38:27.904667] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672f50) with pdu=0x2000190fef90
00:30:01.132 [2024-06-07 16:38:27.904990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:01.132 [2024-06-07 16:38:27.905006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:30:01.132 [2024-06-07 16:38:27.913252] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672f50) with pdu=0x2000190fef90
00:30:01.132 [2024-06-07 16:38:27.913571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:01.132 [2024-06-07 16:38:27.913588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:30:01.132 [2024-06-07 16:38:27.922976] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672f50) with pdu=0x2000190fef90
00:30:01.132 [2024-06-07 16:38:27.923320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:01.132 [2024-06-07 16:38:27.923336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:30:01.132 [2024-06-07 16:38:27.931580] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672f50) with pdu=0x2000190fef90
00:30:01.132 [2024-06-07 16:38:27.931915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:01.132 [2024-06-07 16:38:27.931932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:01.132 [2024-06-07 16:38:27.940604] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672f50) with pdu=0x2000190fef90
00:30:01.132 [2024-06-07 16:38:27.940825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:01.132 [2024-06-07 16:38:27.940842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:30:01.132 [2024-06-07 16:38:27.949309] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672f50) with pdu=0x2000190fef90
00:30:01.132 [2024-06-07 16:38:27.949383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:01.132 [2024-06-07 16:38:27.949397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:30:01.132 [2024-06-07 16:38:27.960098] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672f50) with pdu=0x2000190fef90
00:30:01.132 [2024-06-07 16:38:27.960407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:01.132 [2024-06-07 16:38:27.960423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:30:01.132 [2024-06-07 16:38:27.966661] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672f50) with pdu=0x2000190fef90
00:30:01.132 [2024-06-07 16:38:27.967072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:01.132 [2024-06-07 16:38:27.967088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:01.132 [2024-06-07 16:38:27.973799] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672f50) with pdu=0x2000190fef90
00:30:01.132 [2024-06-07 16:38:27.974002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:01.132 [2024-06-07 16:38:27.974019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:30:01.132 [2024-06-07 16:38:27.980980] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672f50) with pdu=0x2000190fef90
00:30:01.132 [2024-06-07 16:38:27.981320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:01.132 [2024-06-07 16:38:27.981337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:30:01.392 [2024-06-07 16:38:27.991120] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672f50) with pdu=0x2000190fef90
00:30:01.392 [2024-06-07 16:38:27.991511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:01.392 [2024-06-07 16:38:27.991528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:30:01.392 [2024-06-07 16:38:28.002112] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672f50) with pdu=0x2000190fef90
00:30:01.392 [2024-06-07 16:38:28.002343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:01.392 [2024-06-07 16:38:28.002360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:01.392 [2024-06-07 16:38:28.012293] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672f50) with pdu=0x2000190fef90
00:30:01.392 [2024-06-07 16:38:28.012641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:01.392 [2024-06-07 16:38:28.012658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:30:01.392 [2024-06-07 16:38:28.022859] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672f50) with pdu=0x2000190fef90
00:30:01.392 [2024-06-07 16:38:28.023246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:01.392 [2024-06-07 16:38:28.023263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:30:01.392 [2024-06-07 16:38:28.033155] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672f50) with pdu=0x2000190fef90
00:30:01.392 [2024-06-07 16:38:28.033558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:01.392 [2024-06-07 16:38:28.033575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:30:01.392 [2024-06-07 16:38:28.042306] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672f50) with pdu=0x2000190fef90
00:30:01.392 [2024-06-07 16:38:28.042618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:01.392 [2024-06-07 16:38:28.042635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:01.392 [2024-06-07 16:38:28.052169] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672f50) with pdu=0x2000190fef90
00:30:01.392 [2024-06-07 16:38:28.052537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:01.392 [2024-06-07 16:38:28.052554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:30:01.392 [2024-06-07 16:38:28.061982] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672f50) with pdu=0x2000190fef90
00:30:01.392 [2024-06-07 16:38:28.062318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:01.392 [2024-06-07 16:38:28.062334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:30:01.392 [2024-06-07 16:38:28.072511] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672f50) with pdu=0x2000190fef90
00:30:01.392 [2024-06-07 16:38:28.072830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:01.392 [2024-06-07 16:38:28.072846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:30:01.392 [2024-06-07 16:38:28.081756] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672f50) with pdu=0x2000190fef90
00:30:01.392 [2024-06-07 16:38:28.082075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:01.392 [2024-06-07 16:38:28.082095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:01.392 [2024-06-07 16:38:28.091055] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672f50) with pdu=0x2000190fef90
00:30:01.392 [2024-06-07 16:38:28.091458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:01.392 [2024-06-07 16:38:28.091474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:30:01.392 [2024-06-07 16:38:28.100366] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672f50) with pdu=0x2000190fef90
00:30:01.392 [2024-06-07 16:38:28.100751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:01.392 [2024-06-07 16:38:28.100768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:30:01.392 [2024-06-07 16:38:28.110429] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672f50) with pdu=0x2000190fef90
00:30:01.392 [2024-06-07 16:38:28.110749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:01.392 [2024-06-07 16:38:28.110766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:30:01.392 [2024-06-07 16:38:28.117358] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672f50) with pdu=0x2000190fef90
00:30:01.392 [2024-06-07 16:38:28.117734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:01.392 [2024-06-07 16:38:28.117750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:01.392 [2024-06-07 16:38:28.125636] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672f50) with pdu=0x2000190fef90
00:30:01.392 [2024-06-07 16:38:28.125953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:01.392 [2024-06-07 16:38:28.125969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:30:01.392 [2024-06-07 16:38:28.132313] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672f50) with pdu=0x2000190fef90
00:30:01.392 [2024-06-07 16:38:28.132521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:01.392 [2024-06-07 16:38:28.132537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:30:01.392 [2024-06-07 16:38:28.138051] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672f50) with pdu=0x2000190fef90
00:30:01.392 [2024-06-07 16:38:28.138424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:01.392 [2024-06-07 16:38:28.138440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:30:01.392 [2024-06-07 16:38:28.147165] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672f50) with pdu=0x2000190fef90
00:30:01.392 [2024-06-07 16:38:28.147366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:01.392 [2024-06-07 16:38:28.147382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:01.392 [2024-06-07 16:38:28.155444] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672f50) with pdu=0x2000190fef90
00:30:01.392 [2024-06-07 16:38:28.155783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:01.392 [2024-06-07 16:38:28.155800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:30:01.392 [2024-06-07 16:38:28.165059] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672f50) with pdu=0x2000190fef90
00:30:01.392 [2024-06-07 16:38:28.165418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:01.392 [2024-06-07 16:38:28.165435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:30:01.392 [2024-06-07 16:38:28.173350] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672f50) with pdu=0x2000190fef90
00:30:01.392 [2024-06-07 16:38:28.173575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:01.392 [2024-06-07 16:38:28.173591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:30:01.392 [2024-06-07 16:38:28.181152] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672f50) with pdu=0x2000190fef90
00:30:01.393 [2024-06-07 16:38:28.181352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:01.393 [2024-06-07 16:38:28.181368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:01.393 [2024-06-07 16:38:28.188812] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672f50) with pdu=0x2000190fef90
00:30:01.393 [2024-06-07 16:38:28.189194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:01.393 [2024-06-07 16:38:28.189210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:30:01.393 [2024-06-07 16:38:28.196290] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672f50) with pdu=0x2000190fef90
00:30:01.393 [2024-06-07 16:38:28.196494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:01.393 [2024-06-07 16:38:28.196510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:30:01.393 [2024-06-07 16:38:28.205449] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672f50) with pdu=0x2000190fef90
00:30:01.393 [2024-06-07 16:38:28.205765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:01.393 [2024-06-07 16:38:28.205782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:30:01.393 [2024-06-07 16:38:28.214600] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672f50) with pdu=0x2000190fef90
00:30:01.393 [2024-06-07 16:38:28.214913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:01.393 [2024-06-07 16:38:28.214930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:01.393 [2024-06-07 16:38:28.220920] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672f50) with pdu=0x2000190fef90
00:30:01.393 [2024-06-07 16:38:28.221305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:01.393 [2024-06-07 16:38:28.221321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:30:01.393 [2024-06-07 16:38:28.228210] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672f50) with pdu=0x2000190fef90
00:30:01.393 [2024-06-07 16:38:28.228346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:01.393 [2024-06-07 16:38:28.228362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:30:01.393 [2024-06-07 16:38:28.237244] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672f50) with pdu=0x2000190fef90
00:30:01.393 [2024-06-07 16:38:28.237483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:01.393 [2024-06-07 16:38:28.237499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:30:01.654 [2024-06-07 16:38:28.246236] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672f50) with pdu=0x2000190fef90
00:30:01.654 [2024-06-07 16:38:28.246612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:01.654 [2024-06-07 16:38:28.246628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:01.654 [2024-06-07 16:38:28.255418] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672f50) with pdu=0x2000190fef90
00:30:01.654 [2024-06-07 16:38:28.255718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:01.654 [2024-06-07 16:38:28.255734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:30:01.654 [2024-06-07 16:38:28.262945] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672f50) with pdu=0x2000190fef90
00:30:01.654 [2024-06-07 16:38:28.263146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:01.654 [2024-06-07 16:38:28.263162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:30:01.654 [2024-06-07 16:38:28.271720] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672f50) with pdu=0x2000190fef90
00:30:01.654 [2024-06-07 16:38:28.272081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:01.654 [2024-06-07 16:38:28.272098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:30:01.654 [2024-06-07 16:38:28.280342] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672f50) with pdu=0x2000190fef90
00:30:01.654 [2024-06-07 16:38:28.280692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:01.654 [2024-06-07 16:38:28.280709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:01.654 [2024-06-07 16:38:28.290186] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672f50) with pdu=0x2000190fef90
00:30:01.654 [2024-06-07 16:38:28.290532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:01.654 [2024-06-07 16:38:28.290548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:30:01.654 [2024-06-07 16:38:28.300518] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672f50) with pdu=0x2000190fef90
00:30:01.654 [2024-06-07 16:38:28.300837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:01.654 [2024-06-07 16:38:28.300854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:30:01.654 [2024-06-07 16:38:28.311395] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672f50) with pdu=0x2000190fef90
00:30:01.654 [2024-06-07 16:38:28.311760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:01.654 [2024-06-07 16:38:28.311776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:30:01.654 [2024-06-07 16:38:28.322508] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672f50) with pdu=0x2000190fef90
00:30:01.654 [2024-06-07 16:38:28.322757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:01.654 [2024-06-07 16:38:28.322774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:01.654 [2024-06-07 16:38:28.331696] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672f50) with pdu=0x2000190fef90
00:30:01.654 [2024-06-07 16:38:28.332084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:01.654 [2024-06-07 16:38:28.332100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:30:01.654 [2024-06-07 16:38:28.340433] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672f50) with pdu=0x2000190fef90
00:30:01.654 [2024-06-07 16:38:28.340827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:01.654 [2024-06-07 16:38:28.340843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:30:01.654 [2024-06-07 16:38:28.351347] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672f50) with pdu=0x2000190fef90
00:30:01.654 [2024-06-07 16:38:28.351747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:01.654 [2024-06-07 16:38:28.351764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:30:01.654 [2024-06-07 16:38:28.360662] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672f50) with pdu=0x2000190fef90
00:30:01.654 [2024-06-07 16:38:28.361046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:01.654 [2024-06-07 16:38:28.361062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:01.654 [2024-06-07 16:38:28.371515] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672f50) with pdu=0x2000190fef90
00:30:01.654 [2024-06-07 16:38:28.371981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:01.654 [2024-06-07 16:38:28.371997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:30:01.654 [2024-06-07 16:38:28.383312] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672f50) with pdu=0x2000190fef90
00:30:01.654 [2024-06-07 16:38:28.383580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:01.654 [2024-06-07 16:38:28.383597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:30:01.654 [2024-06-07 16:38:28.394689] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672f50) with pdu=0x2000190fef90
00:30:01.654 [2024-06-07 16:38:28.395113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:01.654 [2024-06-07 16:38:28.395129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:30:01.654 [2024-06-07 16:38:28.406139] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672f50) with pdu=0x2000190fef90
00:30:01.654 [2024-06-07 16:38:28.406488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:01.654 [2024-06-07 16:38:28.406504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:01.654 [2024-06-07 16:38:28.417444] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672f50) with pdu=0x2000190fef90
00:30:01.654 [2024-06-07 16:38:28.417838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:01.654 [2024-06-07 16:38:28.417855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:30:01.654 [2024-06-07 16:38:28.428316] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672f50) with pdu=0x2000190fef90
00:30:01.654 [2024-06-07 16:38:28.428730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:01.654 [2024-06-07 16:38:28.428747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:30:01.654 [2024-06-07 16:38:28.439142] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672f50) with pdu=0x2000190fef90
00:30:01.654 [2024-06-07 16:38:28.439418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:01.654 [2024-06-07 16:38:28.439434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:30:01.654 [2024-06-07 16:38:28.447013] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672f50) with pdu=0x2000190fef90
00:30:01.654 [2024-06-07 16:38:28.447246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:01.654 [2024-06-07 16:38:28.447262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:01.654 [2024-06-07 16:38:28.457227] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672f50) with pdu=0x2000190fef90
00:30:01.654 [2024-06-07 16:38:28.457638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:01.654 [2024-06-07 16:38:28.457655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:30:01.654 [2024-06-07 16:38:28.466553] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672f50) with pdu=0x2000190fef90
00:30:01.655 [2024-06-07 16:38:28.466949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:01.655 [2024-06-07 16:38:28.466965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:30:01.655 [2024-06-07 16:38:28.476367] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672f50) with pdu=0x2000190fef90
00:30:01.655 [2024-06-07 16:38:28.476877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:01.655 [2024-06-07 16:38:28.476896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:30:01.655 [2024-06-07 16:38:28.487664] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672f50) with pdu=0x2000190fef90
00:30:01.655 [2024-06-07 16:38:28.488003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:01.655 [2024-06-07 16:38:28.488019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:01.655 [2024-06-07 16:38:28.496909] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672f50) with pdu=0x2000190fef90
00:30:01.655 [2024-06-07 16:38:28.497318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:01.655 [2024-06-07 16:38:28.497335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:30:01.655 [2024-06-07 16:38:28.505574] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672f50) with pdu=0x2000190fef90
00:30:01.655 [2024-06-07 16:38:28.505776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:01.655 [2024-06-07 16:38:28.505793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:30:01.916 [2024-06-07 16:38:28.515451] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672f50) with pdu=0x2000190fef90
00:30:01.916 [2024-06-07 16:38:28.515872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:01.916 [2024-06-07 16:38:28.515889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:30:01.916 [2024-06-07 16:38:28.525050] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672f50) with pdu=0x2000190fef90
00:30:01.916 [2024-06-07 16:38:28.525310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:01.916 [2024-06-07 16:38:28.525326]
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:01.916 [2024-06-07 16:38:28.533597] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672f50) with pdu=0x2000190fef90 00:30:01.916 [2024-06-07 16:38:28.533808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.916 [2024-06-07 16:38:28.533824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:01.916 [2024-06-07 16:38:28.540076] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672f50) with pdu=0x2000190fef90 00:30:01.916 [2024-06-07 16:38:28.540240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.916 [2024-06-07 16:38:28.540256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:01.916 [2024-06-07 16:38:28.546007] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672f50) with pdu=0x2000190fef90 00:30:01.916 [2024-06-07 16:38:28.546355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.916 [2024-06-07 16:38:28.546372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:01.916 [2024-06-07 16:38:28.552318] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672f50) with pdu=0x2000190fef90 00:30:01.916 [2024-06-07 16:38:28.552698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:30:01.916 [2024-06-07 16:38:28.552715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:01.916 [2024-06-07 16:38:28.558202] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672f50) with pdu=0x2000190fef90 00:30:01.916 [2024-06-07 16:38:28.558542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.916 [2024-06-07 16:38:28.558559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:01.916 [2024-06-07 16:38:28.566755] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672f50) with pdu=0x2000190fef90 00:30:01.916 [2024-06-07 16:38:28.567101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.916 [2024-06-07 16:38:28.567118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:01.917 [2024-06-07 16:38:28.573476] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672f50) with pdu=0x2000190fef90 00:30:01.917 [2024-06-07 16:38:28.573869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.917 [2024-06-07 16:38:28.573885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:01.917 [2024-06-07 16:38:28.580924] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672f50) with pdu=0x2000190fef90 00:30:01.917 [2024-06-07 16:38:28.581246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:15 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.917 [2024-06-07 16:38:28.581262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:01.917 [2024-06-07 16:38:28.589775] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672f50) with pdu=0x2000190fef90 00:30:01.917 [2024-06-07 16:38:28.589986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.917 [2024-06-07 16:38:28.590002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:01.917 [2024-06-07 16:38:28.596311] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672f50) with pdu=0x2000190fef90 00:30:01.917 [2024-06-07 16:38:28.596514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.917 [2024-06-07 16:38:28.596530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:01.917 [2024-06-07 16:38:28.605476] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672f50) with pdu=0x2000190fef90 00:30:01.917 [2024-06-07 16:38:28.605691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.917 [2024-06-07 16:38:28.605706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:01.917 [2024-06-07 16:38:28.612883] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672f50) with pdu=0x2000190fef90 00:30:01.917 [2024-06-07 16:38:28.613082] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.917 [2024-06-07 16:38:28.613098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:01.917 [2024-06-07 16:38:28.620397] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672f50) with pdu=0x2000190fef90 00:30:01.917 [2024-06-07 16:38:28.620900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.917 [2024-06-07 16:38:28.620916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:01.917 [2024-06-07 16:38:28.626493] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672f50) with pdu=0x2000190fef90 00:30:01.917 [2024-06-07 16:38:28.626697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.917 [2024-06-07 16:38:28.626713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:01.917 [2024-06-07 16:38:28.634182] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672f50) with pdu=0x2000190fef90 00:30:01.917 [2024-06-07 16:38:28.634413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.917 [2024-06-07 16:38:28.634430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:01.917 [2024-06-07 16:38:28.640434] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672f50) with pdu=0x2000190fef90 
00:30:01.917 [2024-06-07 16:38:28.640899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.917 [2024-06-07 16:38:28.640915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:01.917 [2024-06-07 16:38:28.649102] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672f50) with pdu=0x2000190fef90 00:30:01.917 [2024-06-07 16:38:28.649328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.917 [2024-06-07 16:38:28.649345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:01.917 [2024-06-07 16:38:28.655748] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672f50) with pdu=0x2000190fef90 00:30:01.917 [2024-06-07 16:38:28.656059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.917 [2024-06-07 16:38:28.656075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:01.917 [2024-06-07 16:38:28.664827] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672f50) with pdu=0x2000190fef90 00:30:01.917 [2024-06-07 16:38:28.665163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.917 [2024-06-07 16:38:28.665180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:01.917 [2024-06-07 16:38:28.670375] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x1672f50) with pdu=0x2000190fef90 00:30:01.917 [2024-06-07 16:38:28.670755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.917 [2024-06-07 16:38:28.670771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:01.917 [2024-06-07 16:38:28.678691] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672f50) with pdu=0x2000190fef90 00:30:01.917 [2024-06-07 16:38:28.679046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.917 [2024-06-07 16:38:28.679066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:01.917 [2024-06-07 16:38:28.685127] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672f50) with pdu=0x2000190fef90 00:30:01.917 [2024-06-07 16:38:28.685326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.917 [2024-06-07 16:38:28.685342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:01.917 [2024-06-07 16:38:28.691214] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672f50) with pdu=0x2000190fef90 00:30:01.917 [2024-06-07 16:38:28.691556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.917 [2024-06-07 16:38:28.691573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:01.917 [2024-06-07 
16:38:28.698053] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672f50) with pdu=0x2000190fef90 00:30:01.917 [2024-06-07 16:38:28.698263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.917 [2024-06-07 16:38:28.698279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:01.917 [2024-06-07 16:38:28.703606] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672f50) with pdu=0x2000190fef90 00:30:01.917 [2024-06-07 16:38:28.703907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.917 [2024-06-07 16:38:28.703923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:01.917 [2024-06-07 16:38:28.710096] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672f50) with pdu=0x2000190fef90 00:30:01.917 [2024-06-07 16:38:28.710294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.917 [2024-06-07 16:38:28.710310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:01.917 [2024-06-07 16:38:28.718591] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672f50) with pdu=0x2000190fef90 00:30:01.917 [2024-06-07 16:38:28.718948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.917 [2024-06-07 16:38:28.718964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:01.917 [2024-06-07 16:38:28.726549] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672f50) with pdu=0x2000190fef90 00:30:01.917 [2024-06-07 16:38:28.726932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.917 [2024-06-07 16:38:28.726950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:01.917 [2024-06-07 16:38:28.734158] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672f50) with pdu=0x2000190fef90 00:30:01.917 [2024-06-07 16:38:28.734358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.917 [2024-06-07 16:38:28.734374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:01.917 [2024-06-07 16:38:28.741034] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672f50) with pdu=0x2000190fef90 00:30:01.917 [2024-06-07 16:38:28.741410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.917 [2024-06-07 16:38:28.741427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:01.917 [2024-06-07 16:38:28.749695] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672f50) with pdu=0x2000190fef90 00:30:01.917 [2024-06-07 16:38:28.749850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.917 [2024-06-07 16:38:28.749865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:01.917 [2024-06-07 16:38:28.757155] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672f50) with pdu=0x2000190fef90 00:30:01.917 [2024-06-07 16:38:28.757483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.917 [2024-06-07 16:38:28.757500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:01.917 [2024-06-07 16:38:28.766505] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672f50) with pdu=0x2000190fef90 00:30:01.917 [2024-06-07 16:38:28.766907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.918 [2024-06-07 16:38:28.766924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:02.179 [2024-06-07 16:38:28.776641] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672f50) with pdu=0x2000190fef90 00:30:02.179 [2024-06-07 16:38:28.777049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:02.179 [2024-06-07 16:38:28.777065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:02.179 [2024-06-07 16:38:28.787058] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672f50) with pdu=0x2000190fef90 00:30:02.179 [2024-06-07 16:38:28.787390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:02.179 [2024-06-07 16:38:28.787413] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:02.179 [2024-06-07 16:38:28.795025] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672f50) with pdu=0x2000190fef90 00:30:02.179 [2024-06-07 16:38:28.795223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:02.179 [2024-06-07 16:38:28.795239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:02.179 [2024-06-07 16:38:28.800343] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672f50) with pdu=0x2000190fef90 00:30:02.179 [2024-06-07 16:38:28.800544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:02.179 [2024-06-07 16:38:28.800561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:02.179 [2024-06-07 16:38:28.805261] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672f50) with pdu=0x2000190fef90 00:30:02.179 [2024-06-07 16:38:28.805466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:02.179 [2024-06-07 16:38:28.805482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:02.179 [2024-06-07 16:38:28.811692] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672f50) with pdu=0x2000190fef90 00:30:02.179 [2024-06-07 16:38:28.812022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:30:02.179 [2024-06-07 16:38:28.812038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:02.179 [2024-06-07 16:38:28.819636] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672f50) with pdu=0x2000190fef90 00:30:02.179 [2024-06-07 16:38:28.819835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:02.179 [2024-06-07 16:38:28.819852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:02.180 [2024-06-07 16:38:28.826850] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672f50) with pdu=0x2000190fef90 00:30:02.180 [2024-06-07 16:38:28.827048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:02.180 [2024-06-07 16:38:28.827065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:02.180 [2024-06-07 16:38:28.833554] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672f50) with pdu=0x2000190fef90 00:30:02.180 [2024-06-07 16:38:28.833771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:02.180 [2024-06-07 16:38:28.833788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:02.180 [2024-06-07 16:38:28.840680] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672f50) with pdu=0x2000190fef90 00:30:02.180 [2024-06-07 16:38:28.841092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:15 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:02.180 [2024-06-07 16:38:28.841108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:02.180 [2024-06-07 16:38:28.851831] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672f50) with pdu=0x2000190fef90 00:30:02.180 [2024-06-07 16:38:28.852034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:02.180 [2024-06-07 16:38:28.852052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:02.180 [2024-06-07 16:38:28.861752] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672f50) with pdu=0x2000190fef90 00:30:02.180 [2024-06-07 16:38:28.862182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:02.180 [2024-06-07 16:38:28.862199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:02.180 [2024-06-07 16:38:28.872088] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672f50) with pdu=0x2000190fef90 00:30:02.180 [2024-06-07 16:38:28.872333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:02.180 [2024-06-07 16:38:28.872349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:02.180 [2024-06-07 16:38:28.883038] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672f50) with pdu=0x2000190fef90 00:30:02.180 [2024-06-07 16:38:28.883407] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:02.180 [2024-06-07 16:38:28.883427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:02.180 [2024-06-07 16:38:28.892042] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672f50) with pdu=0x2000190fef90 00:30:02.180 [2024-06-07 16:38:28.892531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:02.180 [2024-06-07 16:38:28.892548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:02.180 [2024-06-07 16:38:28.902055] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672f50) with pdu=0x2000190fef90 00:30:02.180 [2024-06-07 16:38:28.902344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:02.180 [2024-06-07 16:38:28.902361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:02.180 [2024-06-07 16:38:28.911637] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672f50) with pdu=0x2000190fef90 00:30:02.180 [2024-06-07 16:38:28.911903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:02.180 [2024-06-07 16:38:28.911920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:02.180 [2024-06-07 16:38:28.921695] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672f50) with pdu=0x2000190fef90 
00:30:02.180 [2024-06-07 16:38:28.922046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:02.180 [2024-06-07 16:38:28.922062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:02.180 [2024-06-07 16:38:28.933270] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672f50) with pdu=0x2000190fef90 00:30:02.180 [2024-06-07 16:38:28.933610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:02.180 [2024-06-07 16:38:28.933627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:02.180 [2024-06-07 16:38:28.944728] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672f50) with pdu=0x2000190fef90 00:30:02.180 [2024-06-07 16:38:28.945022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:02.180 [2024-06-07 16:38:28.945039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:02.180 [2024-06-07 16:38:28.955903] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672f50) with pdu=0x2000190fef90 00:30:02.180 [2024-06-07 16:38:28.956216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:02.180 [2024-06-07 16:38:28.956232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:02.180 [2024-06-07 16:38:28.967648] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x1672f50) with pdu=0x2000190fef90 00:30:02.180 [2024-06-07 16:38:28.968043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:02.180 [2024-06-07 16:38:28.968059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:02.180 [2024-06-07 16:38:28.979188] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672f50) with pdu=0x2000190fef90 00:30:02.180 [2024-06-07 16:38:28.979627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:02.180 [2024-06-07 16:38:28.979644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:02.180 [2024-06-07 16:38:28.991282] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672f50) with pdu=0x2000190fef90 00:30:02.180 [2024-06-07 16:38:28.991687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:02.180 [2024-06-07 16:38:28.991704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:02.180 [2024-06-07 16:38:29.002561] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672f50) with pdu=0x2000190fef90 00:30:02.180 [2024-06-07 16:38:29.002914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:02.180 [2024-06-07 16:38:29.002931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:02.180 [2024-06-07 
16:38:29.013007] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672f50) with pdu=0x2000190fef90 00:30:02.180 [2024-06-07 16:38:29.013332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:02.180 [2024-06-07 16:38:29.013348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:02.180 [2024-06-07 16:38:29.022513] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672f50) with pdu=0x2000190fef90 00:30:02.180 [2024-06-07 16:38:29.022917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:02.180 [2024-06-07 16:38:29.022933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:02.180 [2024-06-07 16:38:29.031460] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672f50) with pdu=0x2000190fef90 00:30:02.180 [2024-06-07 16:38:29.031876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:02.180 [2024-06-07 16:38:29.031892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:02.441 [2024-06-07 16:38:29.041983] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672f50) with pdu=0x2000190fef90 00:30:02.441 [2024-06-07 16:38:29.042281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:02.441 [2024-06-07 16:38:29.042298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:02.441 [2024-06-07 16:38:29.051791] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672f50) with pdu=0x2000190fef90 00:30:02.441 [2024-06-07 16:38:29.052060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:02.441 [2024-06-07 16:38:29.052077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:02.441 [2024-06-07 16:38:29.060578] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672f50) with pdu=0x2000190fef90 00:30:02.441 [2024-06-07 16:38:29.060854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:02.441 [2024-06-07 16:38:29.060874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:02.441 [2024-06-07 16:38:29.070925] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672f50) with pdu=0x2000190fef90 00:30:02.441 [2024-06-07 16:38:29.071309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:02.441 [2024-06-07 16:38:29.071326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:02.441 [2024-06-07 16:38:29.081040] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672f50) with pdu=0x2000190fef90 00:30:02.441 [2024-06-07 16:38:29.081372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:02.441 [2024-06-07 16:38:29.081389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:02.441 [2024-06-07 16:38:29.091662] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672f50) with pdu=0x2000190fef90 00:30:02.441 [2024-06-07 16:38:29.092045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:02.441 [2024-06-07 16:38:29.092062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:02.441 [2024-06-07 16:38:29.101982] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672f50) with pdu=0x2000190fef90 00:30:02.441 [2024-06-07 16:38:29.102352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:02.441 [2024-06-07 16:38:29.102369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:02.441 [2024-06-07 16:38:29.113340] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672f50) with pdu=0x2000190fef90 00:30:02.441 [2024-06-07 16:38:29.113727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:02.441 [2024-06-07 16:38:29.113743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:02.441 [2024-06-07 16:38:29.125482] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672f50) with pdu=0x2000190fef90 00:30:02.441 [2024-06-07 16:38:29.125919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:02.441 [2024-06-07 16:38:29.125935] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:02.441 [2024-06-07 16:38:29.137281] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672f50) with pdu=0x2000190fef90 00:30:02.441 [2024-06-07 16:38:29.137697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:02.441 [2024-06-07 16:38:29.137713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:02.441 [2024-06-07 16:38:29.149240] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672f50) with pdu=0x2000190fef90 00:30:02.441 [2024-06-07 16:38:29.149543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:02.441 [2024-06-07 16:38:29.149559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:02.441 [2024-06-07 16:38:29.161034] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672f50) with pdu=0x2000190fef90 00:30:02.441 [2024-06-07 16:38:29.161297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:02.441 [2024-06-07 16:38:29.161313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:02.441 [2024-06-07 16:38:29.173480] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672f50) with pdu=0x2000190fef90 00:30:02.441 [2024-06-07 16:38:29.173910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:30:02.441 [2024-06-07 16:38:29.173926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:02.441 [2024-06-07 16:38:29.184346] tcp.c:2077:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1672f50) with pdu=0x2000190fef90 00:30:02.442 [2024-06-07 16:38:29.184716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:02.442 [2024-06-07 16:38:29.184731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:02.442 00:30:02.442 Latency(us) 00:30:02.442 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:02.442 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:30:02.442 nvme0n1 : 2.01 3385.50 423.19 0.00 0.00 4716.02 2184.53 15400.96 00:30:02.442 =================================================================================================================== 00:30:02.442 Total : 3385.50 423.19 0.00 0.00 4716.02 2184.53 15400.96 00:30:02.442 0 00:30:02.442 16:38:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:30:02.442 16:38:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:30:02.442 16:38:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:30:02.442 | .driver_specific 00:30:02.442 | .nvme_error 00:30:02.442 | .status_code 00:30:02.442 | .command_transient_transport_error' 00:30:02.442 16:38:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:30:02.703 16:38:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 219 > 
0 )) 00:30:02.703 16:38:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3287590 00:30:02.703 16:38:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@949 -- # '[' -z 3287590 ']' 00:30:02.703 16:38:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # kill -0 3287590 00:30:02.703 16:38:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # uname 00:30:02.703 16:38:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:30:02.703 16:38:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3287590 00:30:02.703 16:38:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:30:02.703 16:38:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:30:02.703 16:38:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3287590' 00:30:02.703 killing process with pid 3287590 00:30:02.703 16:38:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # kill 3287590 00:30:02.703 Received shutdown signal, test time was about 2.000000 seconds 00:30:02.703 00:30:02.703 Latency(us) 00:30:02.703 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:02.703 =================================================================================================================== 00:30:02.703 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:02.703 16:38:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # wait 3287590 00:30:02.703 16:38:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 3285175 00:30:02.703 16:38:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@949 -- # '[' -z 3285175 ']' 00:30:02.703 16:38:29 
nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # kill -0 3285175 00:30:02.703 16:38:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # uname 00:30:02.703 16:38:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:30:02.703 16:38:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3285175 00:30:02.965 16:38:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:30:02.965 16:38:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:30:02.965 16:38:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3285175' 00:30:02.965 killing process with pid 3285175 00:30:02.965 16:38:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # kill 3285175 00:30:02.965 16:38:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # wait 3285175 00:30:02.965 00:30:02.965 real 0m16.172s 00:30:02.965 user 0m31.489s 00:30:02.965 sys 0m3.370s 00:30:02.965 16:38:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1125 -- # xtrace_disable 00:30:02.965 16:38:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:30:02.965 ************************************ 00:30:02.965 END TEST nvmf_digest_error 00:30:02.965 ************************************ 00:30:02.965 16:38:29 nvmf_tcp.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:30:02.965 16:38:29 nvmf_tcp.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:30:02.965 16:38:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@488 -- # nvmfcleanup 00:30:02.965 16:38:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@117 -- # sync 00:30:02.965 16:38:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:02.965 
16:38:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@120 -- # set +e 00:30:02.965 16:38:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:02.965 16:38:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:02.965 rmmod nvme_tcp 00:30:02.965 rmmod nvme_fabrics 00:30:03.225 rmmod nvme_keyring 00:30:03.225 16:38:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:03.225 16:38:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@124 -- # set -e 00:30:03.225 16:38:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@125 -- # return 0 00:30:03.225 16:38:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@489 -- # '[' -n 3285175 ']' 00:30:03.225 16:38:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@490 -- # killprocess 3285175 00:30:03.225 16:38:29 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@949 -- # '[' -z 3285175 ']' 00:30:03.225 16:38:29 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@953 -- # kill -0 3285175 00:30:03.225 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 953: kill: (3285175) - No such process 00:30:03.225 16:38:29 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@976 -- # echo 'Process with pid 3285175 is not found' 00:30:03.225 Process with pid 3285175 is not found 00:30:03.225 16:38:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:30:03.225 16:38:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:30:03.225 16:38:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:30:03.225 16:38:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:03.225 16:38:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:03.225 16:38:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:03.225 16:38:29 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:03.225 16:38:29 
nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:05.190 16:38:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:30:05.190 00:30:05.190 real 0m42.135s 00:30:05.190 user 1m5.778s 00:30:05.190 sys 0m12.008s 00:30:05.190 16:38:31 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1125 -- # xtrace_disable 00:30:05.190 16:38:31 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:30:05.190 ************************************ 00:30:05.190 END TEST nvmf_digest 00:30:05.190 ************************************ 00:30:05.190 16:38:31 nvmf_tcp -- nvmf/nvmf.sh@112 -- # [[ 0 -eq 1 ]] 00:30:05.190 16:38:31 nvmf_tcp -- nvmf/nvmf.sh@117 -- # [[ 0 -eq 1 ]] 00:30:05.190 16:38:31 nvmf_tcp -- nvmf/nvmf.sh@122 -- # [[ phy == phy ]] 00:30:05.190 16:38:31 nvmf_tcp -- nvmf/nvmf.sh@123 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:30:05.190 16:38:31 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:30:05.190 16:38:31 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:30:05.190 16:38:31 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:05.190 ************************************ 00:30:05.190 START TEST nvmf_bdevperf 00:30:05.190 ************************************ 00:30:05.190 16:38:32 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:30:05.450 * Looking for test storage... 
00:30:05.450 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:05.450 16:38:32 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:05.450 16:38:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:30:05.450 16:38:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:05.450 16:38:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:05.450 16:38:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:05.450 16:38:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:05.450 16:38:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:05.450 16:38:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:05.450 16:38:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:05.450 16:38:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:05.450 16:38:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:05.450 16:38:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:05.450 16:38:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:05.450 16:38:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:05.450 16:38:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:05.450 16:38:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:05.450 16:38:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:05.450 16:38:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:05.450 16:38:32 
nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:05.450 16:38:32 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:05.450 16:38:32 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:05.450 16:38:32 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:05.450 16:38:32 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:05.450 16:38:32 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:05.450 16:38:32 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:05.451 16:38:32 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:30:05.451 16:38:32 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:05.451 16:38:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@47 -- # : 0 00:30:05.451 16:38:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:05.451 16:38:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:05.451 16:38:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:05.451 16:38:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:05.451 16:38:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:05.451 16:38:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:05.451 16:38:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:05.451 16:38:32 
nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:05.451 16:38:32 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:05.451 16:38:32 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:05.451 16:38:32 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:30:05.451 16:38:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:30:05.451 16:38:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:05.451 16:38:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@448 -- # prepare_net_devs 00:30:05.451 16:38:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:30:05.451 16:38:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:30:05.451 16:38:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:05.451 16:38:32 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:05.451 16:38:32 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:05.451 16:38:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:30:05.451 16:38:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:30:05.451 16:38:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@285 -- # xtrace_disable 00:30:05.451 16:38:32 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:12.038 16:38:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:12.038 16:38:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # pci_devs=() 00:30:12.038 16:38:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:12.038 16:38:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:12.038 16:38:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:12.038 16:38:38 
nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:12.038 16:38:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:12.038 16:38:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # net_devs=() 00:30:12.038 16:38:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:12.038 16:38:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # e810=() 00:30:12.038 16:38:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # local -ga e810 00:30:12.038 16:38:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # x722=() 00:30:12.038 16:38:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # local -ga x722 00:30:12.038 16:38:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # mlx=() 00:30:12.038 16:38:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # local -ga mlx 00:30:12.038 16:38:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:12.038 16:38:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:12.038 16:38:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:12.038 16:38:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:12.038 16:38:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:12.038 16:38:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:12.038 16:38:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:12.038 16:38:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:12.038 16:38:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:12.038 16:38:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:12.038 
16:38:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:12.038 16:38:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:12.038 16:38:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:30:12.038 16:38:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:30:12.038 16:38:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:30:12.038 16:38:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:30:12.038 16:38:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:12.038 16:38:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:12.038 16:38:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:30:12.038 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:30:12.038 16:38:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:12.038 16:38:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:12.038 16:38:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:12.038 16:38:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:12.038 16:38:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:12.038 16:38:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:12.038 16:38:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:30:12.038 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:30:12.038 16:38:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:12.038 16:38:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:12.038 16:38:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:12.038 16:38:38 nvmf_tcp.nvmf_bdevperf -- 
nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:12.038 16:38:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:12.038 16:38:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:12.038 16:38:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:30:12.038 16:38:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:30:12.038 16:38:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:12.038 16:38:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:12.038 16:38:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:12.038 16:38:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:12.038 16:38:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:12.038 16:38:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:12.038 16:38:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:12.038 16:38:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:30:12.038 Found net devices under 0000:4b:00.0: cvl_0_0 00:30:12.038 16:38:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:12.038 16:38:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:12.038 16:38:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:12.038 16:38:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:12.038 16:38:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:12.038 16:38:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:12.038 16:38:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 
00:30:12.038 16:38:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:12.038 16:38:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:30:12.038 Found net devices under 0000:4b:00.1: cvl_0_1 00:30:12.038 16:38:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:12.038 16:38:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:30:12.038 16:38:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # is_hw=yes 00:30:12.038 16:38:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:30:12.038 16:38:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:30:12.038 16:38:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:30:12.038 16:38:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:12.038 16:38:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:12.038 16:38:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:12.038 16:38:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:30:12.038 16:38:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:12.038 16:38:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:12.038 16:38:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:30:12.038 16:38:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:12.038 16:38:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:12.038 16:38:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:30:12.038 16:38:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:30:12.038 16:38:38 
nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:30:12.038 16:38:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:12.299 16:38:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:12.299 16:38:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:12.299 16:38:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:30:12.299 16:38:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:12.299 16:38:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:12.300 16:38:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:12.300 16:38:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:30:12.300 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:12.300 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.611 ms 00:30:12.300 00:30:12.300 --- 10.0.0.2 ping statistics --- 00:30:12.300 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:12.300 rtt min/avg/max/mdev = 0.611/0.611/0.611/0.000 ms 00:30:12.561 16:38:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:12.561 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:12.561 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.341 ms 00:30:12.561 00:30:12.561 --- 10.0.0.1 ping statistics --- 00:30:12.561 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:12.561 rtt min/avg/max/mdev = 0.341/0.341/0.341/0.000 ms 00:30:12.561 16:38:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:12.561 16:38:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@422 -- # return 0 00:30:12.561 16:38:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:30:12.561 16:38:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:12.561 16:38:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:30:12.561 16:38:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:30:12.561 16:38:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:12.561 16:38:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:30:12.561 16:38:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:30:12.561 16:38:39 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:30:12.561 16:38:39 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:30:12.561 16:38:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:30:12.561 16:38:39 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@723 -- # xtrace_disable 00:30:12.561 16:38:39 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:12.561 16:38:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=3292406 00:30:12.561 16:38:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 3292406 00:30:12.561 16:38:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:30:12.561 16:38:39 
nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@830 -- # '[' -z 3292406 ']' 00:30:12.561 16:38:39 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:12.561 16:38:39 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@835 -- # local max_retries=100 00:30:12.561 16:38:39 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:12.561 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:12.561 16:38:39 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@839 -- # xtrace_disable 00:30:12.561 16:38:39 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:12.562 [2024-06-07 16:38:39.265031] Starting SPDK v24.09-pre git sha1 5a57befde / DPDK 24.03.0 initialization... 00:30:12.562 [2024-06-07 16:38:39.265098] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:12.562 EAL: No free 2048 kB hugepages reported on node 1 00:30:12.562 [2024-06-07 16:38:39.354533] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:30:12.823 [2024-06-07 16:38:39.450588] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:12.823 [2024-06-07 16:38:39.450648] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:12.823 [2024-06-07 16:38:39.450656] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:12.823 [2024-06-07 16:38:39.450663] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:12.823 [2024-06-07 16:38:39.450669] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:30:12.823 [2024-06-07 16:38:39.450807] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 2 00:30:12.823 [2024-06-07 16:38:39.451347] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 3 00:30:12.823 [2024-06-07 16:38:39.451349] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:30:13.396 16:38:40 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:30:13.396 16:38:40 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@863 -- # return 0 00:30:13.396 16:38:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:30:13.396 16:38:40 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@729 -- # xtrace_disable 00:30:13.396 16:38:40 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:13.396 16:38:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:13.396 16:38:40 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:13.396 16:38:40 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:13.396 16:38:40 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:13.396 [2024-06-07 16:38:40.097171] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:13.396 16:38:40 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:13.396 16:38:40 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:30:13.396 16:38:40 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:13.396 16:38:40 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:13.396 Malloc0 00:30:13.396 16:38:40 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:13.396 16:38:40 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:13.396 16:38:40 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:13.396 16:38:40 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:13.396 16:38:40 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:13.396 16:38:40 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:13.396 16:38:40 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:13.396 16:38:40 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:13.396 16:38:40 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:13.396 16:38:40 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:13.396 16:38:40 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:13.396 16:38:40 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:13.396 [2024-06-07 16:38:40.161638] tcp.c: 982:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:13.396 16:38:40 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:13.396 16:38:40 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:30:13.396 16:38:40 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:30:13.396 16:38:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:30:13.396 16:38:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:30:13.396 16:38:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:13.396 16:38:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 
00:30:13.396 { 00:30:13.396 "params": { 00:30:13.396 "name": "Nvme$subsystem", 00:30:13.396 "trtype": "$TEST_TRANSPORT", 00:30:13.396 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:13.396 "adrfam": "ipv4", 00:30:13.396 "trsvcid": "$NVMF_PORT", 00:30:13.396 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:13.396 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:13.396 "hdgst": ${hdgst:-false}, 00:30:13.396 "ddgst": ${ddgst:-false} 00:30:13.396 }, 00:30:13.396 "method": "bdev_nvme_attach_controller" 00:30:13.396 } 00:30:13.396 EOF 00:30:13.396 )") 00:30:13.396 16:38:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:30:13.396 16:38:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:30:13.396 16:38:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:30:13.396 16:38:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:30:13.396 "params": { 00:30:13.396 "name": "Nvme1", 00:30:13.396 "trtype": "tcp", 00:30:13.396 "traddr": "10.0.0.2", 00:30:13.396 "adrfam": "ipv4", 00:30:13.396 "trsvcid": "4420", 00:30:13.396 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:13.396 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:13.396 "hdgst": false, 00:30:13.396 "ddgst": false 00:30:13.396 }, 00:30:13.396 "method": "bdev_nvme_attach_controller" 00:30:13.396 }' 00:30:13.396 [2024-06-07 16:38:40.217945] Starting SPDK v24.09-pre git sha1 5a57befde / DPDK 24.03.0 initialization... 
00:30:13.396 [2024-06-07 16:38:40.218031] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3292634 ] 00:30:13.396 EAL: No free 2048 kB hugepages reported on node 1 00:30:13.657 [2024-06-07 16:38:40.281575] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:13.657 [2024-06-07 16:38:40.345958] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:30:13.918 Running I/O for 1 seconds... 00:30:14.861 00:30:14.861 Latency(us) 00:30:14.861 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:14.861 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:30:14.861 Verification LBA range: start 0x0 length 0x4000 00:30:14.861 Nvme1n1 : 1.01 8846.93 34.56 0.00 0.00 14395.63 2430.29 15073.28 00:30:14.861 =================================================================================================================== 00:30:14.861 Total : 8846.93 34.56 0.00 0.00 14395.63 2430.29 15073.28 00:30:14.861 16:38:41 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=3292971 00:30:14.861 16:38:41 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:30:14.861 16:38:41 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:30:14.861 16:38:41 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:30:14.861 16:38:41 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:30:14.861 16:38:41 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:30:14.861 16:38:41 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:14.861 16:38:41 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:14.861 { 
00:30:14.861 "params": { 00:30:14.861 "name": "Nvme$subsystem", 00:30:14.861 "trtype": "$TEST_TRANSPORT", 00:30:14.861 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:14.861 "adrfam": "ipv4", 00:30:14.861 "trsvcid": "$NVMF_PORT", 00:30:14.861 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:14.861 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:14.861 "hdgst": ${hdgst:-false}, 00:30:14.861 "ddgst": ${ddgst:-false} 00:30:14.861 }, 00:30:14.861 "method": "bdev_nvme_attach_controller" 00:30:14.861 } 00:30:14.861 EOF 00:30:14.861 )") 00:30:14.861 16:38:41 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:30:14.861 16:38:41 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:30:14.861 16:38:41 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:30:14.861 16:38:41 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:30:14.861 "params": { 00:30:14.861 "name": "Nvme1", 00:30:14.861 "trtype": "tcp", 00:30:14.861 "traddr": "10.0.0.2", 00:30:14.861 "adrfam": "ipv4", 00:30:14.861 "trsvcid": "4420", 00:30:14.861 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:14.861 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:14.861 "hdgst": false, 00:30:14.861 "ddgst": false 00:30:14.861 }, 00:30:14.861 "method": "bdev_nvme_attach_controller" 00:30:14.861 }' 00:30:15.122 [2024-06-07 16:38:41.725004] Starting SPDK v24.09-pre git sha1 5a57befde / DPDK 24.03.0 initialization... 
00:30:15.122 [2024-06-07 16:38:41.725060] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3292971 ] 00:30:15.122 EAL: No free 2048 kB hugepages reported on node 1 00:30:15.122 [2024-06-07 16:38:41.783940] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:15.122 [2024-06-07 16:38:41.847851] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:30:15.382 Running I/O for 15 seconds... 00:30:17.929 16:38:44 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 3292406 00:30:17.929 16:38:44 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:30:17.929 [2024-06-07 16:38:44.691485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:93648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:17.929 [2024-06-07 16:38:44.691526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:17.929 [2024-06-07 16:38:44.691548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:93656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:17.929 [2024-06-07 16:38:44.691563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:17.929 [2024-06-07 16:38:44.691575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:93664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:17.929 [2024-06-07 16:38:44.691584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:17.929 [2024-06-07 16:38:44.691595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:93848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:30:17.929 [2024-06-07 16:38:44.691604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:17.929 [2024-06-07 16:38:44.691613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:93856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:17.929 [2024-06-07 16:38:44.691620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:17.929 [2024-06-07 16:38:44.691630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:93864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:17.929 [2024-06-07 16:38:44.691637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:17.929 [2024-06-07 16:38:44.691648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:93872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:17.929 [2024-06-07 16:38:44.691655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:17.929 [2024-06-07 16:38:44.691665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:93880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:17.930 [2024-06-07 16:38:44.691673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:17.930 [2024-06-07 16:38:44.691683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:93888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:17.930 [2024-06-07 16:38:44.691693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:17.930 [2024-06-07 16:38:44.691703] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:93896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:17.930 [2024-06-07 16:38:44.691712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:17.930 [2024-06-07 16:38:44.691725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:93904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:17.930 [2024-06-07 16:38:44.691732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:17.930 [2024-06-07 16:38:44.691742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:93912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:17.930 [2024-06-07 16:38:44.691752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:17.930 [2024-06-07 16:38:44.691764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:93920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:17.930 [2024-06-07 16:38:44.691772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:17.930 [2024-06-07 16:38:44.691783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:93928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:17.930 [2024-06-07 16:38:44.691793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:17.930 [2024-06-07 16:38:44.691804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:93936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:17.930 [2024-06-07 16:38:44.691813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:17.930 [2024-06-07 16:38:44.691822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:93944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:17.930 [2024-06-07 16:38:44.691829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:17.930 [2024-06-07 16:38:44.691838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:93952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:17.930 [2024-06-07 16:38:44.691845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:17.930 [2024-06-07 16:38:44.691854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:93960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:17.930 [2024-06-07 16:38:44.691861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:17.930 [2024-06-07 16:38:44.691870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:93968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:17.930 [2024-06-07 16:38:44.691877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:17.930 [2024-06-07 16:38:44.691886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:93976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:17.930 [2024-06-07 16:38:44.691893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:17.930 [2024-06-07 16:38:44.691902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:93984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:17.930 
[2024-06-07 16:38:44.691909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:17.930 [2024-06-07 16:38:44.691919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:93992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:17.930 [2024-06-07 16:38:44.691926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:17.930 [2024-06-07 16:38:44.691935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:94000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:17.930 [2024-06-07 16:38:44.691942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:17.930 [2024-06-07 16:38:44.691951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:94008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:17.930 [2024-06-07 16:38:44.691958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:17.930 [2024-06-07 16:38:44.691968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:94016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:17.930 [2024-06-07 16:38:44.691975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:17.930 [2024-06-07 16:38:44.691984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:94024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:17.930 [2024-06-07 16:38:44.691992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:17.930 [2024-06-07 16:38:44.692001] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:94032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:17.930 [2024-06-07 16:38:44.692009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:17.930 [2024-06-07 16:38:44.692018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:94040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:17.930 [2024-06-07 16:38:44.692025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:17.930 [2024-06-07 16:38:44.692034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:94048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:17.930 [2024-06-07 16:38:44.692041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:17.930 [2024-06-07 16:38:44.692049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:94056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:17.930 [2024-06-07 16:38:44.692056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:17.930 [2024-06-07 16:38:44.692065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:94064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:17.930 [2024-06-07 16:38:44.692072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:17.930 [2024-06-07 16:38:44.692081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:94072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:17.930 [2024-06-07 16:38:44.692088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:30:17.930 [2024-06-07 16:38:44.692097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:94080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:17.930 [2024-06-07 16:38:44.692104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:17.930 [2024-06-07 16:38:44.692113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:94088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:17.930 [2024-06-07 16:38:44.692120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:17.930 [2024-06-07 16:38:44.692129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:94096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:17.930 [2024-06-07 16:38:44.692136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:17.930 [2024-06-07 16:38:44.692145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:94104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:17.930 [2024-06-07 16:38:44.692152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:17.930 [2024-06-07 16:38:44.692161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:94112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:17.930 [2024-06-07 16:38:44.692167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:17.930 [2024-06-07 16:38:44.692176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:94120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:17.930 [2024-06-07 16:38:44.692183] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:17.930 [2024-06-07 16:38:44.692192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:94128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:17.930 [2024-06-07 16:38:44.692199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:17.930 [2024-06-07 16:38:44.692208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:94136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:17.930 [2024-06-07 16:38:44.692216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:17.930 [2024-06-07 16:38:44.692225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:94144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:17.930 [2024-06-07 16:38:44.692232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:17.930 [2024-06-07 16:38:44.692241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:94152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:17.930 [2024-06-07 16:38:44.692248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:17.930 [2024-06-07 16:38:44.692257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:94160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:17.930 [2024-06-07 16:38:44.692264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:17.930 [2024-06-07 16:38:44.692273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 
nsid:1 lba:94168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:17.930 [2024-06-07 16:38:44.692280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:17.930 [2024-06-07 16:38:44.692289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:94176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:17.930 [2024-06-07 16:38:44.692296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:17.930 [2024-06-07 16:38:44.692305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:94184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:17.930 [2024-06-07 16:38:44.692312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:17.930 [2024-06-07 16:38:44.692321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:94192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:17.930 [2024-06-07 16:38:44.692327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:17.931 [2024-06-07 16:38:44.692336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:94200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:17.931 [2024-06-07 16:38:44.692343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:17.931 [2024-06-07 16:38:44.692352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:94208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:17.931 [2024-06-07 16:38:44.692359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:17.931 
[2024-06-07 16:38:44.692368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:94216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:17.931 [2024-06-07 16:38:44.692375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:17.931 [2024-06-07 16:38:44.692384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:94224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:17.931 [2024-06-07 16:38:44.692391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:17.931 [2024-06-07 16:38:44.692400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:94232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:17.931 [2024-06-07 16:38:44.692507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:17.931 [2024-06-07 16:38:44.692519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:94240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:17.931 [2024-06-07 16:38:44.692526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:17.931 [2024-06-07 16:38:44.692535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:94248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:17.931 [2024-06-07 16:38:44.692542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:17.931 [2024-06-07 16:38:44.692551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:94256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:17.931 [2024-06-07 16:38:44.692557] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:17.931 [2024-06-07 16:38:44.692566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:94264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:17.931 [2024-06-07 16:38:44.692573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:17.931 [2024-06-07 16:38:44.692583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:94272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:17.931 [2024-06-07 16:38:44.692590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:17.931 [2024-06-07 16:38:44.692599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:94280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:17.931 [2024-06-07 16:38:44.692606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:17.931 [2024-06-07 16:38:44.692615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:94288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:17.931 [2024-06-07 16:38:44.692622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:17.931 [2024-06-07 16:38:44.692631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:94296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:17.931 [2024-06-07 16:38:44.692637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:17.931 [2024-06-07 16:38:44.692646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 
lba:94304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:17.931 [2024-06-07 16:38:44.692653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:17.931 [2024-06-07 16:38:44.692662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:94312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:17.931 [2024-06-07 16:38:44.692669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:17.931 [2024-06-07 16:38:44.692678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:94320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:17.931 [2024-06-07 16:38:44.692685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:17.931 [2024-06-07 16:38:44.692694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:94328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:17.931 [2024-06-07 16:38:44.692701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:17.931 [2024-06-07 16:38:44.692710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:94336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:17.931 [2024-06-07 16:38:44.692718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:17.931 [2024-06-07 16:38:44.692728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:94344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:17.931 [2024-06-07 16:38:44.692736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:17.931 
[2024-06-07 16:38:44.692744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:94352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:17.931 [2024-06-07 16:38:44.692751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:17.931 [2024-06-07 16:38:44.692761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:94360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:17.931 [2024-06-07 16:38:44.692767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:17.931 [2024-06-07 16:38:44.692777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:94368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:17.931 [2024-06-07 16:38:44.692784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:17.931 [2024-06-07 16:38:44.692793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:94376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:17.931 [2024-06-07 16:38:44.692800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:17.931 [2024-06-07 16:38:44.692809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:94384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:17.931 [2024-06-07 16:38:44.692815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:17.931 [2024-06-07 16:38:44.692824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:94392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:17.931 [2024-06-07 16:38:44.692832] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:17.931 [2024-06-07 16:38:44.692841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:94400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:17.931 [2024-06-07 16:38:44.692848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:17.931 [2024-06-07 16:38:44.692857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:94408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:17.931 [2024-06-07 16:38:44.692864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:17.931 [2024-06-07 16:38:44.692872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:94416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:17.931 [2024-06-07 16:38:44.692879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:17.931 [2024-06-07 16:38:44.692889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:94424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:17.931 [2024-06-07 16:38:44.692896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:17.931 [2024-06-07 16:38:44.692904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:94432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:17.931 [2024-06-07 16:38:44.692911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:17.931 [2024-06-07 16:38:44.692924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:94440 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:17.931 [2024-06-07 16:38:44.692931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:17.931 [2024-06-07 16:38:44.692940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:94448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:17.931 [2024-06-07 16:38:44.692947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:17.931 [2024-06-07 16:38:44.692956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:94456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:17.931 [2024-06-07 16:38:44.692963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:17.931 [2024-06-07 16:38:44.692972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:94464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:17.932 [2024-06-07 16:38:44.692979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:17.932 [2024-06-07 16:38:44.692988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:94472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:17.932 [2024-06-07 16:38:44.692995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:17.932 [2024-06-07 16:38:44.693004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:94480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:17.932 [2024-06-07 16:38:44.693010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:17.932 [2024-06-07 
16:38:44.693020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:94488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:17.932 [2024-06-07 16:38:44.693027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:17.932 [2024-06-07 16:38:44.693036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:94496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:17.932 [2024-06-07 16:38:44.693043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:17.932 [2024-06-07 16:38:44.693052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:94504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:17.932 [2024-06-07 16:38:44.693059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:17.932 [2024-06-07 16:38:44.693068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:94512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:17.932 [2024-06-07 16:38:44.693075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:17.932 [2024-06-07 16:38:44.693084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:94520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:17.932 [2024-06-07 16:38:44.693091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:17.932 [2024-06-07 16:38:44.693100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:94528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:17.932 [2024-06-07 16:38:44.693107] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:17.932 [2024-06-07 16:38:44.693115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:94536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:17.932 [2024-06-07 16:38:44.693122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:17.932 [2024-06-07 16:38:44.693133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:94544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:17.932 [2024-06-07 16:38:44.693139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:17.932 [2024-06-07 16:38:44.693149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:94552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:17.932 [2024-06-07 16:38:44.693156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:17.932 [2024-06-07 16:38:44.693164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:94560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:17.932 [2024-06-07 16:38:44.693171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:17.932 [2024-06-07 16:38:44.693181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:94568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:17.932 [2024-06-07 16:38:44.693188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:17.932 [2024-06-07 16:38:44.693197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:94576 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:30:17.932 [2024-06-07 16:38:44.693204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:17.932 [2024-06-07 16:38:44.693213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:94584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:17.932 [2024-06-07 16:38:44.693220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:17.932 [2024-06-07 16:38:44.693228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:94592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:17.932 [2024-06-07 16:38:44.693235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:17.932 [2024-06-07 16:38:44.693245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:94600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:17.932 [2024-06-07 16:38:44.693252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:17.932 [2024-06-07 16:38:44.693261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:94608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:17.932 [2024-06-07 16:38:44.693268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:17.932 [2024-06-07 16:38:44.693277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:94616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:17.932 [2024-06-07 16:38:44.693284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:17.932 [2024-06-07 16:38:44.693293] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:94624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:17.932 [2024-06-07 16:38:44.693300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:17.932 [2024-06-07 16:38:44.693309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:94632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:17.932 [2024-06-07 16:38:44.693316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:17.932 [2024-06-07 16:38:44.693325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:94640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:17.932 [2024-06-07 16:38:44.693333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:17.932 [2024-06-07 16:38:44.693342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:94648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:17.932 [2024-06-07 16:38:44.693349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:17.932 [2024-06-07 16:38:44.693358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:94656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:17.932 [2024-06-07 16:38:44.693365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:17.932 [2024-06-07 16:38:44.693374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:94664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:17.932 [2024-06-07 16:38:44.693381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:17.932 [2024-06-07 16:38:44.693390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:93672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:17.932 [2024-06-07 16:38:44.693396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:17.932 [2024-06-07 16:38:44.693410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:93680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:17.932 [2024-06-07 16:38:44.693417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:17.932 [2024-06-07 16:38:44.693427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:93688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:17.932 [2024-06-07 16:38:44.693433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:17.932 [2024-06-07 16:38:44.693442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:93696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:17.932 [2024-06-07 16:38:44.693449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:17.932 [2024-06-07 16:38:44.693458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:93704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:17.932 [2024-06-07 16:38:44.693465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:17.932 [2024-06-07 16:38:44.693474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:93712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:17.932 
[2024-06-07 16:38:44.693481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:17.932 [2024-06-07 16:38:44.693490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:93720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:17.932 [2024-06-07 16:38:44.693497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:17.932 [2024-06-07 16:38:44.693506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:93728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:17.932 [2024-06-07 16:38:44.693513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:17.932 [2024-06-07 16:38:44.693522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:93736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:17.932 [2024-06-07 16:38:44.693529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:17.932 [2024-06-07 16:38:44.693540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:93744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:17.932 [2024-06-07 16:38:44.693547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:17.932 [2024-06-07 16:38:44.693556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:93752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:17.932 [2024-06-07 16:38:44.693563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:17.932 [2024-06-07 16:38:44.693572] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:93760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:17.932 [2024-06-07 16:38:44.693579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:17.932 [2024-06-07 16:38:44.693588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:93768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:17.933 [2024-06-07 16:38:44.693595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:17.933 [2024-06-07 16:38:44.693604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:93776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:17.933 [2024-06-07 16:38:44.693611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:17.933 [2024-06-07 16:38:44.693620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:93784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:17.933 [2024-06-07 16:38:44.693627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:17.933 [2024-06-07 16:38:44.693636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:93792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:17.933 [2024-06-07 16:38:44.693643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:17.933 [2024-06-07 16:38:44.693652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:93800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:17.933 [2024-06-07 16:38:44.693659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:17.933 [2024-06-07 16:38:44.693668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:93808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:17.933 [2024-06-07 16:38:44.693675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:17.933 [2024-06-07 16:38:44.693684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:93816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:17.933 [2024-06-07 16:38:44.693691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:17.933 [2024-06-07 16:38:44.693700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:93824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:17.933 [2024-06-07 16:38:44.693707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:17.933 [2024-06-07 16:38:44.693716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:93832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:17.933 [2024-06-07 16:38:44.693722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:17.933 [2024-06-07 16:38:44.693731] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xafc340 is same with the state(5) to be set 00:30:17.933 [2024-06-07 16:38:44.693741] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:17.933 [2024-06-07 16:38:44.693747] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:17.933 [2024-06-07 16:38:44.693753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 
nsid:1 lba:93840 len:8 PRP1 0x0 PRP2 0x0
00:30:17.933 [2024-06-07 16:38:44.693761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:17.933 [2024-06-07 16:38:44.693799] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xafc340 was disconnected and freed. reset controller.
00:30:17.933 [2024-06-07 16:38:44.697298] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:17.933 [2024-06-07 16:38:44.697344] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a2840 (9): Bad file descriptor
00:30:17.933 [2024-06-07 16:38:44.698129] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.933 [2024-06-07 16:38:44.698145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2840 with addr=10.0.0.2, port=4420
00:30:17.933 [2024-06-07 16:38:44.698154] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a2840 is same with the state(5) to be set
00:30:17.933 [2024-06-07 16:38:44.698374] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a2840 (9): Bad file descriptor
00:30:17.933 [2024-06-07 16:38:44.698600] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:17.933 [2024-06-07 16:38:44.698609] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:17.933 [2024-06-07 16:38:44.698616] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:17.933 [2024-06-07 16:38:44.702169] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:17.933 [2024-06-07 16:38:44.711388] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:17.933 [2024-06-07 16:38:44.712102] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.933 [2024-06-07 16:38:44.712141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2840 with addr=10.0.0.2, port=4420
00:30:17.933 [2024-06-07 16:38:44.712152] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a2840 is same with the state(5) to be set
00:30:17.933 [2024-06-07 16:38:44.712393] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a2840 (9): Bad file descriptor
00:30:17.933 [2024-06-07 16:38:44.712623] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:17.933 [2024-06-07 16:38:44.712632] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:17.933 [2024-06-07 16:38:44.712640] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:17.933 [2024-06-07 16:38:44.716193] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:17.933 [2024-06-07 16:38:44.725190] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:17.933 [2024-06-07 16:38:44.725883] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.933 [2024-06-07 16:38:44.725920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2840 with addr=10.0.0.2, port=4420
00:30:17.933 [2024-06-07 16:38:44.725931] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a2840 is same with the state(5) to be set
00:30:17.933 [2024-06-07 16:38:44.726171] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a2840 (9): Bad file descriptor
00:30:17.933 [2024-06-07 16:38:44.726394] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:17.933 [2024-06-07 16:38:44.726411] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:17.933 [2024-06-07 16:38:44.726423] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:17.933 [2024-06-07 16:38:44.729983] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:17.933 [2024-06-07 16:38:44.739187] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:17.933 [2024-06-07 16:38:44.739839] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.933 [2024-06-07 16:38:44.739857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2840 with addr=10.0.0.2, port=4420
00:30:17.933 [2024-06-07 16:38:44.739865] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a2840 is same with the state(5) to be set
00:30:17.933 [2024-06-07 16:38:44.740085] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a2840 (9): Bad file descriptor
00:30:17.933 [2024-06-07 16:38:44.740304] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:17.933 [2024-06-07 16:38:44.740312] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:17.933 [2024-06-07 16:38:44.740319] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:17.933 [2024-06-07 16:38:44.743867] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:17.933 [2024-06-07 16:38:44.753063] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:17.933 [2024-06-07 16:38:44.753663] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.933 [2024-06-07 16:38:44.753679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2840 with addr=10.0.0.2, port=4420
00:30:17.933 [2024-06-07 16:38:44.753687] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a2840 is same with the state(5) to be set
00:30:17.933 [2024-06-07 16:38:44.753906] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a2840 (9): Bad file descriptor
00:30:17.933 [2024-06-07 16:38:44.754126] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:17.933 [2024-06-07 16:38:44.754133] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:17.933 [2024-06-07 16:38:44.754140] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:17.933 [2024-06-07 16:38:44.757687] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:17.933 [2024-06-07 16:38:44.766885] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:17.933 [2024-06-07 16:38:44.767518] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.933 [2024-06-07 16:38:44.767556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2840 with addr=10.0.0.2, port=4420 00:30:17.933 [2024-06-07 16:38:44.767568] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a2840 is same with the state(5) to be set 00:30:17.933 [2024-06-07 16:38:44.767808] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a2840 (9): Bad file descriptor 00:30:17.933 [2024-06-07 16:38:44.768031] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:17.933 [2024-06-07 16:38:44.768040] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:17.933 [2024-06-07 16:38:44.768047] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:17.933 [2024-06-07 16:38:44.771605] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:18.196 [2024-06-07 16:38:44.780807] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:18.196 [2024-06-07 16:38:44.781505] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.196 [2024-06-07 16:38:44.781542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2840 with addr=10.0.0.2, port=4420 00:30:18.196 [2024-06-07 16:38:44.781554] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a2840 is same with the state(5) to be set 00:30:18.196 [2024-06-07 16:38:44.781797] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a2840 (9): Bad file descriptor 00:30:18.196 [2024-06-07 16:38:44.782020] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:18.196 [2024-06-07 16:38:44.782029] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:18.196 [2024-06-07 16:38:44.782037] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:18.196 [2024-06-07 16:38:44.785595] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:18.196 [2024-06-07 16:38:44.794799] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:18.196 [2024-06-07 16:38:44.795422] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.196 [2024-06-07 16:38:44.795441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2840 with addr=10.0.0.2, port=4420 00:30:18.196 [2024-06-07 16:38:44.795448] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a2840 is same with the state(5) to be set 00:30:18.196 [2024-06-07 16:38:44.795668] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a2840 (9): Bad file descriptor 00:30:18.196 [2024-06-07 16:38:44.795887] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:18.196 [2024-06-07 16:38:44.795895] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:18.196 [2024-06-07 16:38:44.795902] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:18.196 [2024-06-07 16:38:44.799449] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:18.196 [2024-06-07 16:38:44.808643] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:18.196 [2024-06-07 16:38:44.809374] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.196 [2024-06-07 16:38:44.809419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2840 with addr=10.0.0.2, port=4420 00:30:18.196 [2024-06-07 16:38:44.809432] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a2840 is same with the state(5) to be set 00:30:18.196 [2024-06-07 16:38:44.809674] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a2840 (9): Bad file descriptor 00:30:18.196 [2024-06-07 16:38:44.809897] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:18.196 [2024-06-07 16:38:44.809906] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:18.196 [2024-06-07 16:38:44.809913] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:18.196 [2024-06-07 16:38:44.813465] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:18.196 [2024-06-07 16:38:44.822460] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:18.196 [2024-06-07 16:38:44.823059] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.196 [2024-06-07 16:38:44.823096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2840 with addr=10.0.0.2, port=4420 00:30:18.196 [2024-06-07 16:38:44.823108] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a2840 is same with the state(5) to be set 00:30:18.196 [2024-06-07 16:38:44.823353] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a2840 (9): Bad file descriptor 00:30:18.196 [2024-06-07 16:38:44.823585] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:18.196 [2024-06-07 16:38:44.823595] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:18.196 [2024-06-07 16:38:44.823602] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:18.196 [2024-06-07 16:38:44.827152] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:18.196 [2024-06-07 16:38:44.836364] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:18.196 [2024-06-07 16:38:44.837094] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.196 [2024-06-07 16:38:44.837132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2840 with addr=10.0.0.2, port=4420 00:30:18.196 [2024-06-07 16:38:44.837142] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a2840 is same with the state(5) to be set 00:30:18.196 [2024-06-07 16:38:44.837381] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a2840 (9): Bad file descriptor 00:30:18.196 [2024-06-07 16:38:44.837611] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:18.196 [2024-06-07 16:38:44.837621] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:18.196 [2024-06-07 16:38:44.837628] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:18.196 [2024-06-07 16:38:44.841180] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:18.196 [2024-06-07 16:38:44.850211] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:18.196 [2024-06-07 16:38:44.850832] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.197 [2024-06-07 16:38:44.850850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2840 with addr=10.0.0.2, port=4420 00:30:18.197 [2024-06-07 16:38:44.850858] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a2840 is same with the state(5) to be set 00:30:18.197 [2024-06-07 16:38:44.851078] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a2840 (9): Bad file descriptor 00:30:18.197 [2024-06-07 16:38:44.851296] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:18.197 [2024-06-07 16:38:44.851304] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:18.197 [2024-06-07 16:38:44.851311] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:18.197 [2024-06-07 16:38:44.854860] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:18.197 [2024-06-07 16:38:44.864060] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:18.197 [2024-06-07 16:38:44.864795] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.197 [2024-06-07 16:38:44.864832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2840 with addr=10.0.0.2, port=4420 00:30:18.197 [2024-06-07 16:38:44.864843] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a2840 is same with the state(5) to be set 00:30:18.197 [2024-06-07 16:38:44.865082] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a2840 (9): Bad file descriptor 00:30:18.197 [2024-06-07 16:38:44.865305] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:18.197 [2024-06-07 16:38:44.865314] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:18.197 [2024-06-07 16:38:44.865325] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:18.197 [2024-06-07 16:38:44.868885] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:18.197 [2024-06-07 16:38:44.877881] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:18.197 [2024-06-07 16:38:44.878680] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.197 [2024-06-07 16:38:44.878717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2840 with addr=10.0.0.2, port=4420 00:30:18.197 [2024-06-07 16:38:44.878728] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a2840 is same with the state(5) to be set 00:30:18.197 [2024-06-07 16:38:44.878967] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a2840 (9): Bad file descriptor 00:30:18.197 [2024-06-07 16:38:44.879190] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:18.197 [2024-06-07 16:38:44.879198] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:18.197 [2024-06-07 16:38:44.879206] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:18.197 [2024-06-07 16:38:44.882764] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:18.197 [2024-06-07 16:38:44.891764] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:18.197 [2024-06-07 16:38:44.892504] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.197 [2024-06-07 16:38:44.892542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2840 with addr=10.0.0.2, port=4420 00:30:18.197 [2024-06-07 16:38:44.892554] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a2840 is same with the state(5) to be set 00:30:18.197 [2024-06-07 16:38:44.892797] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a2840 (9): Bad file descriptor 00:30:18.197 [2024-06-07 16:38:44.893020] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:18.197 [2024-06-07 16:38:44.893029] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:18.197 [2024-06-07 16:38:44.893036] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:18.197 [2024-06-07 16:38:44.896594] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:18.197 [2024-06-07 16:38:44.905597] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:18.197 [2024-06-07 16:38:44.906353] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.197 [2024-06-07 16:38:44.906390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2840 with addr=10.0.0.2, port=4420 00:30:18.197 [2024-06-07 16:38:44.906410] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a2840 is same with the state(5) to be set 00:30:18.197 [2024-06-07 16:38:44.906653] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a2840 (9): Bad file descriptor 00:30:18.197 [2024-06-07 16:38:44.906876] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:18.197 [2024-06-07 16:38:44.906885] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:18.197 [2024-06-07 16:38:44.906892] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:18.197 [2024-06-07 16:38:44.910443] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:18.197 [2024-06-07 16:38:44.919467] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:18.197 [2024-06-07 16:38:44.920156] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.197 [2024-06-07 16:38:44.920198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2840 with addr=10.0.0.2, port=4420 00:30:18.197 [2024-06-07 16:38:44.920209] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a2840 is same with the state(5) to be set 00:30:18.197 [2024-06-07 16:38:44.920456] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a2840 (9): Bad file descriptor 00:30:18.197 [2024-06-07 16:38:44.920680] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:18.197 [2024-06-07 16:38:44.920688] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:18.197 [2024-06-07 16:38:44.920696] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:18.197 [2024-06-07 16:38:44.924245] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:18.197 [2024-06-07 16:38:44.933462] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:18.197 [2024-06-07 16:38:44.934053] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.197 [2024-06-07 16:38:44.934091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2840 with addr=10.0.0.2, port=4420 00:30:18.197 [2024-06-07 16:38:44.934102] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a2840 is same with the state(5) to be set 00:30:18.197 [2024-06-07 16:38:44.934341] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a2840 (9): Bad file descriptor 00:30:18.197 [2024-06-07 16:38:44.934571] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:18.197 [2024-06-07 16:38:44.934580] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:18.197 [2024-06-07 16:38:44.934588] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:18.197 [2024-06-07 16:38:44.938137] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:18.197 [2024-06-07 16:38:44.947348] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:18.197 [2024-06-07 16:38:44.948093] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.197 [2024-06-07 16:38:44.948131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2840 with addr=10.0.0.2, port=4420 00:30:18.197 [2024-06-07 16:38:44.948142] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a2840 is same with the state(5) to be set 00:30:18.197 [2024-06-07 16:38:44.948380] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a2840 (9): Bad file descriptor 00:30:18.197 [2024-06-07 16:38:44.948611] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:18.197 [2024-06-07 16:38:44.948620] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:18.197 [2024-06-07 16:38:44.948628] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:18.197 [2024-06-07 16:38:44.952176] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:18.197 [2024-06-07 16:38:44.961171] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:18.197 [2024-06-07 16:38:44.961802] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.197 [2024-06-07 16:38:44.961820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2840 with addr=10.0.0.2, port=4420 00:30:18.197 [2024-06-07 16:38:44.961828] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a2840 is same with the state(5) to be set 00:30:18.197 [2024-06-07 16:38:44.962047] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a2840 (9): Bad file descriptor 00:30:18.197 [2024-06-07 16:38:44.962271] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:18.197 [2024-06-07 16:38:44.962279] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:18.197 [2024-06-07 16:38:44.962286] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:18.197 [2024-06-07 16:38:44.965835] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:18.197 [2024-06-07 16:38:44.975030] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:18.197 [2024-06-07 16:38:44.975696] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.197 [2024-06-07 16:38:44.975733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2840 with addr=10.0.0.2, port=4420 00:30:18.197 [2024-06-07 16:38:44.975744] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a2840 is same with the state(5) to be set 00:30:18.197 [2024-06-07 16:38:44.975983] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a2840 (9): Bad file descriptor 00:30:18.197 [2024-06-07 16:38:44.976205] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:18.197 [2024-06-07 16:38:44.976213] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:18.197 [2024-06-07 16:38:44.976221] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:18.197 [2024-06-07 16:38:44.979776] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:18.197 [2024-06-07 16:38:44.988973] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:18.197 [2024-06-07 16:38:44.989692] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.197 [2024-06-07 16:38:44.989729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2840 with addr=10.0.0.2, port=4420 00:30:18.197 [2024-06-07 16:38:44.989740] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a2840 is same with the state(5) to be set 00:30:18.197 [2024-06-07 16:38:44.989979] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a2840 (9): Bad file descriptor 00:30:18.198 [2024-06-07 16:38:44.990203] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:18.198 [2024-06-07 16:38:44.990211] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:18.198 [2024-06-07 16:38:44.990218] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:18.198 [2024-06-07 16:38:44.993779] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:18.198 [2024-06-07 16:38:45.002776] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:18.198 [2024-06-07 16:38:45.003271] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.198 [2024-06-07 16:38:45.003293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2840 with addr=10.0.0.2, port=4420 00:30:18.198 [2024-06-07 16:38:45.003301] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a2840 is same with the state(5) to be set 00:30:18.198 [2024-06-07 16:38:45.003532] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a2840 (9): Bad file descriptor 00:30:18.198 [2024-06-07 16:38:45.003752] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:18.198 [2024-06-07 16:38:45.003760] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:18.198 [2024-06-07 16:38:45.003766] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:18.198 [2024-06-07 16:38:45.007311] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:18.198 [2024-06-07 16:38:45.016721] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:18.198 [2024-06-07 16:38:45.017446] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.198 [2024-06-07 16:38:45.017483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2840 with addr=10.0.0.2, port=4420 00:30:18.198 [2024-06-07 16:38:45.017494] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a2840 is same with the state(5) to be set 00:30:18.198 [2024-06-07 16:38:45.017733] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a2840 (9): Bad file descriptor 00:30:18.198 [2024-06-07 16:38:45.017956] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:18.198 [2024-06-07 16:38:45.017965] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:18.198 [2024-06-07 16:38:45.017972] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:18.198 [2024-06-07 16:38:45.021529] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:18.198 [2024-06-07 16:38:45.030534] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:18.198 [2024-06-07 16:38:45.031264] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.198 [2024-06-07 16:38:45.031300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2840 with addr=10.0.0.2, port=4420 00:30:18.198 [2024-06-07 16:38:45.031311] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a2840 is same with the state(5) to be set 00:30:18.198 [2024-06-07 16:38:45.031558] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a2840 (9): Bad file descriptor 00:30:18.198 [2024-06-07 16:38:45.031782] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:18.198 [2024-06-07 16:38:45.031790] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:18.198 [2024-06-07 16:38:45.031798] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:18.198 [2024-06-07 16:38:45.035346] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:18.198 [2024-06-07 16:38:45.044336] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:18.198 [2024-06-07 16:38:45.045064] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.198 [2024-06-07 16:38:45.045101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2840 with addr=10.0.0.2, port=4420 00:30:18.198 [2024-06-07 16:38:45.045112] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a2840 is same with the state(5) to be set 00:30:18.198 [2024-06-07 16:38:45.045351] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a2840 (9): Bad file descriptor 00:30:18.198 [2024-06-07 16:38:45.045583] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:18.198 [2024-06-07 16:38:45.045592] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:18.198 [2024-06-07 16:38:45.045600] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:18.459 [2024-06-07 16:38:45.049149] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:18.459 [2024-06-07 16:38:45.058138] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:18.459 [2024-06-07 16:38:45.058824] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.459 [2024-06-07 16:38:45.058862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2840 with addr=10.0.0.2, port=4420 00:30:18.459 [2024-06-07 16:38:45.058878] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a2840 is same with the state(5) to be set 00:30:18.459 [2024-06-07 16:38:45.059117] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a2840 (9): Bad file descriptor 00:30:18.459 [2024-06-07 16:38:45.059340] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:18.459 [2024-06-07 16:38:45.059348] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:18.459 [2024-06-07 16:38:45.059355] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:18.459 [2024-06-07 16:38:45.062912] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:18.459 [2024-06-07 16:38:45.072110] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:18.459 [2024-06-07 16:38:45.072799] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.459 [2024-06-07 16:38:45.072836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2840 with addr=10.0.0.2, port=4420 00:30:18.459 [2024-06-07 16:38:45.072847] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a2840 is same with the state(5) to be set 00:30:18.459 [2024-06-07 16:38:45.073085] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a2840 (9): Bad file descriptor 00:30:18.459 [2024-06-07 16:38:45.073309] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:18.459 [2024-06-07 16:38:45.073317] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:18.459 [2024-06-07 16:38:45.073325] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:18.459 [2024-06-07 16:38:45.076883] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:18.459 [2024-06-07 16:38:45.086087] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:18.459 [2024-06-07 16:38:45.086789] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.459 [2024-06-07 16:38:45.086826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2840 with addr=10.0.0.2, port=4420
00:30:18.460 [2024-06-07 16:38:45.086837] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a2840 is same with the state(5) to be set
00:30:18.460 [2024-06-07 16:38:45.087076] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a2840 (9): Bad file descriptor
00:30:18.460 [2024-06-07 16:38:45.087299] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:18.460 [2024-06-07 16:38:45.087307] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:18.460 [2024-06-07 16:38:45.087315] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:18.460 [2024-06-07 16:38:45.090871] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:18.460 [2024-06-07 16:38:45.100077] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:18.460 [2024-06-07 16:38:45.100751] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.460 [2024-06-07 16:38:45.100788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2840 with addr=10.0.0.2, port=4420
00:30:18.460 [2024-06-07 16:38:45.100799] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a2840 is same with the state(5) to be set
00:30:18.460 [2024-06-07 16:38:45.101038] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a2840 (9): Bad file descriptor
00:30:18.460 [2024-06-07 16:38:45.101260] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:18.460 [2024-06-07 16:38:45.101272] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:18.460 [2024-06-07 16:38:45.101280] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:18.460 [2024-06-07 16:38:45.104839] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:18.460 [2024-06-07 16:38:45.114050] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:18.460 [2024-06-07 16:38:45.114665] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.460 [2024-06-07 16:38:45.114683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2840 with addr=10.0.0.2, port=4420
00:30:18.460 [2024-06-07 16:38:45.114691] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a2840 is same with the state(5) to be set
00:30:18.460 [2024-06-07 16:38:45.114911] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a2840 (9): Bad file descriptor
00:30:18.460 [2024-06-07 16:38:45.115129] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:18.460 [2024-06-07 16:38:45.115136] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:18.460 [2024-06-07 16:38:45.115143] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:18.460 [2024-06-07 16:38:45.118690] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:18.460 [2024-06-07 16:38:45.127880] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:18.460 [2024-06-07 16:38:45.128619] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.460 [2024-06-07 16:38:45.128656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2840 with addr=10.0.0.2, port=4420
00:30:18.460 [2024-06-07 16:38:45.128668] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a2840 is same with the state(5) to be set
00:30:18.460 [2024-06-07 16:38:45.128908] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a2840 (9): Bad file descriptor
00:30:18.460 [2024-06-07 16:38:45.129131] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:18.460 [2024-06-07 16:38:45.129140] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:18.460 [2024-06-07 16:38:45.129147] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:18.460 [2024-06-07 16:38:45.132720] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:18.460 [2024-06-07 16:38:45.141709] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:18.460 [2024-06-07 16:38:45.142419] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.460 [2024-06-07 16:38:45.142456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2840 with addr=10.0.0.2, port=4420
00:30:18.460 [2024-06-07 16:38:45.142468] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a2840 is same with the state(5) to be set
00:30:18.460 [2024-06-07 16:38:45.142710] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a2840 (9): Bad file descriptor
00:30:18.460 [2024-06-07 16:38:45.142933] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:18.460 [2024-06-07 16:38:45.142942] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:18.460 [2024-06-07 16:38:45.142950] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:18.460 [2024-06-07 16:38:45.146506] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:18.460 [2024-06-07 16:38:45.155700] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:18.460 [2024-06-07 16:38:45.156443] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.460 [2024-06-07 16:38:45.156479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2840 with addr=10.0.0.2, port=4420
00:30:18.460 [2024-06-07 16:38:45.156490] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a2840 is same with the state(5) to be set
00:30:18.460 [2024-06-07 16:38:45.156729] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a2840 (9): Bad file descriptor
00:30:18.460 [2024-06-07 16:38:45.156952] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:18.460 [2024-06-07 16:38:45.156961] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:18.460 [2024-06-07 16:38:45.156968] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:18.460 [2024-06-07 16:38:45.160526] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:18.460 [2024-06-07 16:38:45.169560] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:18.460 [2024-06-07 16:38:45.170263] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.460 [2024-06-07 16:38:45.170300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2840 with addr=10.0.0.2, port=4420
00:30:18.460 [2024-06-07 16:38:45.170310] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a2840 is same with the state(5) to be set
00:30:18.460 [2024-06-07 16:38:45.170558] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a2840 (9): Bad file descriptor
00:30:18.460 [2024-06-07 16:38:45.170782] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:18.460 [2024-06-07 16:38:45.170790] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:18.460 [2024-06-07 16:38:45.170798] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:18.460 [2024-06-07 16:38:45.174346] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:18.460 [2024-06-07 16:38:45.183544] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:18.460 [2024-06-07 16:38:45.184228] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.460 [2024-06-07 16:38:45.184266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2840 with addr=10.0.0.2, port=4420
00:30:18.460 [2024-06-07 16:38:45.184278] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a2840 is same with the state(5) to be set
00:30:18.460 [2024-06-07 16:38:45.184529] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a2840 (9): Bad file descriptor
00:30:18.460 [2024-06-07 16:38:45.184753] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:18.460 [2024-06-07 16:38:45.184761] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:18.460 [2024-06-07 16:38:45.184768] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:18.460 [2024-06-07 16:38:45.188319] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:18.460 [2024-06-07 16:38:45.197527] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:18.460 [2024-06-07 16:38:45.198236] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.460 [2024-06-07 16:38:45.198274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2840 with addr=10.0.0.2, port=4420
00:30:18.460 [2024-06-07 16:38:45.198284] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a2840 is same with the state(5) to be set
00:30:18.460 [2024-06-07 16:38:45.198536] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a2840 (9): Bad file descriptor
00:30:18.460 [2024-06-07 16:38:45.198760] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:18.460 [2024-06-07 16:38:45.198768] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:18.460 [2024-06-07 16:38:45.198776] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:18.460 [2024-06-07 16:38:45.202329] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:18.460 [2024-06-07 16:38:45.211319] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:18.460 [2024-06-07 16:38:45.212044] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.460 [2024-06-07 16:38:45.212081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2840 with addr=10.0.0.2, port=4420
00:30:18.461 [2024-06-07 16:38:45.212091] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a2840 is same with the state(5) to be set
00:30:18.461 [2024-06-07 16:38:45.212331] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a2840 (9): Bad file descriptor
00:30:18.461 [2024-06-07 16:38:45.212560] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:18.461 [2024-06-07 16:38:45.212570] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:18.461 [2024-06-07 16:38:45.212578] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:18.461 [2024-06-07 16:38:45.216131] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:18.461 [2024-06-07 16:38:45.225123] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:18.461 [2024-06-07 16:38:45.225790] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.461 [2024-06-07 16:38:45.225826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2840 with addr=10.0.0.2, port=4420
00:30:18.461 [2024-06-07 16:38:45.225837] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a2840 is same with the state(5) to be set
00:30:18.461 [2024-06-07 16:38:45.226076] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a2840 (9): Bad file descriptor
00:30:18.461 [2024-06-07 16:38:45.226299] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:18.461 [2024-06-07 16:38:45.226307] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:18.461 [2024-06-07 16:38:45.226315] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:18.461 [2024-06-07 16:38:45.229883] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:18.461 [2024-06-07 16:38:45.239090] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:18.461 [2024-06-07 16:38:45.239779] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.461 [2024-06-07 16:38:45.239816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2840 with addr=10.0.0.2, port=4420
00:30:18.461 [2024-06-07 16:38:45.239827] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a2840 is same with the state(5) to be set
00:30:18.461 [2024-06-07 16:38:45.240066] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a2840 (9): Bad file descriptor
00:30:18.461 [2024-06-07 16:38:45.240290] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:18.461 [2024-06-07 16:38:45.240298] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:18.461 [2024-06-07 16:38:45.240309] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:18.461 [2024-06-07 16:38:45.243867] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:18.461 [2024-06-07 16:38:45.253073] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:18.461 [2024-06-07 16:38:45.253766] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.461 [2024-06-07 16:38:45.253803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2840 with addr=10.0.0.2, port=4420
00:30:18.461 [2024-06-07 16:38:45.253813] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a2840 is same with the state(5) to be set
00:30:18.461 [2024-06-07 16:38:45.254052] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a2840 (9): Bad file descriptor
00:30:18.461 [2024-06-07 16:38:45.254275] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:18.461 [2024-06-07 16:38:45.254284] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:18.461 [2024-06-07 16:38:45.254291] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:18.461 [2024-06-07 16:38:45.257851] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:18.461 [2024-06-07 16:38:45.267055] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:18.461 [2024-06-07 16:38:45.267675] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.461 [2024-06-07 16:38:45.267712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2840 with addr=10.0.0.2, port=4420
00:30:18.461 [2024-06-07 16:38:45.267723] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a2840 is same with the state(5) to be set
00:30:18.461 [2024-06-07 16:38:45.267962] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a2840 (9): Bad file descriptor
00:30:18.461 [2024-06-07 16:38:45.268185] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:18.461 [2024-06-07 16:38:45.268194] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:18.461 [2024-06-07 16:38:45.268201] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:18.461 [2024-06-07 16:38:45.271757] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:18.461 [2024-06-07 16:38:45.280956] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:18.461 [2024-06-07 16:38:45.281700] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.461 [2024-06-07 16:38:45.281737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2840 with addr=10.0.0.2, port=4420
00:30:18.461 [2024-06-07 16:38:45.281747] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a2840 is same with the state(5) to be set
00:30:18.461 [2024-06-07 16:38:45.281986] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a2840 (9): Bad file descriptor
00:30:18.461 [2024-06-07 16:38:45.282208] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:18.461 [2024-06-07 16:38:45.282217] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:18.461 [2024-06-07 16:38:45.282224] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:18.461 [2024-06-07 16:38:45.285782] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:18.461 [2024-06-07 16:38:45.294770] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:18.461 [2024-06-07 16:38:45.295413] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.461 [2024-06-07 16:38:45.295436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2840 with addr=10.0.0.2, port=4420
00:30:18.461 [2024-06-07 16:38:45.295444] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a2840 is same with the state(5) to be set
00:30:18.461 [2024-06-07 16:38:45.295664] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a2840 (9): Bad file descriptor
00:30:18.461 [2024-06-07 16:38:45.295882] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:18.461 [2024-06-07 16:38:45.295889] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:18.461 [2024-06-07 16:38:45.295896] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:18.461 [2024-06-07 16:38:45.299442] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:18.461 [2024-06-07 16:38:45.308637] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:18.461 [2024-06-07 16:38:45.309112] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.461 [2024-06-07 16:38:45.309129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2840 with addr=10.0.0.2, port=4420
00:30:18.461 [2024-06-07 16:38:45.309137] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a2840 is same with the state(5) to be set
00:30:18.461 [2024-06-07 16:38:45.309356] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a2840 (9): Bad file descriptor
00:30:18.461 [2024-06-07 16:38:45.309582] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:18.461 [2024-06-07 16:38:45.309590] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:18.461 [2024-06-07 16:38:45.309597] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:18.723 [2024-06-07 16:38:45.313139] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:18.723 [2024-06-07 16:38:45.322553] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:18.723 [2024-06-07 16:38:45.323192] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.723 [2024-06-07 16:38:45.323229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2840 with addr=10.0.0.2, port=4420
00:30:18.723 [2024-06-07 16:38:45.323239] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a2840 is same with the state(5) to be set
00:30:18.723 [2024-06-07 16:38:45.323487] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a2840 (9): Bad file descriptor
00:30:18.723 [2024-06-07 16:38:45.323712] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:18.723 [2024-06-07 16:38:45.323720] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:18.723 [2024-06-07 16:38:45.323728] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:18.723 [2024-06-07 16:38:45.327276] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:18.723 [2024-06-07 16:38:45.336493] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:18.723 [2024-06-07 16:38:45.337195] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.723 [2024-06-07 16:38:45.337232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2840 with addr=10.0.0.2, port=4420
00:30:18.723 [2024-06-07 16:38:45.337243] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a2840 is same with the state(5) to be set
00:30:18.723 [2024-06-07 16:38:45.337490] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a2840 (9): Bad file descriptor
00:30:18.723 [2024-06-07 16:38:45.337720] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:18.723 [2024-06-07 16:38:45.337729] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:18.723 [2024-06-07 16:38:45.337736] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:18.723 [2024-06-07 16:38:45.341283] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:18.723 [2024-06-07 16:38:45.350492] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:18.723 [2024-06-07 16:38:45.351187] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.724 [2024-06-07 16:38:45.351224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2840 with addr=10.0.0.2, port=4420
00:30:18.724 [2024-06-07 16:38:45.351234] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a2840 is same with the state(5) to be set
00:30:18.724 [2024-06-07 16:38:45.351480] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a2840 (9): Bad file descriptor
00:30:18.724 [2024-06-07 16:38:45.351704] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:18.724 [2024-06-07 16:38:45.351713] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:18.724 [2024-06-07 16:38:45.351722] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:18.724 [2024-06-07 16:38:45.355273] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:18.724 [2024-06-07 16:38:45.364481] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:18.724 [2024-06-07 16:38:45.365040] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.724 [2024-06-07 16:38:45.365058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2840 with addr=10.0.0.2, port=4420
00:30:18.724 [2024-06-07 16:38:45.365066] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a2840 is same with the state(5) to be set
00:30:18.724 [2024-06-07 16:38:45.365285] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a2840 (9): Bad file descriptor
00:30:18.724 [2024-06-07 16:38:45.365509] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:18.724 [2024-06-07 16:38:45.365517] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:18.724 [2024-06-07 16:38:45.365524] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:18.724 [2024-06-07 16:38:45.369068] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:18.724 [2024-06-07 16:38:45.378470] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:18.724 [2024-06-07 16:38:45.379166] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.724 [2024-06-07 16:38:45.379204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2840 with addr=10.0.0.2, port=4420
00:30:18.724 [2024-06-07 16:38:45.379214] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a2840 is same with the state(5) to be set
00:30:18.724 [2024-06-07 16:38:45.379461] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a2840 (9): Bad file descriptor
00:30:18.724 [2024-06-07 16:38:45.379685] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:18.724 [2024-06-07 16:38:45.379694] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:18.724 [2024-06-07 16:38:45.379701] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:18.724 [2024-06-07 16:38:45.383252] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:18.724 [2024-06-07 16:38:45.392459] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:18.724 [2024-06-07 16:38:45.393156] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.724 [2024-06-07 16:38:45.393194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2840 with addr=10.0.0.2, port=4420
00:30:18.724 [2024-06-07 16:38:45.393204] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a2840 is same with the state(5) to be set
00:30:18.724 [2024-06-07 16:38:45.393450] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a2840 (9): Bad file descriptor
00:30:18.724 [2024-06-07 16:38:45.393674] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:18.724 [2024-06-07 16:38:45.393683] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:18.724 [2024-06-07 16:38:45.393690] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:18.724 [2024-06-07 16:38:45.397239] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:18.724 [2024-06-07 16:38:45.406436] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:18.724 [2024-06-07 16:38:45.406973] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.724 [2024-06-07 16:38:45.406991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2840 with addr=10.0.0.2, port=4420
00:30:18.724 [2024-06-07 16:38:45.406999] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a2840 is same with the state(5) to be set
00:30:18.724 [2024-06-07 16:38:45.407218] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a2840 (9): Bad file descriptor
00:30:18.724 [2024-06-07 16:38:45.407444] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:18.724 [2024-06-07 16:38:45.407453] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:18.724 [2024-06-07 16:38:45.407460] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:18.724 [2024-06-07 16:38:45.411008] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:18.724 [2024-06-07 16:38:45.420414] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:18.724 [2024-06-07 16:38:45.421012] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.724 [2024-06-07 16:38:45.421027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2840 with addr=10.0.0.2, port=4420
00:30:18.724 [2024-06-07 16:38:45.421035] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a2840 is same with the state(5) to be set
00:30:18.724 [2024-06-07 16:38:45.421254] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a2840 (9): Bad file descriptor
00:30:18.724 [2024-06-07 16:38:45.421477] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:18.724 [2024-06-07 16:38:45.421485] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:18.724 [2024-06-07 16:38:45.421492] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:18.724 [2024-06-07 16:38:45.425032] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:18.724 [2024-06-07 16:38:45.434229] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:18.724 [2024-06-07 16:38:45.434821] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.724 [2024-06-07 16:38:45.434837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2840 with addr=10.0.0.2, port=4420
00:30:18.724 [2024-06-07 16:38:45.434849] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a2840 is same with the state(5) to be set
00:30:18.724 [2024-06-07 16:38:45.435068] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a2840 (9): Bad file descriptor
00:30:18.724 [2024-06-07 16:38:45.435286] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:18.724 [2024-06-07 16:38:45.435294] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:18.724 [2024-06-07 16:38:45.435300] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:18.724 [2024-06-07 16:38:45.438843] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:18.724 [2024-06-07 16:38:45.448034] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:18.724 [2024-06-07 16:38:45.448669] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.724 [2024-06-07 16:38:45.448685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2840 with addr=10.0.0.2, port=4420
00:30:18.724 [2024-06-07 16:38:45.448693] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a2840 is same with the state(5) to be set
00:30:18.724 [2024-06-07 16:38:45.448912] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a2840 (9): Bad file descriptor
00:30:18.724 [2024-06-07 16:38:45.449131] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:18.724 [2024-06-07 16:38:45.449139] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:18.724 [2024-06-07 16:38:45.449145] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:18.724 [2024-06-07 16:38:45.452690] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:18.724 [2024-06-07 16:38:45.461884] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:18.724 [2024-06-07 16:38:45.462511] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.724 [2024-06-07 16:38:45.462526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2840 with addr=10.0.0.2, port=4420
00:30:18.724 [2024-06-07 16:38:45.462533] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a2840 is same with the state(5) to be set
00:30:18.724 [2024-06-07 16:38:45.462752] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a2840 (9): Bad file descriptor
00:30:18.724 [2024-06-07 16:38:45.462970] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:18.724 [2024-06-07 16:38:45.462978] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:18.724 [2024-06-07 16:38:45.462985] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:18.724 [2024-06-07 16:38:45.466528] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:18.725 [2024-06-07 16:38:45.475721] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:18.725 [2024-06-07 16:38:45.476420] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.725 [2024-06-07 16:38:45.476457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2840 with addr=10.0.0.2, port=4420
00:30:18.725 [2024-06-07 16:38:45.476468] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a2840 is same with the state(5) to be set
00:30:18.725 [2024-06-07 16:38:45.476706] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a2840 (9): Bad file descriptor
00:30:18.725 [2024-06-07 16:38:45.476929] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:18.725 [2024-06-07 16:38:45.476942] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:18.725 [2024-06-07 16:38:45.476950] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:18.725 [2024-06-07 16:38:45.480509] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:18.725 [2024-06-07 16:38:45.489715] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:18.725 [2024-06-07 16:38:45.490441] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.725 [2024-06-07 16:38:45.490479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2840 with addr=10.0.0.2, port=4420 00:30:18.725 [2024-06-07 16:38:45.490492] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a2840 is same with the state(5) to be set 00:30:18.725 [2024-06-07 16:38:45.490731] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a2840 (9): Bad file descriptor 00:30:18.725 [2024-06-07 16:38:45.490955] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:18.725 [2024-06-07 16:38:45.490964] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:18.725 [2024-06-07 16:38:45.490971] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:18.725 [2024-06-07 16:38:45.494526] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:18.725 [2024-06-07 16:38:45.503515] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:18.725 [2024-06-07 16:38:45.504154] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.725 [2024-06-07 16:38:45.504172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2840 with addr=10.0.0.2, port=4420 00:30:18.725 [2024-06-07 16:38:45.504180] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a2840 is same with the state(5) to be set 00:30:18.725 [2024-06-07 16:38:45.504400] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a2840 (9): Bad file descriptor 00:30:18.725 [2024-06-07 16:38:45.504625] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:18.725 [2024-06-07 16:38:45.504633] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:18.725 [2024-06-07 16:38:45.504639] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:18.725 [2024-06-07 16:38:45.508182] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:18.725 [2024-06-07 16:38:45.517391] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:18.725 [2024-06-07 16:38:45.518058] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.725 [2024-06-07 16:38:45.518095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2840 with addr=10.0.0.2, port=4420 00:30:18.725 [2024-06-07 16:38:45.518106] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a2840 is same with the state(5) to be set 00:30:18.725 [2024-06-07 16:38:45.518345] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a2840 (9): Bad file descriptor 00:30:18.725 [2024-06-07 16:38:45.518578] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:18.725 [2024-06-07 16:38:45.518587] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:18.725 [2024-06-07 16:38:45.518595] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:18.725 [2024-06-07 16:38:45.522146] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:18.725 [2024-06-07 16:38:45.531371] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:18.725 [2024-06-07 16:38:45.532105] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.725 [2024-06-07 16:38:45.532142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2840 with addr=10.0.0.2, port=4420 00:30:18.725 [2024-06-07 16:38:45.532153] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a2840 is same with the state(5) to be set 00:30:18.725 [2024-06-07 16:38:45.532391] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a2840 (9): Bad file descriptor 00:30:18.725 [2024-06-07 16:38:45.532623] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:18.725 [2024-06-07 16:38:45.532632] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:18.725 [2024-06-07 16:38:45.532640] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:18.725 [2024-06-07 16:38:45.536189] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:18.725 [2024-06-07 16:38:45.545182] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:18.725 [2024-06-07 16:38:45.545908] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.725 [2024-06-07 16:38:45.545946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2840 with addr=10.0.0.2, port=4420 00:30:18.725 [2024-06-07 16:38:45.545956] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a2840 is same with the state(5) to be set 00:30:18.725 [2024-06-07 16:38:45.546195] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a2840 (9): Bad file descriptor 00:30:18.725 [2024-06-07 16:38:45.546427] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:18.725 [2024-06-07 16:38:45.546437] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:18.725 [2024-06-07 16:38:45.546444] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:18.725 [2024-06-07 16:38:45.549993] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:18.725 [2024-06-07 16:38:45.558983] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:18.725 [2024-06-07 16:38:45.559692] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.725 [2024-06-07 16:38:45.559729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2840 with addr=10.0.0.2, port=4420 00:30:18.725 [2024-06-07 16:38:45.559740] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a2840 is same with the state(5) to be set 00:30:18.725 [2024-06-07 16:38:45.559979] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a2840 (9): Bad file descriptor 00:30:18.725 [2024-06-07 16:38:45.560202] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:18.725 [2024-06-07 16:38:45.560210] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:18.725 [2024-06-07 16:38:45.560218] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:18.725 [2024-06-07 16:38:45.563776] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:18.725 [2024-06-07 16:38:45.572775] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:18.725 [2024-06-07 16:38:45.573389] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.725 [2024-06-07 16:38:45.573433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2840 with addr=10.0.0.2, port=4420 00:30:18.725 [2024-06-07 16:38:45.573445] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a2840 is same with the state(5) to be set 00:30:18.725 [2024-06-07 16:38:45.573689] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a2840 (9): Bad file descriptor 00:30:18.725 [2024-06-07 16:38:45.573912] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:18.725 [2024-06-07 16:38:45.573921] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:18.725 [2024-06-07 16:38:45.573928] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:18.988 [2024-06-07 16:38:45.577481] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:18.988 [2024-06-07 16:38:45.586680] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:18.988 [2024-06-07 16:38:45.587364] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.988 [2024-06-07 16:38:45.587409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2840 with addr=10.0.0.2, port=4420 00:30:18.988 [2024-06-07 16:38:45.587422] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a2840 is same with the state(5) to be set 00:30:18.988 [2024-06-07 16:38:45.587662] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a2840 (9): Bad file descriptor 00:30:18.988 [2024-06-07 16:38:45.587885] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:18.988 [2024-06-07 16:38:45.587894] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:18.988 [2024-06-07 16:38:45.587901] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:18.988 [2024-06-07 16:38:45.591454] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:18.988 [2024-06-07 16:38:45.600658] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:18.988 [2024-06-07 16:38:45.601388] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.988 [2024-06-07 16:38:45.601431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2840 with addr=10.0.0.2, port=4420 00:30:18.988 [2024-06-07 16:38:45.601442] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a2840 is same with the state(5) to be set 00:30:18.988 [2024-06-07 16:38:45.601681] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a2840 (9): Bad file descriptor 00:30:18.988 [2024-06-07 16:38:45.601904] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:18.988 [2024-06-07 16:38:45.601912] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:18.988 [2024-06-07 16:38:45.601920] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:18.988 [2024-06-07 16:38:45.605472] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:18.988 [2024-06-07 16:38:45.614462] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:18.988 [2024-06-07 16:38:45.615154] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.988 [2024-06-07 16:38:45.615191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2840 with addr=10.0.0.2, port=4420 00:30:18.988 [2024-06-07 16:38:45.615202] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a2840 is same with the state(5) to be set 00:30:18.988 [2024-06-07 16:38:45.615450] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a2840 (9): Bad file descriptor 00:30:18.988 [2024-06-07 16:38:45.615675] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:18.988 [2024-06-07 16:38:45.615683] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:18.988 [2024-06-07 16:38:45.615695] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:18.988 [2024-06-07 16:38:45.619250] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:18.988 [2024-06-07 16:38:45.628457] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:18.988 [2024-06-07 16:38:45.629109] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.988 [2024-06-07 16:38:45.629146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2840 with addr=10.0.0.2, port=4420 00:30:18.988 [2024-06-07 16:38:45.629157] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a2840 is same with the state(5) to be set 00:30:18.988 [2024-06-07 16:38:45.629396] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a2840 (9): Bad file descriptor 00:30:18.988 [2024-06-07 16:38:45.629638] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:18.988 [2024-06-07 16:38:45.629647] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:18.988 [2024-06-07 16:38:45.629655] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:18.988 [2024-06-07 16:38:45.633206] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:18.988 [2024-06-07 16:38:45.642412] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:18.988 [2024-06-07 16:38:45.643134] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.988 [2024-06-07 16:38:45.643171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2840 with addr=10.0.0.2, port=4420 00:30:18.988 [2024-06-07 16:38:45.643182] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a2840 is same with the state(5) to be set 00:30:18.988 [2024-06-07 16:38:45.643429] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a2840 (9): Bad file descriptor 00:30:18.988 [2024-06-07 16:38:45.643653] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:18.988 [2024-06-07 16:38:45.643662] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:18.988 [2024-06-07 16:38:45.643669] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:18.988 [2024-06-07 16:38:45.647218] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:18.988 [2024-06-07 16:38:45.656215] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:18.988 [2024-06-07 16:38:45.656879] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.988 [2024-06-07 16:38:45.656916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2840 with addr=10.0.0.2, port=4420 00:30:18.988 [2024-06-07 16:38:45.656926] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a2840 is same with the state(5) to be set 00:30:18.988 [2024-06-07 16:38:45.657165] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a2840 (9): Bad file descriptor 00:30:18.988 [2024-06-07 16:38:45.657389] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:18.989 [2024-06-07 16:38:45.657397] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:18.989 [2024-06-07 16:38:45.657414] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:18.989 [2024-06-07 16:38:45.660961] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:18.989 [2024-06-07 16:38:45.670159] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:18.989 [2024-06-07 16:38:45.670846] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.989 [2024-06-07 16:38:45.670887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2840 with addr=10.0.0.2, port=4420 00:30:18.989 [2024-06-07 16:38:45.670898] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a2840 is same with the state(5) to be set 00:30:18.989 [2024-06-07 16:38:45.671137] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a2840 (9): Bad file descriptor 00:30:18.989 [2024-06-07 16:38:45.671360] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:18.989 [2024-06-07 16:38:45.671368] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:18.989 [2024-06-07 16:38:45.671376] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:18.989 [2024-06-07 16:38:45.674934] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:18.989 [2024-06-07 16:38:45.684133] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:18.989 [2024-06-07 16:38:45.684674] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.989 [2024-06-07 16:38:45.684693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2840 with addr=10.0.0.2, port=4420 00:30:18.989 [2024-06-07 16:38:45.684701] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a2840 is same with the state(5) to be set 00:30:18.989 [2024-06-07 16:38:45.684920] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a2840 (9): Bad file descriptor 00:30:18.989 [2024-06-07 16:38:45.685139] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:18.989 [2024-06-07 16:38:45.685146] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:18.989 [2024-06-07 16:38:45.685153] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:18.989 [2024-06-07 16:38:45.688702] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:18.989 [2024-06-07 16:38:45.698105] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:18.989 [2024-06-07 16:38:45.698553] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.989 [2024-06-07 16:38:45.698569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2840 with addr=10.0.0.2, port=4420 00:30:18.989 [2024-06-07 16:38:45.698577] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a2840 is same with the state(5) to be set 00:30:18.989 [2024-06-07 16:38:45.698796] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a2840 (9): Bad file descriptor 00:30:18.989 [2024-06-07 16:38:45.699013] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:18.989 [2024-06-07 16:38:45.699021] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:18.989 [2024-06-07 16:38:45.699028] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:18.989 [2024-06-07 16:38:45.702576] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:18.989 [2024-06-07 16:38:45.711978] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:18.989 [2024-06-07 16:38:45.712530] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.989 [2024-06-07 16:38:45.712567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2840 with addr=10.0.0.2, port=4420 00:30:18.989 [2024-06-07 16:38:45.712579] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a2840 is same with the state(5) to be set 00:30:18.989 [2024-06-07 16:38:45.712821] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a2840 (9): Bad file descriptor 00:30:18.989 [2024-06-07 16:38:45.713048] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:18.989 [2024-06-07 16:38:45.713058] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:18.989 [2024-06-07 16:38:45.713065] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:18.989 [2024-06-07 16:38:45.716624] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:18.989 [2024-06-07 16:38:45.725835] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:18.989 [2024-06-07 16:38:45.726547] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.989 [2024-06-07 16:38:45.726584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2840 with addr=10.0.0.2, port=4420 00:30:18.989 [2024-06-07 16:38:45.726596] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a2840 is same with the state(5) to be set 00:30:18.989 [2024-06-07 16:38:45.726836] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a2840 (9): Bad file descriptor 00:30:18.989 [2024-06-07 16:38:45.727060] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:18.989 [2024-06-07 16:38:45.727068] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:18.989 [2024-06-07 16:38:45.727076] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:18.989 [2024-06-07 16:38:45.730734] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:18.989 [2024-06-07 16:38:45.739736] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:18.989 [2024-06-07 16:38:45.740444] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.989 [2024-06-07 16:38:45.740481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2840 with addr=10.0.0.2, port=4420 00:30:18.989 [2024-06-07 16:38:45.740493] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a2840 is same with the state(5) to be set 00:30:18.989 [2024-06-07 16:38:45.740736] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a2840 (9): Bad file descriptor 00:30:18.989 [2024-06-07 16:38:45.740959] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:18.989 [2024-06-07 16:38:45.740967] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:18.989 [2024-06-07 16:38:45.740974] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:18.989 [2024-06-07 16:38:45.744535] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:18.989 [2024-06-07 16:38:45.753527] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:18.989 [2024-06-07 16:38:45.754106] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.989 [2024-06-07 16:38:45.754142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2840 with addr=10.0.0.2, port=4420 00:30:18.989 [2024-06-07 16:38:45.754152] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a2840 is same with the state(5) to be set 00:30:18.989 [2024-06-07 16:38:45.754391] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a2840 (9): Bad file descriptor 00:30:18.989 [2024-06-07 16:38:45.754622] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:18.989 [2024-06-07 16:38:45.754632] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:18.989 [2024-06-07 16:38:45.754639] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:18.989 [2024-06-07 16:38:45.758190] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:18.989 [2024-06-07 16:38:45.767396] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:18.989 [2024-06-07 16:38:45.768013] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.989 [2024-06-07 16:38:45.768050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2840 with addr=10.0.0.2, port=4420 00:30:18.989 [2024-06-07 16:38:45.768060] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a2840 is same with the state(5) to be set 00:30:18.989 [2024-06-07 16:38:45.768299] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a2840 (9): Bad file descriptor 00:30:18.989 [2024-06-07 16:38:45.768532] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:18.989 [2024-06-07 16:38:45.768541] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:18.989 [2024-06-07 16:38:45.768549] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:18.989 [2024-06-07 16:38:45.772094] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:18.989 [2024-06-07 16:38:45.781290] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:18.989 [2024-06-07 16:38:45.781952] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.989 [2024-06-07 16:38:45.781989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2840 with addr=10.0.0.2, port=4420
00:30:18.989 [2024-06-07 16:38:45.782000] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a2840 is same with the state(5) to be set
00:30:18.989 [2024-06-07 16:38:45.782239] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a2840 (9): Bad file descriptor
00:30:18.989 [2024-06-07 16:38:45.782470] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:18.989 [2024-06-07 16:38:45.782480] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:18.989 [2024-06-07 16:38:45.782487] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:18.989 [2024-06-07 16:38:45.786038] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:18.989 [2024-06-07 16:38:45.795238] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:18.989 [2024-06-07 16:38:45.795933] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.989 [2024-06-07 16:38:45.795970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2840 with addr=10.0.0.2, port=4420
00:30:18.990 [2024-06-07 16:38:45.795981] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a2840 is same with the state(5) to be set
00:30:18.990 [2024-06-07 16:38:45.796220] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a2840 (9): Bad file descriptor
00:30:18.990 [2024-06-07 16:38:45.796452] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:18.990 [2024-06-07 16:38:45.796461] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:18.990 [2024-06-07 16:38:45.796468] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:18.990 [2024-06-07 16:38:45.800019] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:18.990 [2024-06-07 16:38:45.809217] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:18.990 [2024-06-07 16:38:45.809881] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.990 [2024-06-07 16:38:45.809919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2840 with addr=10.0.0.2, port=4420
00:30:18.990 [2024-06-07 16:38:45.809934] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a2840 is same with the state(5) to be set
00:30:18.990 [2024-06-07 16:38:45.810173] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a2840 (9): Bad file descriptor
00:30:18.990 [2024-06-07 16:38:45.810396] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:18.990 [2024-06-07 16:38:45.810413] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:18.990 [2024-06-07 16:38:45.810421] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:18.990 [2024-06-07 16:38:45.813971] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:18.990 [2024-06-07 16:38:45.823175] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:18.990 [2024-06-07 16:38:45.823862] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.990 [2024-06-07 16:38:45.823901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2840 with addr=10.0.0.2, port=4420
00:30:18.990 [2024-06-07 16:38:45.823911] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a2840 is same with the state(5) to be set
00:30:18.990 [2024-06-07 16:38:45.824150] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a2840 (9): Bad file descriptor
00:30:18.990 [2024-06-07 16:38:45.824374] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:18.990 [2024-06-07 16:38:45.824385] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:18.990 [2024-06-07 16:38:45.824392] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:18.990 [2024-06-07 16:38:45.827956] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:18.990 [2024-06-07 16:38:45.837168] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:18.990 [2024-06-07 16:38:45.837809] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.990 [2024-06-07 16:38:45.837829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2840 with addr=10.0.0.2, port=4420
00:30:18.990 [2024-06-07 16:38:45.837836] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a2840 is same with the state(5) to be set
00:30:18.990 [2024-06-07 16:38:45.838056] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a2840 (9): Bad file descriptor
00:30:18.990 [2024-06-07 16:38:45.838275] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:18.990 [2024-06-07 16:38:45.838285] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:18.990 [2024-06-07 16:38:45.838292] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:19.252 [2024-06-07 16:38:45.841843] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:19.252 [2024-06-07 16:38:45.851037] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:19.252 [2024-06-07 16:38:45.851624] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.252 [2024-06-07 16:38:45.851642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2840 with addr=10.0.0.2, port=4420
00:30:19.252 [2024-06-07 16:38:45.851650] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a2840 is same with the state(5) to be set
00:30:19.252 [2024-06-07 16:38:45.851869] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a2840 (9): Bad file descriptor
00:30:19.252 [2024-06-07 16:38:45.852088] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:19.252 [2024-06-07 16:38:45.852105] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:19.252 [2024-06-07 16:38:45.852112] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:19.252 [2024-06-07 16:38:45.855659] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:19.252 [2024-06-07 16:38:45.864854] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:19.252 [2024-06-07 16:38:45.865588] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.252 [2024-06-07 16:38:45.865626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2840 with addr=10.0.0.2, port=4420
00:30:19.252 [2024-06-07 16:38:45.865637] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a2840 is same with the state(5) to be set
00:30:19.252 [2024-06-07 16:38:45.865876] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a2840 (9): Bad file descriptor
00:30:19.252 [2024-06-07 16:38:45.866100] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:19.252 [2024-06-07 16:38:45.866110] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:19.252 [2024-06-07 16:38:45.866117] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:19.253 [2024-06-07 16:38:45.869675] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:19.253 [2024-06-07 16:38:45.878746] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:19.253 [2024-06-07 16:38:45.879437] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.253 [2024-06-07 16:38:45.879476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2840 with addr=10.0.0.2, port=4420
00:30:19.253 [2024-06-07 16:38:45.879488] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a2840 is same with the state(5) to be set
00:30:19.253 [2024-06-07 16:38:45.879729] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a2840 (9): Bad file descriptor
00:30:19.253 [2024-06-07 16:38:45.879953] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:19.253 [2024-06-07 16:38:45.879963] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:19.253 [2024-06-07 16:38:45.879970] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:19.253 [2024-06-07 16:38:45.883528] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:19.253 [2024-06-07 16:38:45.892725] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:19.253 [2024-06-07 16:38:45.893454] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.253 [2024-06-07 16:38:45.893492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2840 with addr=10.0.0.2, port=4420
00:30:19.253 [2024-06-07 16:38:45.893503] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a2840 is same with the state(5) to be set
00:30:19.253 [2024-06-07 16:38:45.893742] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a2840 (9): Bad file descriptor
00:30:19.253 [2024-06-07 16:38:45.893965] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:19.253 [2024-06-07 16:38:45.893975] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:19.253 [2024-06-07 16:38:45.893982] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:19.253 [2024-06-07 16:38:45.897539] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:19.253 [2024-06-07 16:38:45.906530] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:19.253 [2024-06-07 16:38:45.907136] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.253 [2024-06-07 16:38:45.907174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2840 with addr=10.0.0.2, port=4420
00:30:19.253 [2024-06-07 16:38:45.907185] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a2840 is same with the state(5) to be set
00:30:19.253 [2024-06-07 16:38:45.907433] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a2840 (9): Bad file descriptor
00:30:19.253 [2024-06-07 16:38:45.907658] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:19.253 [2024-06-07 16:38:45.907667] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:19.253 [2024-06-07 16:38:45.907675] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:19.253 [2024-06-07 16:38:45.911223] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:19.253 [2024-06-07 16:38:45.920428] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:19.253 [2024-06-07 16:38:45.921112] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.253 [2024-06-07 16:38:45.921151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2840 with addr=10.0.0.2, port=4420
00:30:19.253 [2024-06-07 16:38:45.921162] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a2840 is same with the state(5) to be set
00:30:19.253 [2024-06-07 16:38:45.921400] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a2840 (9): Bad file descriptor
00:30:19.253 [2024-06-07 16:38:45.921635] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:19.253 [2024-06-07 16:38:45.921644] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:19.253 [2024-06-07 16:38:45.921652] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:19.253 [2024-06-07 16:38:45.925201] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:19.253 [2024-06-07 16:38:45.934423] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:19.253 [2024-06-07 16:38:45.935155] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.253 [2024-06-07 16:38:45.935193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2840 with addr=10.0.0.2, port=4420
00:30:19.253 [2024-06-07 16:38:45.935204] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a2840 is same with the state(5) to be set
00:30:19.253 [2024-06-07 16:38:45.935452] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a2840 (9): Bad file descriptor
00:30:19.253 [2024-06-07 16:38:45.935677] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:19.253 [2024-06-07 16:38:45.935686] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:19.253 [2024-06-07 16:38:45.935694] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:19.253 [2024-06-07 16:38:45.939242] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:19.253 [2024-06-07 16:38:45.948245] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:19.253 [2024-06-07 16:38:45.948928] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.253 [2024-06-07 16:38:45.948966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2840 with addr=10.0.0.2, port=4420
00:30:19.253 [2024-06-07 16:38:45.948977] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a2840 is same with the state(5) to be set
00:30:19.253 [2024-06-07 16:38:45.949220] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a2840 (9): Bad file descriptor
00:30:19.253 [2024-06-07 16:38:45.949453] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:19.253 [2024-06-07 16:38:45.949463] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:19.253 [2024-06-07 16:38:45.949471] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:19.253 [2024-06-07 16:38:45.953023] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:19.253 [2024-06-07 16:38:45.962221] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:19.253 [2024-06-07 16:38:45.962917] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.253 [2024-06-07 16:38:45.962955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2840 with addr=10.0.0.2, port=4420
00:30:19.253 [2024-06-07 16:38:45.962966] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a2840 is same with the state(5) to be set
00:30:19.253 [2024-06-07 16:38:45.963205] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a2840 (9): Bad file descriptor
00:30:19.253 [2024-06-07 16:38:45.963437] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:19.253 [2024-06-07 16:38:45.963448] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:19.253 [2024-06-07 16:38:45.963456] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:19.253 [2024-06-07 16:38:45.967005] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:19.253 [2024-06-07 16:38:45.976217] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:19.253 [2024-06-07 16:38:45.976914] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.253 [2024-06-07 16:38:45.976953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2840 with addr=10.0.0.2, port=4420
00:30:19.253 [2024-06-07 16:38:45.976964] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a2840 is same with the state(5) to be set
00:30:19.253 [2024-06-07 16:38:45.977203] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a2840 (9): Bad file descriptor
00:30:19.253 [2024-06-07 16:38:45.977435] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:19.253 [2024-06-07 16:38:45.977446] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:19.253 [2024-06-07 16:38:45.977453] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:19.253 [2024-06-07 16:38:45.981002] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:19.253 [2024-06-07 16:38:45.990200] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:19.253 [2024-06-07 16:38:45.990854] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.253 [2024-06-07 16:38:45.990873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2840 with addr=10.0.0.2, port=4420
00:30:19.253 [2024-06-07 16:38:45.990882] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a2840 is same with the state(5) to be set
00:30:19.253 [2024-06-07 16:38:45.991101] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a2840 (9): Bad file descriptor
00:30:19.253 [2024-06-07 16:38:45.991321] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:19.253 [2024-06-07 16:38:45.991329] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:19.253 [2024-06-07 16:38:45.991340] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:19.253 [2024-06-07 16:38:45.994891] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:19.253 [2024-06-07 16:38:46.004089] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:19.253 [2024-06-07 16:38:46.004674] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.253 [2024-06-07 16:38:46.004692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2840 with addr=10.0.0.2, port=4420
00:30:19.253 [2024-06-07 16:38:46.004700] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a2840 is same with the state(5) to be set
00:30:19.253 [2024-06-07 16:38:46.004920] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a2840 (9): Bad file descriptor
00:30:19.253 [2024-06-07 16:38:46.005140] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:19.253 [2024-06-07 16:38:46.005148] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:19.254 [2024-06-07 16:38:46.005155] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:19.254 [2024-06-07 16:38:46.008701] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:19.254 [2024-06-07 16:38:46.017896] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:19.254 [2024-06-07 16:38:46.018469] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.254 [2024-06-07 16:38:46.018485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2840 with addr=10.0.0.2, port=4420
00:30:19.254 [2024-06-07 16:38:46.018493] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a2840 is same with the state(5) to be set
00:30:19.254 [2024-06-07 16:38:46.018712] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a2840 (9): Bad file descriptor
00:30:19.254 [2024-06-07 16:38:46.018931] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:19.254 [2024-06-07 16:38:46.018939] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:19.254 [2024-06-07 16:38:46.018947] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:19.254 [2024-06-07 16:38:46.022492] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:19.254 [2024-06-07 16:38:46.031695] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:19.254 [2024-06-07 16:38:46.032293] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.254 [2024-06-07 16:38:46.032331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2840 with addr=10.0.0.2, port=4420
00:30:19.254 [2024-06-07 16:38:46.032342] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a2840 is same with the state(5) to be set
00:30:19.254 [2024-06-07 16:38:46.032594] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a2840 (9): Bad file descriptor
00:30:19.254 [2024-06-07 16:38:46.032819] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:19.254 [2024-06-07 16:38:46.032828] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:19.254 [2024-06-07 16:38:46.032836] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:19.254 [2024-06-07 16:38:46.036383] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:19.254 [2024-06-07 16:38:46.045594] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:19.254 [2024-06-07 16:38:46.046310] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.254 [2024-06-07 16:38:46.046353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2840 with addr=10.0.0.2, port=4420
00:30:19.254 [2024-06-07 16:38:46.046366] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a2840 is same with the state(5) to be set
00:30:19.254 [2024-06-07 16:38:46.046614] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a2840 (9): Bad file descriptor
00:30:19.254 [2024-06-07 16:38:46.046839] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:19.254 [2024-06-07 16:38:46.046849] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:19.254 [2024-06-07 16:38:46.046856] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:19.254 [2024-06-07 16:38:46.050408] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:19.254 [2024-06-07 16:38:46.059407] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:19.254 [2024-06-07 16:38:46.060045] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.254 [2024-06-07 16:38:46.060065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2840 with addr=10.0.0.2, port=4420
00:30:19.254 [2024-06-07 16:38:46.060073] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a2840 is same with the state(5) to be set
00:30:19.254 [2024-06-07 16:38:46.060294] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a2840 (9): Bad file descriptor
00:30:19.254 [2024-06-07 16:38:46.060520] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:19.254 [2024-06-07 16:38:46.060531] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:19.254 [2024-06-07 16:38:46.060538] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:19.254 [2024-06-07 16:38:46.064083] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:19.254 [2024-06-07 16:38:46.073308] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:19.254 [2024-06-07 16:38:46.074012] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.254 [2024-06-07 16:38:46.074051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2840 with addr=10.0.0.2, port=4420
00:30:19.254 [2024-06-07 16:38:46.074063] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a2840 is same with the state(5) to be set
00:30:19.254 [2024-06-07 16:38:46.074304] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a2840 (9): Bad file descriptor
00:30:19.254 [2024-06-07 16:38:46.074535] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:19.254 [2024-06-07 16:38:46.074546] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:19.254 [2024-06-07 16:38:46.074554] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:19.254 [2024-06-07 16:38:46.078101] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:19.254 [2024-06-07 16:38:46.087115] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:19.254 [2024-06-07 16:38:46.087709] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.254 [2024-06-07 16:38:46.087729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2840 with addr=10.0.0.2, port=4420
00:30:19.254 [2024-06-07 16:38:46.087737] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a2840 is same with the state(5) to be set
00:30:19.254 [2024-06-07 16:38:46.087956] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a2840 (9): Bad file descriptor
00:30:19.254 [2024-06-07 16:38:46.088182] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:19.254 [2024-06-07 16:38:46.088191] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:19.254 [2024-06-07 16:38:46.088198] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:19.254 [2024-06-07 16:38:46.091748] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:19.254 [2024-06-07 16:38:46.100944] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:19.254 [2024-06-07 16:38:46.101526] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.254 [2024-06-07 16:38:46.101543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2840 with addr=10.0.0.2, port=4420
00:30:19.254 [2024-06-07 16:38:46.101551] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a2840 is same with the state(5) to be set
00:30:19.254 [2024-06-07 16:38:46.101770] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a2840 (9): Bad file descriptor
00:30:19.254 [2024-06-07 16:38:46.101989] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:19.254 [2024-06-07 16:38:46.101997] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:19.254 [2024-06-07 16:38:46.102004] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:19.516 [2024-06-07 16:38:46.105551] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:19.516 [2024-06-07 16:38:46.114745] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:19.516 [2024-06-07 16:38:46.115322] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.516 [2024-06-07 16:38:46.115338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2840 with addr=10.0.0.2, port=4420
00:30:19.516 [2024-06-07 16:38:46.115345] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a2840 is same with the state(5) to be set
00:30:19.516 [2024-06-07 16:38:46.115568] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a2840 (9): Bad file descriptor
00:30:19.516 [2024-06-07 16:38:46.115788] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:19.516 [2024-06-07 16:38:46.115797] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:19.516 [2024-06-07 16:38:46.115804] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:19.516 [2024-06-07 16:38:46.119344] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:19.516 [2024-06-07 16:38:46.128543] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:19.516 [2024-06-07 16:38:46.129121] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.516 [2024-06-07 16:38:46.129137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2840 with addr=10.0.0.2, port=4420
00:30:19.517 [2024-06-07 16:38:46.129144] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a2840 is same with the state(5) to be set
00:30:19.517 [2024-06-07 16:38:46.129363] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a2840 (9): Bad file descriptor
00:30:19.517 [2024-06-07 16:38:46.129595] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:19.517 [2024-06-07 16:38:46.129605] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:19.517 [2024-06-07 16:38:46.129612] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:19.517 [2024-06-07 16:38:46.133159] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:19.517 [2024-06-07 16:38:46.142361] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:19.517 [2024-06-07 16:38:46.142982] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.517 [2024-06-07 16:38:46.142998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2840 with addr=10.0.0.2, port=4420
00:30:19.517 [2024-06-07 16:38:46.143006] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a2840 is same with the state(5) to be set
00:30:19.517 [2024-06-07 16:38:46.143225] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a2840 (9): Bad file descriptor
00:30:19.517 [2024-06-07 16:38:46.143450] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:19.517 [2024-06-07 16:38:46.143460] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:19.517 [2024-06-07 16:38:46.143468] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:19.517 [2024-06-07 16:38:46.147009] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:19.517 [2024-06-07 16:38:46.156205] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:19.517 [2024-06-07 16:38:46.156811] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.517 [2024-06-07 16:38:46.156826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2840 with addr=10.0.0.2, port=4420
00:30:19.517 [2024-06-07 16:38:46.156834] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a2840 is same with the state(5) to be set
00:30:19.517 [2024-06-07 16:38:46.157053] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a2840 (9): Bad file descriptor
00:30:19.517 [2024-06-07 16:38:46.157272] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:19.517 [2024-06-07 16:38:46.157281] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:19.517 [2024-06-07 16:38:46.157288] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:19.517 [2024-06-07 16:38:46.160833] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:19.517 [2024-06-07 16:38:46.170034] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:19.517 [2024-06-07 16:38:46.170659] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.517 [2024-06-07 16:38:46.170676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2840 with addr=10.0.0.2, port=4420
00:30:19.517 [2024-06-07 16:38:46.170683] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a2840 is same with the state(5) to be set
00:30:19.517 [2024-06-07 16:38:46.170902] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a2840 (9): Bad file descriptor
00:30:19.517 [2024-06-07 16:38:46.171121] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:19.517 [2024-06-07 16:38:46.171129] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:19.517 [2024-06-07 16:38:46.171136] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:19.517 [2024-06-07 16:38:46.174683] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:19.517 [2024-06-07 16:38:46.183876] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:19.517 [2024-06-07 16:38:46.184510] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.517 [2024-06-07 16:38:46.184548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2840 with addr=10.0.0.2, port=4420 00:30:19.517 [2024-06-07 16:38:46.184565] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a2840 is same with the state(5) to be set 00:30:19.517 [2024-06-07 16:38:46.184808] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a2840 (9): Bad file descriptor 00:30:19.517 [2024-06-07 16:38:46.185033] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:19.517 [2024-06-07 16:38:46.185042] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:19.517 [2024-06-07 16:38:46.185050] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:19.517 [2024-06-07 16:38:46.188605] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:19.517 [2024-06-07 16:38:46.197805] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:19.517 [2024-06-07 16:38:46.198443] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.517 [2024-06-07 16:38:46.198463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2840 with addr=10.0.0.2, port=4420 00:30:19.517 [2024-06-07 16:38:46.198471] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a2840 is same with the state(5) to be set 00:30:19.517 [2024-06-07 16:38:46.198691] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a2840 (9): Bad file descriptor 00:30:19.517 [2024-06-07 16:38:46.198911] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:19.517 [2024-06-07 16:38:46.198921] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:19.517 [2024-06-07 16:38:46.198928] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:19.517 [2024-06-07 16:38:46.202481] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:19.517 [2024-06-07 16:38:46.211683] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:19.517 [2024-06-07 16:38:46.212304] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.517 [2024-06-07 16:38:46.212321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2840 with addr=10.0.0.2, port=4420 00:30:19.517 [2024-06-07 16:38:46.212329] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a2840 is same with the state(5) to be set 00:30:19.517 [2024-06-07 16:38:46.212555] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a2840 (9): Bad file descriptor 00:30:19.517 [2024-06-07 16:38:46.212775] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:19.517 [2024-06-07 16:38:46.212783] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:19.517 [2024-06-07 16:38:46.212791] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:19.517 [2024-06-07 16:38:46.216335] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:19.517 [2024-06-07 16:38:46.225539] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:19.517 [2024-06-07 16:38:46.226157] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.517 [2024-06-07 16:38:46.226174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2840 with addr=10.0.0.2, port=4420 00:30:19.517 [2024-06-07 16:38:46.226181] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a2840 is same with the state(5) to be set 00:30:19.517 [2024-06-07 16:38:46.226400] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a2840 (9): Bad file descriptor 00:30:19.517 [2024-06-07 16:38:46.226627] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:19.517 [2024-06-07 16:38:46.226640] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:19.517 [2024-06-07 16:38:46.226647] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:19.517 [2024-06-07 16:38:46.230205] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:19.517 [2024-06-07 16:38:46.239424] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:19.517 [2024-06-07 16:38:46.239861] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.517 [2024-06-07 16:38:46.239882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2840 with addr=10.0.0.2, port=4420 00:30:19.517 [2024-06-07 16:38:46.239890] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a2840 is same with the state(5) to be set 00:30:19.517 [2024-06-07 16:38:46.240110] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a2840 (9): Bad file descriptor 00:30:19.517 [2024-06-07 16:38:46.240331] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:19.517 [2024-06-07 16:38:46.240339] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:19.517 [2024-06-07 16:38:46.240346] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:19.517 [2024-06-07 16:38:46.243913] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:19.517 [2024-06-07 16:38:46.253334] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:19.517 [2024-06-07 16:38:46.253916] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.517 [2024-06-07 16:38:46.253932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2840 with addr=10.0.0.2, port=4420 00:30:19.517 [2024-06-07 16:38:46.253939] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a2840 is same with the state(5) to be set 00:30:19.517 [2024-06-07 16:38:46.254159] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a2840 (9): Bad file descriptor 00:30:19.517 [2024-06-07 16:38:46.254378] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:19.517 [2024-06-07 16:38:46.254388] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:19.517 [2024-06-07 16:38:46.254394] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:19.517 [2024-06-07 16:38:46.257949] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:19.517 [2024-06-07 16:38:46.267155] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:19.517 [2024-06-07 16:38:46.267859] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.517 [2024-06-07 16:38:46.267897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2840 with addr=10.0.0.2, port=4420 00:30:19.518 [2024-06-07 16:38:46.267908] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a2840 is same with the state(5) to be set 00:30:19.518 [2024-06-07 16:38:46.268147] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a2840 (9): Bad file descriptor 00:30:19.518 [2024-06-07 16:38:46.268371] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:19.518 [2024-06-07 16:38:46.268381] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:19.518 [2024-06-07 16:38:46.268390] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:19.518 [2024-06-07 16:38:46.271949] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:19.518 [2024-06-07 16:38:46.280957] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:19.518 [2024-06-07 16:38:46.281686] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.518 [2024-06-07 16:38:46.281726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2840 with addr=10.0.0.2, port=4420 00:30:19.518 [2024-06-07 16:38:46.281737] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a2840 is same with the state(5) to be set 00:30:19.518 [2024-06-07 16:38:46.281976] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a2840 (9): Bad file descriptor 00:30:19.518 [2024-06-07 16:38:46.282200] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:19.518 [2024-06-07 16:38:46.282209] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:19.518 [2024-06-07 16:38:46.282216] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:19.518 [2024-06-07 16:38:46.285775] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:19.518 [2024-06-07 16:38:46.294783] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:19.518 [2024-06-07 16:38:46.295505] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.518 [2024-06-07 16:38:46.295545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2840 with addr=10.0.0.2, port=4420 00:30:19.518 [2024-06-07 16:38:46.295557] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a2840 is same with the state(5) to be set 00:30:19.518 [2024-06-07 16:38:46.295799] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a2840 (9): Bad file descriptor 00:30:19.518 [2024-06-07 16:38:46.296024] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:19.518 [2024-06-07 16:38:46.296033] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:19.518 [2024-06-07 16:38:46.296041] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:19.518 [2024-06-07 16:38:46.299602] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:19.518 [2024-06-07 16:38:46.308602] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:19.518 [2024-06-07 16:38:46.309316] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.518 [2024-06-07 16:38:46.309354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2840 with addr=10.0.0.2, port=4420 00:30:19.518 [2024-06-07 16:38:46.309365] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a2840 is same with the state(5) to be set 00:30:19.518 [2024-06-07 16:38:46.309611] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a2840 (9): Bad file descriptor 00:30:19.518 [2024-06-07 16:38:46.309836] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:19.518 [2024-06-07 16:38:46.309846] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:19.518 [2024-06-07 16:38:46.309854] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:19.518 [2024-06-07 16:38:46.313409] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:19.518 [2024-06-07 16:38:46.322400] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:19.518 [2024-06-07 16:38:46.323106] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.518 [2024-06-07 16:38:46.323144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2840 with addr=10.0.0.2, port=4420 00:30:19.518 [2024-06-07 16:38:46.323155] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a2840 is same with the state(5) to be set 00:30:19.518 [2024-06-07 16:38:46.323398] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a2840 (9): Bad file descriptor 00:30:19.518 [2024-06-07 16:38:46.323632] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:19.518 [2024-06-07 16:38:46.323642] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:19.518 [2024-06-07 16:38:46.323650] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:19.518 [2024-06-07 16:38:46.327201] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:19.518 [2024-06-07 16:38:46.336207] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:19.518 [2024-06-07 16:38:46.336824] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.518 [2024-06-07 16:38:46.336862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2840 with addr=10.0.0.2, port=4420 00:30:19.518 [2024-06-07 16:38:46.336873] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a2840 is same with the state(5) to be set 00:30:19.518 [2024-06-07 16:38:46.337112] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a2840 (9): Bad file descriptor 00:30:19.518 [2024-06-07 16:38:46.337336] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:19.518 [2024-06-07 16:38:46.337345] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:19.518 [2024-06-07 16:38:46.337353] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:19.518 [2024-06-07 16:38:46.340911] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:19.518 [2024-06-07 16:38:46.350117] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:19.518 [2024-06-07 16:38:46.350864] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.518 [2024-06-07 16:38:46.350903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2840 with addr=10.0.0.2, port=4420 00:30:19.518 [2024-06-07 16:38:46.350914] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a2840 is same with the state(5) to be set 00:30:19.518 [2024-06-07 16:38:46.351153] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a2840 (9): Bad file descriptor 00:30:19.518 [2024-06-07 16:38:46.351377] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:19.518 [2024-06-07 16:38:46.351388] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:19.518 [2024-06-07 16:38:46.351395] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:19.518 [2024-06-07 16:38:46.354954] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:19.518 [2024-06-07 16:38:46.363957] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:19.518 [2024-06-07 16:38:46.364709] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.518 [2024-06-07 16:38:46.364748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2840 with addr=10.0.0.2, port=4420 00:30:19.518 [2024-06-07 16:38:46.364760] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a2840 is same with the state(5) to be set 00:30:19.518 [2024-06-07 16:38:46.364999] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a2840 (9): Bad file descriptor 00:30:19.518 [2024-06-07 16:38:46.365223] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:19.518 [2024-06-07 16:38:46.365233] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:19.518 [2024-06-07 16:38:46.365245] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:19.781 [2024-06-07 16:38:46.368805] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:19.781 [2024-06-07 16:38:46.377811] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:19.781 [2024-06-07 16:38:46.378446] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.781 [2024-06-07 16:38:46.378465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2840 with addr=10.0.0.2, port=4420 00:30:19.781 [2024-06-07 16:38:46.378473] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a2840 is same with the state(5) to be set 00:30:19.781 [2024-06-07 16:38:46.378693] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a2840 (9): Bad file descriptor 00:30:19.781 [2024-06-07 16:38:46.378912] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:19.781 [2024-06-07 16:38:46.378922] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:19.781 [2024-06-07 16:38:46.378930] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:19.781 [2024-06-07 16:38:46.382477] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:19.781 [2024-06-07 16:38:46.391676] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:19.781 [2024-06-07 16:38:46.392293] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.781 [2024-06-07 16:38:46.392310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2840 with addr=10.0.0.2, port=4420 00:30:19.781 [2024-06-07 16:38:46.392318] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a2840 is same with the state(5) to be set 00:30:19.781 [2024-06-07 16:38:46.392543] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a2840 (9): Bad file descriptor 00:30:19.781 [2024-06-07 16:38:46.392763] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:19.781 [2024-06-07 16:38:46.392773] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:19.781 [2024-06-07 16:38:46.392780] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:19.781 [2024-06-07 16:38:46.396318] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:19.781 [2024-06-07 16:38:46.405516] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:19.781 [2024-06-07 16:38:46.406136] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.781 [2024-06-07 16:38:46.406152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2840 with addr=10.0.0.2, port=4420 00:30:19.781 [2024-06-07 16:38:46.406160] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a2840 is same with the state(5) to be set 00:30:19.781 [2024-06-07 16:38:46.406378] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a2840 (9): Bad file descriptor 00:30:19.781 [2024-06-07 16:38:46.406603] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:19.781 [2024-06-07 16:38:46.406613] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:19.781 [2024-06-07 16:38:46.406620] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:19.781 [2024-06-07 16:38:46.410160] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:19.781 [2024-06-07 16:38:46.419355] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:19.781 [2024-06-07 16:38:46.420028] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.781 [2024-06-07 16:38:46.420071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2840 with addr=10.0.0.2, port=4420 00:30:19.781 [2024-06-07 16:38:46.420083] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a2840 is same with the state(5) to be set 00:30:19.781 [2024-06-07 16:38:46.420322] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a2840 (9): Bad file descriptor 00:30:19.781 [2024-06-07 16:38:46.420555] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:19.781 [2024-06-07 16:38:46.420565] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:19.781 [2024-06-07 16:38:46.420573] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:19.781 [2024-06-07 16:38:46.424120] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:19.781 [2024-06-07 16:38:46.433340] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:19.781 [2024-06-07 16:38:46.433975] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.781 [2024-06-07 16:38:46.433995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2840 with addr=10.0.0.2, port=4420 00:30:19.781 [2024-06-07 16:38:46.434003] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a2840 is same with the state(5) to be set 00:30:19.781 [2024-06-07 16:38:46.434223] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a2840 (9): Bad file descriptor 00:30:19.781 [2024-06-07 16:38:46.434447] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:19.781 [2024-06-07 16:38:46.434457] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:19.781 [2024-06-07 16:38:46.434464] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:19.781 [2024-06-07 16:38:46.438008] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:19.781 [2024-06-07 16:38:46.447207] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:19.781 [2024-06-07 16:38:46.447780] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.781 [2024-06-07 16:38:46.447818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2840 with addr=10.0.0.2, port=4420 00:30:19.781 [2024-06-07 16:38:46.447829] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a2840 is same with the state(5) to be set 00:30:19.781 [2024-06-07 16:38:46.448068] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a2840 (9): Bad file descriptor 00:30:19.781 [2024-06-07 16:38:46.448291] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:19.781 [2024-06-07 16:38:46.448302] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:19.781 [2024-06-07 16:38:46.448309] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:19.781 [2024-06-07 16:38:46.451870] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:19.781 [2024-06-07 16:38:46.461071] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:19.781 [2024-06-07 16:38:46.461691] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.781 [2024-06-07 16:38:46.461712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2840 with addr=10.0.0.2, port=4420 00:30:19.782 [2024-06-07 16:38:46.461720] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a2840 is same with the state(5) to be set 00:30:19.782 [2024-06-07 16:38:46.461940] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a2840 (9): Bad file descriptor 00:30:19.782 [2024-06-07 16:38:46.462165] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:19.782 [2024-06-07 16:38:46.462174] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:19.782 [2024-06-07 16:38:46.462181] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:19.782 [2024-06-07 16:38:46.465727] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:19.782 [2024-06-07 16:38:46.474922] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:19.782 [2024-06-07 16:38:46.475547] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.782 [2024-06-07 16:38:46.475586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2840 with addr=10.0.0.2, port=4420 00:30:19.782 [2024-06-07 16:38:46.475598] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a2840 is same with the state(5) to be set 00:30:19.782 [2024-06-07 16:38:46.475838] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a2840 (9): Bad file descriptor 00:30:19.782 [2024-06-07 16:38:46.476062] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:19.782 [2024-06-07 16:38:46.476072] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:19.782 [2024-06-07 16:38:46.476079] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:19.782 [2024-06-07 16:38:46.479634] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:19.782 [2024-06-07 16:38:46.488842] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:19.782 [2024-06-07 16:38:46.489450] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.782 [2024-06-07 16:38:46.489470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2840 with addr=10.0.0.2, port=4420 00:30:19.782 [2024-06-07 16:38:46.489478] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a2840 is same with the state(5) to be set 00:30:19.782 [2024-06-07 16:38:46.489698] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a2840 (9): Bad file descriptor 00:30:19.782 [2024-06-07 16:38:46.489917] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:19.782 [2024-06-07 16:38:46.489927] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:19.782 [2024-06-07 16:38:46.489934] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:19.782 [2024-06-07 16:38:46.493483] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:19.782 [2024-06-07 16:38:46.502680] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:19.782 [2024-06-07 16:38:46.503398] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.782 [2024-06-07 16:38:46.503444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2840 with addr=10.0.0.2, port=4420
00:30:19.782 [2024-06-07 16:38:46.503455] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a2840 is same with the state(5) to be set
00:30:19.782 [2024-06-07 16:38:46.503694] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a2840 (9): Bad file descriptor
00:30:19.782 [2024-06-07 16:38:46.503918] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:19.782 [2024-06-07 16:38:46.503928] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:19.782 [2024-06-07 16:38:46.503936] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:19.782 [2024-06-07 16:38:46.507495] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:19.782 [2024-06-07 16:38:46.516715] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:19.782 [2024-06-07 16:38:46.517311] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.782 [2024-06-07 16:38:46.517330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2840 with addr=10.0.0.2, port=4420
00:30:19.782 [2024-06-07 16:38:46.517338] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a2840 is same with the state(5) to be set
00:30:19.782 [2024-06-07 16:38:46.517564] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a2840 (9): Bad file descriptor
00:30:19.782 [2024-06-07 16:38:46.517784] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:19.782 [2024-06-07 16:38:46.517793] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:19.782 [2024-06-07 16:38:46.517801] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:19.782 [2024-06-07 16:38:46.521344] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:19.782 [2024-06-07 16:38:46.530551] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:19.782 [2024-06-07 16:38:46.531269] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.782 [2024-06-07 16:38:46.531308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2840 with addr=10.0.0.2, port=4420
00:30:19.782 [2024-06-07 16:38:46.531320] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a2840 is same with the state(5) to be set
00:30:19.782 [2024-06-07 16:38:46.531567] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a2840 (9): Bad file descriptor
00:30:19.782 [2024-06-07 16:38:46.531793] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:19.782 [2024-06-07 16:38:46.531802] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:19.782 [2024-06-07 16:38:46.531810] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:19.782 [2024-06-07 16:38:46.535361] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:19.782 [2024-06-07 16:38:46.544360] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:19.782 [2024-06-07 16:38:46.545092] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.782 [2024-06-07 16:38:46.545131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2840 with addr=10.0.0.2, port=4420
00:30:19.782 [2024-06-07 16:38:46.545142] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a2840 is same with the state(5) to be set
00:30:19.782 [2024-06-07 16:38:46.545382] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a2840 (9): Bad file descriptor
00:30:19.782 [2024-06-07 16:38:46.545614] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:19.782 [2024-06-07 16:38:46.545624] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:19.782 [2024-06-07 16:38:46.545632] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:19.782 [2024-06-07 16:38:46.549179] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:19.782 [2024-06-07 16:38:46.558179] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:19.782 [2024-06-07 16:38:46.558912] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.782 [2024-06-07 16:38:46.558952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2840 with addr=10.0.0.2, port=4420
00:30:19.782 [2024-06-07 16:38:46.558971] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a2840 is same with the state(5) to be set
00:30:19.782 [2024-06-07 16:38:46.559210] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a2840 (9): Bad file descriptor
00:30:19.782 [2024-06-07 16:38:46.559440] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:19.782 [2024-06-07 16:38:46.559451] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:19.782 [2024-06-07 16:38:46.559458] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:19.782 [2024-06-07 16:38:46.563008] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:19.782 [2024-06-07 16:38:46.572004] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:19.782 [2024-06-07 16:38:46.572612] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.782 [2024-06-07 16:38:46.572632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2840 with addr=10.0.0.2, port=4420
00:30:19.782 [2024-06-07 16:38:46.572640] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a2840 is same with the state(5) to be set
00:30:19.782 [2024-06-07 16:38:46.572860] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a2840 (9): Bad file descriptor
00:30:19.782 [2024-06-07 16:38:46.573080] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:19.782 [2024-06-07 16:38:46.573089] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:19.782 [2024-06-07 16:38:46.573096] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:19.782 [2024-06-07 16:38:46.576646] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:19.782 [2024-06-07 16:38:46.585844] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:19.782 [2024-06-07 16:38:46.586634] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.782 [2024-06-07 16:38:46.586672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2840 with addr=10.0.0.2, port=4420
00:30:19.782 [2024-06-07 16:38:46.586685] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a2840 is same with the state(5) to be set
00:30:19.782 [2024-06-07 16:38:46.586925] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a2840 (9): Bad file descriptor
00:30:19.782 [2024-06-07 16:38:46.587149] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:19.782 [2024-06-07 16:38:46.587159] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:19.782 [2024-06-07 16:38:46.587166] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:19.782 [2024-06-07 16:38:46.590721] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:19.782 [2024-06-07 16:38:46.599713] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:19.782 [2024-06-07 16:38:46.600349] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.782 [2024-06-07 16:38:46.600369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2840 with addr=10.0.0.2, port=4420
00:30:19.782 [2024-06-07 16:38:46.600377] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a2840 is same with the state(5) to be set
00:30:19.782 [2024-06-07 16:38:46.600602] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a2840 (9): Bad file descriptor
00:30:19.783 [2024-06-07 16:38:46.600823] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:19.783 [2024-06-07 16:38:46.600836] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:19.783 [2024-06-07 16:38:46.600843] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:19.783 [2024-06-07 16:38:46.604386] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:19.783 [2024-06-07 16:38:46.613588] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:19.783 [2024-06-07 16:38:46.614130] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.783 [2024-06-07 16:38:46.614146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2840 with addr=10.0.0.2, port=4420
00:30:19.783 [2024-06-07 16:38:46.614154] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a2840 is same with the state(5) to be set
00:30:19.783 [2024-06-07 16:38:46.614373] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a2840 (9): Bad file descriptor
00:30:19.783 [2024-06-07 16:38:46.614600] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:19.783 [2024-06-07 16:38:46.614609] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:19.783 [2024-06-07 16:38:46.614616] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:19.783 [2024-06-07 16:38:46.618155] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:19.783 [2024-06-07 16:38:46.627559] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:19.783 [2024-06-07 16:38:46.628099] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.783 [2024-06-07 16:38:46.628116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2840 with addr=10.0.0.2, port=4420
00:30:19.783 [2024-06-07 16:38:46.628123] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a2840 is same with the state(5) to be set
00:30:19.783 [2024-06-07 16:38:46.628341] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a2840 (9): Bad file descriptor
00:30:19.783 [2024-06-07 16:38:46.628567] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:19.783 [2024-06-07 16:38:46.628577] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:19.783 [2024-06-07 16:38:46.628584] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:19.783 [2024-06-07 16:38:46.632136] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:20.045 [2024-06-07 16:38:46.641545] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:20.045 [2024-06-07 16:38:46.642083] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.045 [2024-06-07 16:38:46.642098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2840 with addr=10.0.0.2, port=4420
00:30:20.045 [2024-06-07 16:38:46.642106] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a2840 is same with the state(5) to be set
00:30:20.045 [2024-06-07 16:38:46.642325] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a2840 (9): Bad file descriptor
00:30:20.045 [2024-06-07 16:38:46.642549] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:20.045 [2024-06-07 16:38:46.642559] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:20.045 [2024-06-07 16:38:46.642566] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:20.045 [2024-06-07 16:38:46.646108] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:20.045 [2024-06-07 16:38:46.655510] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:20.045 [2024-06-07 16:38:46.656119] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.045 [2024-06-07 16:38:46.656135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2840 with addr=10.0.0.2, port=4420
00:30:20.045 [2024-06-07 16:38:46.656143] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a2840 is same with the state(5) to be set
00:30:20.045 [2024-06-07 16:38:46.656361] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a2840 (9): Bad file descriptor
00:30:20.045 [2024-06-07 16:38:46.656585] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:20.045 [2024-06-07 16:38:46.656595] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:20.045 [2024-06-07 16:38:46.656602] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:20.045 [2024-06-07 16:38:46.660145] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:20.045 [2024-06-07 16:38:46.669334] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:20.045 [2024-06-07 16:38:46.669929] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.045 [2024-06-07 16:38:46.669945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2840 with addr=10.0.0.2, port=4420
00:30:20.045 [2024-06-07 16:38:46.669952] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a2840 is same with the state(5) to be set
00:30:20.045 [2024-06-07 16:38:46.670171] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a2840 (9): Bad file descriptor
00:30:20.045 [2024-06-07 16:38:46.670390] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:20.046 [2024-06-07 16:38:46.670400] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:20.046 [2024-06-07 16:38:46.670412] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:20.046 [2024-06-07 16:38:46.673952] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:20.046 [2024-06-07 16:38:46.683144] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:20.046 [2024-06-07 16:38:46.683803] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.046 [2024-06-07 16:38:46.683842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2840 with addr=10.0.0.2, port=4420
00:30:20.046 [2024-06-07 16:38:46.683855] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a2840 is same with the state(5) to be set
00:30:20.046 [2024-06-07 16:38:46.684096] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a2840 (9): Bad file descriptor
00:30:20.046 [2024-06-07 16:38:46.684320] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:20.046 [2024-06-07 16:38:46.684329] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:20.046 [2024-06-07 16:38:46.684337] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:20.046 [2024-06-07 16:38:46.687893] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:20.046 [2024-06-07 16:38:46.697103] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:20.046 [2024-06-07 16:38:46.697806] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.046 [2024-06-07 16:38:46.697845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2840 with addr=10.0.0.2, port=4420
00:30:20.046 [2024-06-07 16:38:46.697856] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a2840 is same with the state(5) to be set
00:30:20.046 [2024-06-07 16:38:46.698099] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a2840 (9): Bad file descriptor
00:30:20.046 [2024-06-07 16:38:46.698323] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:20.046 [2024-06-07 16:38:46.698333] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:20.046 [2024-06-07 16:38:46.698341] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:20.046 [2024-06-07 16:38:46.701898] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:20.046 [2024-06-07 16:38:46.711099] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:20.046 [2024-06-07 16:38:46.711733] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.046 [2024-06-07 16:38:46.711772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2840 with addr=10.0.0.2, port=4420
00:30:20.046 [2024-06-07 16:38:46.711783] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a2840 is same with the state(5) to be set
00:30:20.046 [2024-06-07 16:38:46.712022] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a2840 (9): Bad file descriptor
00:30:20.046 [2024-06-07 16:38:46.712246] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:20.046 [2024-06-07 16:38:46.712256] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:20.046 [2024-06-07 16:38:46.712263] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:20.046 [2024-06-07 16:38:46.715819] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:20.046 [2024-06-07 16:38:46.725018] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:20.046 [2024-06-07 16:38:46.725722] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.046 [2024-06-07 16:38:46.725760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2840 with addr=10.0.0.2, port=4420
00:30:20.046 [2024-06-07 16:38:46.725771] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a2840 is same with the state(5) to be set
00:30:20.046 [2024-06-07 16:38:46.726011] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a2840 (9): Bad file descriptor
00:30:20.046 [2024-06-07 16:38:46.726235] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:20.046 [2024-06-07 16:38:46.726245] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:20.046 [2024-06-07 16:38:46.726252] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:20.046 [2024-06-07 16:38:46.729809] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:20.046 [2024-06-07 16:38:46.738812] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:20.046 [2024-06-07 16:38:46.739546] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.046 [2024-06-07 16:38:46.739585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2840 with addr=10.0.0.2, port=4420
00:30:20.046 [2024-06-07 16:38:46.739595] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a2840 is same with the state(5) to be set
00:30:20.046 [2024-06-07 16:38:46.739835] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a2840 (9): Bad file descriptor
00:30:20.046 [2024-06-07 16:38:46.740058] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:20.046 [2024-06-07 16:38:46.740067] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:20.046 [2024-06-07 16:38:46.740080] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:20.046 [2024-06-07 16:38:46.743638] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:20.046 [2024-06-07 16:38:46.752627] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:20.046 [2024-06-07 16:38:46.753206] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.046 [2024-06-07 16:38:46.753243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2840 with addr=10.0.0.2, port=4420
00:30:20.046 [2024-06-07 16:38:46.753253] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a2840 is same with the state(5) to be set
00:30:20.046 [2024-06-07 16:38:46.753503] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a2840 (9): Bad file descriptor
00:30:20.046 [2024-06-07 16:38:46.753728] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:20.046 [2024-06-07 16:38:46.753738] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:20.046 [2024-06-07 16:38:46.753746] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:20.046 [2024-06-07 16:38:46.757294] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:20.046 [2024-06-07 16:38:46.766579] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:20.046 [2024-06-07 16:38:46.767305] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.046 [2024-06-07 16:38:46.767343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2840 with addr=10.0.0.2, port=4420
00:30:20.046 [2024-06-07 16:38:46.767354] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a2840 is same with the state(5) to be set
00:30:20.046 [2024-06-07 16:38:46.767603] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a2840 (9): Bad file descriptor
00:30:20.046 [2024-06-07 16:38:46.767827] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:20.046 [2024-06-07 16:38:46.767836] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:20.046 [2024-06-07 16:38:46.767844] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:20.046 [2024-06-07 16:38:46.771395] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:20.046 [2024-06-07 16:38:46.780398] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:20.046 [2024-06-07 16:38:46.781124] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.046 [2024-06-07 16:38:46.781163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2840 with addr=10.0.0.2, port=4420
00:30:20.046 [2024-06-07 16:38:46.781174] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a2840 is same with the state(5) to be set
00:30:20.046 [2024-06-07 16:38:46.781423] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a2840 (9): Bad file descriptor
00:30:20.046 [2024-06-07 16:38:46.781648] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:20.046 [2024-06-07 16:38:46.781662] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:20.046 [2024-06-07 16:38:46.781670] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:20.046 [2024-06-07 16:38:46.785220] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:20.046 [2024-06-07 16:38:46.794214] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:20.046 [2024-06-07 16:38:46.794945] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.046 [2024-06-07 16:38:46.794988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2840 with addr=10.0.0.2, port=4420
00:30:20.046 [2024-06-07 16:38:46.794998] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a2840 is same with the state(5) to be set
00:30:20.046 [2024-06-07 16:38:46.795238] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a2840 (9): Bad file descriptor
00:30:20.046 [2024-06-07 16:38:46.795471] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:20.046 [2024-06-07 16:38:46.795482] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:20.046 [2024-06-07 16:38:46.795489] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:20.046 [2024-06-07 16:38:46.799039] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:20.046 [2024-06-07 16:38:46.808027] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:20.046 [2024-06-07 16:38:46.808760] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.046 [2024-06-07 16:38:46.808798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2840 with addr=10.0.0.2, port=4420
00:30:20.046 [2024-06-07 16:38:46.808809] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a2840 is same with the state(5) to be set
00:30:20.046 [2024-06-07 16:38:46.809048] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a2840 (9): Bad file descriptor
00:30:20.046 [2024-06-07 16:38:46.809272] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:20.046 [2024-06-07 16:38:46.809282] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:20.046 [2024-06-07 16:38:46.809290] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:20.047 [2024-06-07 16:38:46.812849] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:20.047 [2024-06-07 16:38:46.821837] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:20.047 [2024-06-07 16:38:46.822558] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.047 [2024-06-07 16:38:46.822596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2840 with addr=10.0.0.2, port=4420
00:30:20.047 [2024-06-07 16:38:46.822608] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a2840 is same with the state(5) to be set
00:30:20.047 [2024-06-07 16:38:46.822849] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a2840 (9): Bad file descriptor
00:30:20.047 [2024-06-07 16:38:46.823074] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:20.047 [2024-06-07 16:38:46.823083] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:20.047 [2024-06-07 16:38:46.823091] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:20.047 [2024-06-07 16:38:46.826652] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:20.047 [2024-06-07 16:38:46.835660] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:20.047 [2024-06-07 16:38:46.836341] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.047 [2024-06-07 16:38:46.836379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2840 with addr=10.0.0.2, port=4420
00:30:20.047 [2024-06-07 16:38:46.836390] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a2840 is same with the state(5) to be set
00:30:20.047 [2024-06-07 16:38:46.836641] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a2840 (9): Bad file descriptor
00:30:20.047 [2024-06-07 16:38:46.836870] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:20.047 [2024-06-07 16:38:46.836880] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:20.047 [2024-06-07 16:38:46.836887] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:20.047 [2024-06-07 16:38:46.840439] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:20.047 [2024-06-07 16:38:46.849645] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:20.047 [2024-06-07 16:38:46.850275] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.047 [2024-06-07 16:38:46.850293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2840 with addr=10.0.0.2, port=4420 00:30:20.047 [2024-06-07 16:38:46.850301] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a2840 is same with the state(5) to be set 00:30:20.047 [2024-06-07 16:38:46.850528] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a2840 (9): Bad file descriptor 00:30:20.047 [2024-06-07 16:38:46.850748] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:20.047 [2024-06-07 16:38:46.850757] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:20.047 [2024-06-07 16:38:46.850764] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:20.047 [2024-06-07 16:38:46.854306] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:20.047 [2024-06-07 16:38:46.863503] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:20.047 [2024-06-07 16:38:46.864145] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.047 [2024-06-07 16:38:46.864183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2840 with addr=10.0.0.2, port=4420 00:30:20.047 [2024-06-07 16:38:46.864194] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a2840 is same with the state(5) to be set 00:30:20.047 [2024-06-07 16:38:46.864446] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a2840 (9): Bad file descriptor 00:30:20.047 [2024-06-07 16:38:46.864670] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:20.047 [2024-06-07 16:38:46.864680] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:20.047 [2024-06-07 16:38:46.864687] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:20.047 [2024-06-07 16:38:46.868238] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:20.047 [2024-06-07 16:38:46.877443] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:20.047 [2024-06-07 16:38:46.878072] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.047 [2024-06-07 16:38:46.878091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2840 with addr=10.0.0.2, port=4420 00:30:20.047 [2024-06-07 16:38:46.878098] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a2840 is same with the state(5) to be set 00:30:20.047 [2024-06-07 16:38:46.878318] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a2840 (9): Bad file descriptor 00:30:20.047 [2024-06-07 16:38:46.878545] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:20.047 [2024-06-07 16:38:46.878555] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:20.047 [2024-06-07 16:38:46.878562] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:20.047 [2024-06-07 16:38:46.882108] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:20.047 [2024-06-07 16:38:46.891298] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:20.047 [2024-06-07 16:38:46.891920] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.047 [2024-06-07 16:38:46.891936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2840 with addr=10.0.0.2, port=4420 00:30:20.047 [2024-06-07 16:38:46.891944] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a2840 is same with the state(5) to be set 00:30:20.047 [2024-06-07 16:38:46.892163] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a2840 (9): Bad file descriptor 00:30:20.047 [2024-06-07 16:38:46.892382] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:20.047 [2024-06-07 16:38:46.892390] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:20.047 [2024-06-07 16:38:46.892397] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:20.047 [2024-06-07 16:38:46.895948] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:20.310 [2024-06-07 16:38:46.905233] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:20.310 [2024-06-07 16:38:46.905959] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.310 [2024-06-07 16:38:46.905998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2840 with addr=10.0.0.2, port=4420 00:30:20.310 [2024-06-07 16:38:46.906009] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a2840 is same with the state(5) to be set 00:30:20.310 [2024-06-07 16:38:46.906248] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a2840 (9): Bad file descriptor 00:30:20.310 [2024-06-07 16:38:46.906482] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:20.310 [2024-06-07 16:38:46.906493] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:20.310 [2024-06-07 16:38:46.906500] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:20.310 [2024-06-07 16:38:46.910048] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:20.310 [2024-06-07 16:38:46.919035] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:20.310 [2024-06-07 16:38:46.919755] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.310 [2024-06-07 16:38:46.919793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2840 with addr=10.0.0.2, port=4420 00:30:20.310 [2024-06-07 16:38:46.919804] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a2840 is same with the state(5) to be set 00:30:20.310 [2024-06-07 16:38:46.920043] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a2840 (9): Bad file descriptor 00:30:20.310 [2024-06-07 16:38:46.920267] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:20.310 [2024-06-07 16:38:46.920276] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:20.310 [2024-06-07 16:38:46.920284] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:20.310 [2024-06-07 16:38:46.923841] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:20.310 [2024-06-07 16:38:46.932840] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:20.310 [2024-06-07 16:38:46.933576] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.310 [2024-06-07 16:38:46.933615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2840 with addr=10.0.0.2, port=4420 00:30:20.310 [2024-06-07 16:38:46.933630] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a2840 is same with the state(5) to be set 00:30:20.310 [2024-06-07 16:38:46.933870] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a2840 (9): Bad file descriptor 00:30:20.310 [2024-06-07 16:38:46.934094] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:20.310 [2024-06-07 16:38:46.934103] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:20.310 [2024-06-07 16:38:46.934111] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:20.310 [2024-06-07 16:38:46.937669] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:20.310 [2024-06-07 16:38:46.946661] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:20.310 [2024-06-07 16:38:46.947292] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.310 [2024-06-07 16:38:46.947311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2840 with addr=10.0.0.2, port=4420 00:30:20.310 [2024-06-07 16:38:46.947319] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a2840 is same with the state(5) to be set 00:30:20.310 [2024-06-07 16:38:46.947546] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a2840 (9): Bad file descriptor 00:30:20.310 [2024-06-07 16:38:46.947766] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:20.310 [2024-06-07 16:38:46.947775] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:20.310 [2024-06-07 16:38:46.947782] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:20.310 [2024-06-07 16:38:46.951323] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:20.310 [2024-06-07 16:38:46.960560] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:20.310 [2024-06-07 16:38:46.961264] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.310 [2024-06-07 16:38:46.961303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2840 with addr=10.0.0.2, port=4420 00:30:20.310 [2024-06-07 16:38:46.961314] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a2840 is same with the state(5) to be set 00:30:20.310 [2024-06-07 16:38:46.961563] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a2840 (9): Bad file descriptor 00:30:20.310 [2024-06-07 16:38:46.961788] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:20.310 [2024-06-07 16:38:46.961798] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:20.310 [2024-06-07 16:38:46.961805] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:20.310 [2024-06-07 16:38:46.965353] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:20.310 [2024-06-07 16:38:46.974343] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:20.310 [2024-06-07 16:38:46.974977] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.310 [2024-06-07 16:38:46.974996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2840 with addr=10.0.0.2, port=4420 00:30:20.310 [2024-06-07 16:38:46.975004] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a2840 is same with the state(5) to be set 00:30:20.310 [2024-06-07 16:38:46.975224] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a2840 (9): Bad file descriptor 00:30:20.310 [2024-06-07 16:38:46.975451] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:20.310 [2024-06-07 16:38:46.975465] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:20.310 [2024-06-07 16:38:46.975472] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:20.310 [2024-06-07 16:38:46.979015] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:20.310 [2024-06-07 16:38:46.988215] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:20.310 [2024-06-07 16:38:46.988822] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.310 [2024-06-07 16:38:46.988839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2840 with addr=10.0.0.2, port=4420 00:30:20.310 [2024-06-07 16:38:46.988847] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a2840 is same with the state(5) to be set 00:30:20.310 [2024-06-07 16:38:46.989066] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a2840 (9): Bad file descriptor 00:30:20.310 [2024-06-07 16:38:46.989285] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:20.310 [2024-06-07 16:38:46.989293] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:20.310 [2024-06-07 16:38:46.989301] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:20.310 [2024-06-07 16:38:46.992851] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:20.310 [2024-06-07 16:38:47.002037] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:20.310 [2024-06-07 16:38:47.002655] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.310 [2024-06-07 16:38:47.002671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2840 with addr=10.0.0.2, port=4420 00:30:20.310 [2024-06-07 16:38:47.002679] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a2840 is same with the state(5) to be set 00:30:20.310 [2024-06-07 16:38:47.002898] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a2840 (9): Bad file descriptor 00:30:20.310 [2024-06-07 16:38:47.003117] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:20.310 [2024-06-07 16:38:47.003126] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:20.310 [2024-06-07 16:38:47.003133] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:20.310 [2024-06-07 16:38:47.006675] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:20.310 [2024-06-07 16:38:47.015861] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:20.310 [2024-06-07 16:38:47.016599] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.310 [2024-06-07 16:38:47.016638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2840 with addr=10.0.0.2, port=4420 00:30:20.311 [2024-06-07 16:38:47.016649] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a2840 is same with the state(5) to be set 00:30:20.311 [2024-06-07 16:38:47.016888] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a2840 (9): Bad file descriptor 00:30:20.311 [2024-06-07 16:38:47.017112] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:20.311 [2024-06-07 16:38:47.017122] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:20.311 [2024-06-07 16:38:47.017129] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:20.311 [2024-06-07 16:38:47.020690] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:20.311 [2024-06-07 16:38:47.029692] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:20.311 [2024-06-07 16:38:47.030424] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.311 [2024-06-07 16:38:47.030463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2840 with addr=10.0.0.2, port=4420 00:30:20.311 [2024-06-07 16:38:47.030474] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a2840 is same with the state(5) to be set 00:30:20.311 [2024-06-07 16:38:47.030713] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a2840 (9): Bad file descriptor 00:30:20.311 [2024-06-07 16:38:47.030937] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:20.311 [2024-06-07 16:38:47.030947] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:20.311 [2024-06-07 16:38:47.030955] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:20.311 [2024-06-07 16:38:47.034511] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:20.311 [2024-06-07 16:38:47.043501] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:20.311 [2024-06-07 16:38:47.044223] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.311 [2024-06-07 16:38:47.044261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2840 with addr=10.0.0.2, port=4420 00:30:20.311 [2024-06-07 16:38:47.044272] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a2840 is same with the state(5) to be set 00:30:20.311 [2024-06-07 16:38:47.044521] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a2840 (9): Bad file descriptor 00:30:20.311 [2024-06-07 16:38:47.044747] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:20.311 [2024-06-07 16:38:47.044756] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:20.311 [2024-06-07 16:38:47.044763] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:20.311 [2024-06-07 16:38:47.048311] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:20.311 [2024-06-07 16:38:47.057303] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:20.311 [2024-06-07 16:38:47.058017] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.311 [2024-06-07 16:38:47.058055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2840 with addr=10.0.0.2, port=4420 00:30:20.311 [2024-06-07 16:38:47.058065] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a2840 is same with the state(5) to be set 00:30:20.311 [2024-06-07 16:38:47.058304] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a2840 (9): Bad file descriptor 00:30:20.311 [2024-06-07 16:38:47.058538] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:20.311 [2024-06-07 16:38:47.058549] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:20.311 [2024-06-07 16:38:47.058556] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:20.311 [2024-06-07 16:38:47.062103] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:20.311 [2024-06-07 16:38:47.071096] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:20.311 [2024-06-07 16:38:47.071821] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.311 [2024-06-07 16:38:47.071860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2840 with addr=10.0.0.2, port=4420 00:30:20.311 [2024-06-07 16:38:47.071870] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a2840 is same with the state(5) to be set 00:30:20.311 [2024-06-07 16:38:47.072114] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a2840 (9): Bad file descriptor 00:30:20.311 [2024-06-07 16:38:47.072338] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:20.311 [2024-06-07 16:38:47.072347] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:20.311 [2024-06-07 16:38:47.072355] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:20.311 [2024-06-07 16:38:47.075916] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:20.311 [2024-06-07 16:38:47.084906] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:20.311 [2024-06-07 16:38:47.085642] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.311 [2024-06-07 16:38:47.085681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2840 with addr=10.0.0.2, port=4420 00:30:20.311 [2024-06-07 16:38:47.085692] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a2840 is same with the state(5) to be set 00:30:20.311 [2024-06-07 16:38:47.085931] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a2840 (9): Bad file descriptor 00:30:20.311 [2024-06-07 16:38:47.086155] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:20.311 [2024-06-07 16:38:47.086164] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:20.311 [2024-06-07 16:38:47.086172] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:20.311 [2024-06-07 16:38:47.089730] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:20.311 [2024-06-07 16:38:47.098722] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:20.311 [2024-06-07 16:38:47.099416] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.311 [2024-06-07 16:38:47.099453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2840 with addr=10.0.0.2, port=4420 00:30:20.311 [2024-06-07 16:38:47.099465] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a2840 is same with the state(5) to be set 00:30:20.311 [2024-06-07 16:38:47.099706] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a2840 (9): Bad file descriptor 00:30:20.311 [2024-06-07 16:38:47.099930] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:20.311 [2024-06-07 16:38:47.099938] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:20.311 [2024-06-07 16:38:47.099946] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:20.311 [2024-06-07 16:38:47.103497] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:20.311 [2024-06-07 16:38:47.112692] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:20.311 [2024-06-07 16:38:47.113448] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.311 [2024-06-07 16:38:47.113487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2840 with addr=10.0.0.2, port=4420 00:30:20.311 [2024-06-07 16:38:47.113499] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a2840 is same with the state(5) to be set 00:30:20.311 [2024-06-07 16:38:47.113740] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a2840 (9): Bad file descriptor 00:30:20.311 [2024-06-07 16:38:47.113964] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:20.311 [2024-06-07 16:38:47.113974] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:20.311 [2024-06-07 16:38:47.113986] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:20.311 [2024-06-07 16:38:47.117546] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:20.311 [2024-06-07 16:38:47.126535] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:20.311 [2024-06-07 16:38:47.127224] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.311 [2024-06-07 16:38:47.127261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2840 with addr=10.0.0.2, port=4420 00:30:20.311 [2024-06-07 16:38:47.127272] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a2840 is same with the state(5) to be set 00:30:20.311 [2024-06-07 16:38:47.127521] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a2840 (9): Bad file descriptor 00:30:20.311 [2024-06-07 16:38:47.127747] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:20.311 [2024-06-07 16:38:47.127757] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:20.311 [2024-06-07 16:38:47.127765] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:20.311 [2024-06-07 16:38:47.131329] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:20.311 [2024-06-07 16:38:47.140322] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:20.311 [2024-06-07 16:38:47.140910] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.311 [2024-06-07 16:38:47.140929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2840 with addr=10.0.0.2, port=4420 00:30:20.311 [2024-06-07 16:38:47.140937] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a2840 is same with the state(5) to be set 00:30:20.311 [2024-06-07 16:38:47.141157] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a2840 (9): Bad file descriptor 00:30:20.311 [2024-06-07 16:38:47.141377] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:20.311 [2024-06-07 16:38:47.141385] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:20.311 [2024-06-07 16:38:47.141392] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:20.311 [2024-06-07 16:38:47.144945] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:20.311 [2024-06-07 16:38:47.154139] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:20.311 [2024-06-07 16:38:47.154756] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.311 [2024-06-07 16:38:47.154774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2840 with addr=10.0.0.2, port=4420 00:30:20.311 [2024-06-07 16:38:47.154781] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a2840 is same with the state(5) to be set 00:30:20.312 [2024-06-07 16:38:47.155001] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a2840 (9): Bad file descriptor 00:30:20.312 [2024-06-07 16:38:47.155220] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:20.312 [2024-06-07 16:38:47.155228] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:20.312 [2024-06-07 16:38:47.155235] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:20.312 [2024-06-07 16:38:47.158785] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:20.574 [2024-06-07 16:38:47.167992] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:20.574 [2024-06-07 16:38:47.168609] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.574 [2024-06-07 16:38:47.168652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2840 with addr=10.0.0.2, port=4420
00:30:20.574 [2024-06-07 16:38:47.168664] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a2840 is same with the state(5) to be set
00:30:20.574 [2024-06-07 16:38:47.168903] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a2840 (9): Bad file descriptor
00:30:20.574 [2024-06-07 16:38:47.169127] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:20.574 [2024-06-07 16:38:47.169136] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:20.574 [2024-06-07 16:38:47.169143] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:20.574 [2024-06-07 16:38:47.172701] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:20.574 [2024-06-07 16:38:47.181899] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:20.574 [2024-06-07 16:38:47.182624] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.574 [2024-06-07 16:38:47.182663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2840 with addr=10.0.0.2, port=4420
00:30:20.574 [2024-06-07 16:38:47.182673] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a2840 is same with the state(5) to be set
00:30:20.574 [2024-06-07 16:38:47.182912] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a2840 (9): Bad file descriptor
00:30:20.574 [2024-06-07 16:38:47.183137] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:20.574 [2024-06-07 16:38:47.183146] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:20.574 [2024-06-07 16:38:47.183154] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:20.574 [2024-06-07 16:38:47.186713] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:20.574 [2024-06-07 16:38:47.195715] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:20.574 [2024-06-07 16:38:47.196408] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.574 [2024-06-07 16:38:47.196446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2840 with addr=10.0.0.2, port=4420
00:30:20.574 [2024-06-07 16:38:47.196457] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a2840 is same with the state(5) to be set
00:30:20.574 [2024-06-07 16:38:47.196696] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a2840 (9): Bad file descriptor
00:30:20.574 [2024-06-07 16:38:47.196920] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:20.574 [2024-06-07 16:38:47.196929] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:20.574 [2024-06-07 16:38:47.196937] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:20.574 [2024-06-07 16:38:47.200489] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:20.574 [2024-06-07 16:38:47.209685] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:20.574 [2024-06-07 16:38:47.210398] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.574 [2024-06-07 16:38:47.210444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2840 with addr=10.0.0.2, port=4420
00:30:20.574 [2024-06-07 16:38:47.210455] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a2840 is same with the state(5) to be set
00:30:20.574 [2024-06-07 16:38:47.210694] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a2840 (9): Bad file descriptor
00:30:20.574 [2024-06-07 16:38:47.210922] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:20.574 [2024-06-07 16:38:47.210932] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:20.574 [2024-06-07 16:38:47.210940] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:20.574 [2024-06-07 16:38:47.214497] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:20.574 [2024-06-07 16:38:47.223548] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:20.574 [2024-06-07 16:38:47.224257] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.574 [2024-06-07 16:38:47.224295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2840 with addr=10.0.0.2, port=4420
00:30:20.574 [2024-06-07 16:38:47.224306] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a2840 is same with the state(5) to be set
00:30:20.574 [2024-06-07 16:38:47.224555] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a2840 (9): Bad file descriptor
00:30:20.574 [2024-06-07 16:38:47.224780] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:20.574 [2024-06-07 16:38:47.224790] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:20.574 [2024-06-07 16:38:47.224798] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:20.574 [2024-06-07 16:38:47.228350] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:20.574 [2024-06-07 16:38:47.237368] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:20.574 [2024-06-07 16:38:47.238084] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.574 [2024-06-07 16:38:47.238123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2840 with addr=10.0.0.2, port=4420
00:30:20.574 [2024-06-07 16:38:47.238134] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a2840 is same with the state(5) to be set
00:30:20.574 [2024-06-07 16:38:47.238373] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a2840 (9): Bad file descriptor
00:30:20.574 [2024-06-07 16:38:47.238607] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:20.574 [2024-06-07 16:38:47.238618] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:20.574 [2024-06-07 16:38:47.238626] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:20.574 [2024-06-07 16:38:47.242175] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:20.574 [2024-06-07 16:38:47.251179] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:20.574 [2024-06-07 16:38:47.251668] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.574 [2024-06-07 16:38:47.251687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2840 with addr=10.0.0.2, port=4420
00:30:20.574 [2024-06-07 16:38:47.251696] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a2840 is same with the state(5) to be set
00:30:20.574 [2024-06-07 16:38:47.251916] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a2840 (9): Bad file descriptor
00:30:20.574 [2024-06-07 16:38:47.252136] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:20.574 [2024-06-07 16:38:47.252146] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:20.574 [2024-06-07 16:38:47.252153] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:20.574 [2024-06-07 16:38:47.255717] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:20.574 [2024-06-07 16:38:47.265138] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:20.574 [2024-06-07 16:38:47.265770] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.574 [2024-06-07 16:38:47.265788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2840 with addr=10.0.0.2, port=4420
00:30:20.574 [2024-06-07 16:38:47.265795] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a2840 is same with the state(5) to be set
00:30:20.574 [2024-06-07 16:38:47.266014] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a2840 (9): Bad file descriptor
00:30:20.574 [2024-06-07 16:38:47.266234] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:20.574 [2024-06-07 16:38:47.266241] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:20.574 [2024-06-07 16:38:47.266248] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:20.574 [2024-06-07 16:38:47.269799] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:20.574 [2024-06-07 16:38:47.279010] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:20.574 [2024-06-07 16:38:47.279689] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.574 [2024-06-07 16:38:47.279728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2840 with addr=10.0.0.2, port=4420
00:30:20.574 [2024-06-07 16:38:47.279739] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a2840 is same with the state(5) to be set
00:30:20.574 [2024-06-07 16:38:47.279979] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a2840 (9): Bad file descriptor
00:30:20.574 [2024-06-07 16:38:47.280202] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:20.574 [2024-06-07 16:38:47.280211] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:20.575 [2024-06-07 16:38:47.280219] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:20.575 [2024-06-07 16:38:47.283780] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:20.575 [2024-06-07 16:38:47.292979] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:20.575 [2024-06-07 16:38:47.293699] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.575 [2024-06-07 16:38:47.293738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2840 with addr=10.0.0.2, port=4420
00:30:20.575 [2024-06-07 16:38:47.293748] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a2840 is same with the state(5) to be set
00:30:20.575 [2024-06-07 16:38:47.293988] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a2840 (9): Bad file descriptor
00:30:20.575 [2024-06-07 16:38:47.294211] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:20.575 [2024-06-07 16:38:47.294221] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:20.575 [2024-06-07 16:38:47.294229] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:20.575 [2024-06-07 16:38:47.297786] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:20.575 [2024-06-07 16:38:47.306782] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:20.575 [2024-06-07 16:38:47.307506] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.575 [2024-06-07 16:38:47.307544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2840 with addr=10.0.0.2, port=4420
00:30:20.575 [2024-06-07 16:38:47.307561] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a2840 is same with the state(5) to be set
00:30:20.575 [2024-06-07 16:38:47.307802] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a2840 (9): Bad file descriptor
00:30:20.575 [2024-06-07 16:38:47.308025] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:20.575 [2024-06-07 16:38:47.308034] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:20.575 [2024-06-07 16:38:47.308042] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:20.575 [2024-06-07 16:38:47.311604] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:20.575 [2024-06-07 16:38:47.320600] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:20.575 [2024-06-07 16:38:47.321325] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.575 [2024-06-07 16:38:47.321363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2840 with addr=10.0.0.2, port=4420
00:30:20.575 [2024-06-07 16:38:47.321374] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a2840 is same with the state(5) to be set
00:30:20.575 [2024-06-07 16:38:47.321622] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a2840 (9): Bad file descriptor
00:30:20.575 [2024-06-07 16:38:47.321847] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:20.575 [2024-06-07 16:38:47.321856] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:20.575 [2024-06-07 16:38:47.321864] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:20.575 [2024-06-07 16:38:47.325411] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:20.575 [2024-06-07 16:38:47.334417] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:20.575 [2024-06-07 16:38:47.335087] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.575 [2024-06-07 16:38:47.335126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2840 with addr=10.0.0.2, port=4420
00:30:20.575 [2024-06-07 16:38:47.335137] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a2840 is same with the state(5) to be set
00:30:20.575 [2024-06-07 16:38:47.335376] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a2840 (9): Bad file descriptor
00:30:20.575 [2024-06-07 16:38:47.335608] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:20.575 [2024-06-07 16:38:47.335618] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:20.575 [2024-06-07 16:38:47.335626] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:20.575 [2024-06-07 16:38:47.339176] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:20.575 [2024-06-07 16:38:47.348382] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:20.575 [2024-06-07 16:38:47.348967] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.575 [2024-06-07 16:38:47.349005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2840 with addr=10.0.0.2, port=4420
00:30:20.575 [2024-06-07 16:38:47.349016] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a2840 is same with the state(5) to be set
00:30:20.575 [2024-06-07 16:38:47.349255] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a2840 (9): Bad file descriptor
00:30:20.575 [2024-06-07 16:38:47.349489] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:20.575 [2024-06-07 16:38:47.349503] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:20.575 [2024-06-07 16:38:47.349511] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:20.575 [2024-06-07 16:38:47.353061] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:20.575 [2024-06-07 16:38:47.362259] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:20.575 [2024-06-07 16:38:47.362972] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.575 [2024-06-07 16:38:47.363010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2840 with addr=10.0.0.2, port=4420
00:30:20.575 [2024-06-07 16:38:47.363021] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a2840 is same with the state(5) to be set
00:30:20.575 [2024-06-07 16:38:47.363261] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a2840 (9): Bad file descriptor
00:30:20.575 [2024-06-07 16:38:47.363494] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:20.575 [2024-06-07 16:38:47.363504] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:20.575 [2024-06-07 16:38:47.363512] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:20.575 [2024-06-07 16:38:47.367065] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:20.575 [2024-06-07 16:38:47.376057] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:20.575 [2024-06-07 16:38:47.376721] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.575 [2024-06-07 16:38:47.376759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2840 with addr=10.0.0.2, port=4420
00:30:20.575 [2024-06-07 16:38:47.376770] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a2840 is same with the state(5) to be set
00:30:20.575 [2024-06-07 16:38:47.377009] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a2840 (9): Bad file descriptor
00:30:20.575 [2024-06-07 16:38:47.377233] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:20.575 [2024-06-07 16:38:47.377242] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:20.575 [2024-06-07 16:38:47.377250] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:20.575 [2024-06-07 16:38:47.380811] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:20.575 [2024-06-07 16:38:47.390027] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:20.575 [2024-06-07 16:38:47.390766] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.575 [2024-06-07 16:38:47.390805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2840 with addr=10.0.0.2, port=4420
00:30:20.575 [2024-06-07 16:38:47.390816] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a2840 is same with the state(5) to be set
00:30:20.575 [2024-06-07 16:38:47.391055] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a2840 (9): Bad file descriptor
00:30:20.575 [2024-06-07 16:38:47.391279] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:20.575 [2024-06-07 16:38:47.391288] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:20.575 [2024-06-07 16:38:47.391296] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:20.575 [2024-06-07 16:38:47.394854] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:20.575 [2024-06-07 16:38:47.403854] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:20.575 [2024-06-07 16:38:47.404504] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.575 [2024-06-07 16:38:47.404543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2840 with addr=10.0.0.2, port=4420
00:30:20.575 [2024-06-07 16:38:47.404555] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a2840 is same with the state(5) to be set
00:30:20.575 [2024-06-07 16:38:47.404798] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a2840 (9): Bad file descriptor
00:30:20.575 [2024-06-07 16:38:47.405022] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:20.575 [2024-06-07 16:38:47.405032] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:20.575 [2024-06-07 16:38:47.405040] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:20.575 [2024-06-07 16:38:47.408597] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:20.575 [2024-06-07 16:38:47.417799] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:20.575 [2024-06-07 16:38:47.418493] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.575 [2024-06-07 16:38:47.418532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2840 with addr=10.0.0.2, port=4420
00:30:20.575 [2024-06-07 16:38:47.418544] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a2840 is same with the state(5) to be set
00:30:20.575 [2024-06-07 16:38:47.418787] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a2840 (9): Bad file descriptor
00:30:20.575 [2024-06-07 16:38:47.419010] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:20.575 [2024-06-07 16:38:47.419020] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:20.575 [2024-06-07 16:38:47.419027] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:20.575 [2024-06-07 16:38:47.422588] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:20.838 [2024-06-07 16:38:47.431804] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:20.838 [2024-06-07 16:38:47.432502] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.838 [2024-06-07 16:38:47.432541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2840 with addr=10.0.0.2, port=4420
00:30:20.838 [2024-06-07 16:38:47.432553] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a2840 is same with the state(5) to be set
00:30:20.838 [2024-06-07 16:38:47.432795] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a2840 (9): Bad file descriptor
00:30:20.838 [2024-06-07 16:38:47.433019] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:20.838 [2024-06-07 16:38:47.433028] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:20.838 [2024-06-07 16:38:47.433036] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:20.838 [2024-06-07 16:38:47.436598] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:20.838 [2024-06-07 16:38:47.445797] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:20.838 [2024-06-07 16:38:47.446501] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.838 [2024-06-07 16:38:47.446539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2840 with addr=10.0.0.2, port=4420
00:30:20.838 [2024-06-07 16:38:47.446552] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a2840 is same with the state(5) to be set
00:30:20.838 [2024-06-07 16:38:47.446797] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a2840 (9): Bad file descriptor
00:30:20.838 [2024-06-07 16:38:47.447021] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:20.838 [2024-06-07 16:38:47.447031] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:20.838 [2024-06-07 16:38:47.447038] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:20.838 [2024-06-07 16:38:47.450599] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:20.838 [2024-06-07 16:38:47.459594] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:20.838 [2024-06-07 16:38:47.460298] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.838 [2024-06-07 16:38:47.460336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2840 with addr=10.0.0.2, port=4420
00:30:20.838 [2024-06-07 16:38:47.460347] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a2840 is same with the state(5) to be set
00:30:20.838 [2024-06-07 16:38:47.460596] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a2840 (9): Bad file descriptor
00:30:20.838 [2024-06-07 16:38:47.460821] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:20.838 [2024-06-07 16:38:47.460830] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:20.838 [2024-06-07 16:38:47.460838] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:20.838 [2024-06-07 16:38:47.464388] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:20.838 [2024-06-07 16:38:47.473381] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:20.838 [2024-06-07 16:38:47.474115] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.838 [2024-06-07 16:38:47.474154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2840 with addr=10.0.0.2, port=4420
00:30:20.838 [2024-06-07 16:38:47.474164] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a2840 is same with the state(5) to be set
00:30:20.838 [2024-06-07 16:38:47.474412] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a2840 (9): Bad file descriptor
00:30:20.838 [2024-06-07 16:38:47.474637] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:20.838 [2024-06-07 16:38:47.474646] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:20.838 [2024-06-07 16:38:47.474654] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:20.838 [2024-06-07 16:38:47.478202] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:20.838 [2024-06-07 16:38:47.487198] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:20.838 [2024-06-07 16:38:47.487883] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.838 [2024-06-07 16:38:47.487921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2840 with addr=10.0.0.2, port=4420
00:30:20.838 [2024-06-07 16:38:47.487932] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a2840 is same with the state(5) to be set
00:30:20.838 [2024-06-07 16:38:47.488171] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a2840 (9): Bad file descriptor
00:30:20.838 [2024-06-07 16:38:47.488395] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:20.838 [2024-06-07 16:38:47.488417] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:20.838 [2024-06-07 16:38:47.488429] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:20.838 [2024-06-07 16:38:47.491978] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:20.838 [2024-06-07 16:38:47.501173] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:20.838 [2024-06-07 16:38:47.501846] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.838 [2024-06-07 16:38:47.501884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2840 with addr=10.0.0.2, port=4420
00:30:20.838 [2024-06-07 16:38:47.501895] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a2840 is same with the state(5) to be set
00:30:20.838 [2024-06-07 16:38:47.502134] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a2840 (9): Bad file descriptor
00:30:20.838 [2024-06-07 16:38:47.502358] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:20.838 [2024-06-07 16:38:47.502367] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:20.838 [2024-06-07 16:38:47.502375] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:20.838 [2024-06-07 16:38:47.505931] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:20.838 [2024-06-07 16:38:47.515323] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:20.838 [2024-06-07 16:38:47.515963] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.838 [2024-06-07 16:38:47.515983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2840 with addr=10.0.0.2, port=4420
00:30:20.838 [2024-06-07 16:38:47.515991] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a2840 is same with the state(5) to be set
00:30:20.838 [2024-06-07 16:38:47.516212] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a2840 (9): Bad file descriptor
00:30:20.838 [2024-06-07 16:38:47.516436] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:20.838 [2024-06-07 16:38:47.516446] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:20.838 [2024-06-07 16:38:47.516453] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:20.838 [2024-06-07 16:38:47.519998] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:20.838 [2024-06-07 16:38:47.529199] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:20.838 [2024-06-07 16:38:47.529787] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.838 [2024-06-07 16:38:47.529803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2840 with addr=10.0.0.2, port=4420
00:30:20.838 [2024-06-07 16:38:47.529811] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a2840 is same with the state(5) to be set
00:30:20.838 [2024-06-07 16:38:47.530030] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a2840 (9): Bad file descriptor
00:30:20.838 [2024-06-07 16:38:47.530249] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:20.838 [2024-06-07 16:38:47.530257] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:20.838 [2024-06-07 16:38:47.530264] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:20.838 [2024-06-07 16:38:47.533818] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:20.838 [2024-06-07 16:38:47.543012] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:20.838 [2024-06-07 16:38:47.543600] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.838 [2024-06-07 16:38:47.543621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2840 with addr=10.0.0.2, port=4420
00:30:20.838 [2024-06-07 16:38:47.543629] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a2840 is same with the state(5) to be set
00:30:20.838 [2024-06-07 16:38:47.543848] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a2840 (9): Bad file descriptor
00:30:20.838 [2024-06-07 16:38:47.544068] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:20.838 [2024-06-07 16:38:47.544076] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:20.838 [2024-06-07 16:38:47.544083] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:20.838 [2024-06-07 16:38:47.547630] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:20.839 [2024-06-07 16:38:47.556821] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:20.839 [2024-06-07 16:38:47.557482] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.839 [2024-06-07 16:38:47.557521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2840 with addr=10.0.0.2, port=4420
00:30:20.839 [2024-06-07 16:38:47.557533] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a2840 is same with the state(5) to be set
00:30:20.839 [2024-06-07 16:38:47.557774] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a2840 (9): Bad file descriptor
00:30:20.839 [2024-06-07 16:38:47.557998] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:20.839 [2024-06-07 16:38:47.558007] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:20.839 [2024-06-07 16:38:47.558015] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:20.839 [2024-06-07 16:38:47.561574] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:20.839 [2024-06-07 16:38:47.570772] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:20.839 [2024-06-07 16:38:47.571495] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.839 [2024-06-07 16:38:47.571533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2840 with addr=10.0.0.2, port=4420 00:30:20.839 [2024-06-07 16:38:47.571545] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a2840 is same with the state(5) to be set 00:30:20.839 [2024-06-07 16:38:47.571786] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a2840 (9): Bad file descriptor 00:30:20.839 [2024-06-07 16:38:47.572010] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:20.839 [2024-06-07 16:38:47.572019] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:20.839 [2024-06-07 16:38:47.572027] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:20.839 [2024-06-07 16:38:47.575585] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:20.839 [2024-06-07 16:38:47.584570] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:20.839 [2024-06-07 16:38:47.585282] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.839 [2024-06-07 16:38:47.585320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2840 with addr=10.0.0.2, port=4420 00:30:20.839 [2024-06-07 16:38:47.585331] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a2840 is same with the state(5) to be set 00:30:20.839 [2024-06-07 16:38:47.585580] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a2840 (9): Bad file descriptor 00:30:20.839 [2024-06-07 16:38:47.585809] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:20.839 [2024-06-07 16:38:47.585819] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:20.839 [2024-06-07 16:38:47.585827] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:20.839 [2024-06-07 16:38:47.589374] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:20.839 [2024-06-07 16:38:47.598373] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:20.839 [2024-06-07 16:38:47.599077] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.839 [2024-06-07 16:38:47.599116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2840 with addr=10.0.0.2, port=4420 00:30:20.839 [2024-06-07 16:38:47.599128] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a2840 is same with the state(5) to be set 00:30:20.839 [2024-06-07 16:38:47.599368] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a2840 (9): Bad file descriptor 00:30:20.839 [2024-06-07 16:38:47.599600] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:20.839 [2024-06-07 16:38:47.599610] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:20.839 [2024-06-07 16:38:47.599617] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:20.839 [2024-06-07 16:38:47.603169] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:20.839 [2024-06-07 16:38:47.612171] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:20.839 [2024-06-07 16:38:47.612864] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.839 [2024-06-07 16:38:47.612902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2840 with addr=10.0.0.2, port=4420 00:30:20.839 [2024-06-07 16:38:47.612913] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a2840 is same with the state(5) to be set 00:30:20.839 [2024-06-07 16:38:47.613152] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a2840 (9): Bad file descriptor 00:30:20.839 [2024-06-07 16:38:47.613376] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:20.839 [2024-06-07 16:38:47.613386] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:20.839 [2024-06-07 16:38:47.613393] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:20.839 [2024-06-07 16:38:47.616949] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:20.839 [2024-06-07 16:38:47.626155] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:20.839 [2024-06-07 16:38:47.626794] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.839 [2024-06-07 16:38:47.626814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2840 with addr=10.0.0.2, port=4420 00:30:20.839 [2024-06-07 16:38:47.626822] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a2840 is same with the state(5) to be set 00:30:20.839 [2024-06-07 16:38:47.627042] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a2840 (9): Bad file descriptor 00:30:20.839 [2024-06-07 16:38:47.627262] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:20.839 [2024-06-07 16:38:47.627271] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:20.839 [2024-06-07 16:38:47.627278] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:20.839 [2024-06-07 16:38:47.630838] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:20.839 [2024-06-07 16:38:47.640039] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:20.839 [2024-06-07 16:38:47.640542] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.839 [2024-06-07 16:38:47.640559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2840 with addr=10.0.0.2, port=4420 00:30:20.839 [2024-06-07 16:38:47.640567] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a2840 is same with the state(5) to be set 00:30:20.839 [2024-06-07 16:38:47.640786] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a2840 (9): Bad file descriptor 00:30:20.839 [2024-06-07 16:38:47.641006] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:20.839 [2024-06-07 16:38:47.641014] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:20.839 [2024-06-07 16:38:47.641021] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:20.839 [2024-06-07 16:38:47.644567] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:20.839 [2024-06-07 16:38:47.653970] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:20.839 [2024-06-07 16:38:47.654684] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.839 [2024-06-07 16:38:47.654723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2840 with addr=10.0.0.2, port=4420 00:30:20.839 [2024-06-07 16:38:47.654735] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a2840 is same with the state(5) to be set 00:30:20.839 [2024-06-07 16:38:47.654976] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a2840 (9): Bad file descriptor 00:30:20.839 [2024-06-07 16:38:47.655200] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:20.839 [2024-06-07 16:38:47.655209] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:20.839 [2024-06-07 16:38:47.655217] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:20.839 [2024-06-07 16:38:47.658775] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:20.839 [2024-06-07 16:38:47.667768] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:20.839 [2024-06-07 16:38:47.668421] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.839 [2024-06-07 16:38:47.668459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2840 with addr=10.0.0.2, port=4420 00:30:20.839 [2024-06-07 16:38:47.668472] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a2840 is same with the state(5) to be set 00:30:20.839 [2024-06-07 16:38:47.668714] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a2840 (9): Bad file descriptor 00:30:20.839 [2024-06-07 16:38:47.668938] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:20.839 [2024-06-07 16:38:47.668947] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:20.839 [2024-06-07 16:38:47.668955] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:20.839 [2024-06-07 16:38:47.672512] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:20.839 [2024-06-07 16:38:47.681708] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:20.839 [2024-06-07 16:38:47.682391] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.839 [2024-06-07 16:38:47.682436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2840 with addr=10.0.0.2, port=4420 00:30:20.839 [2024-06-07 16:38:47.682451] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a2840 is same with the state(5) to be set 00:30:20.839 [2024-06-07 16:38:47.682690] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a2840 (9): Bad file descriptor 00:30:20.839 [2024-06-07 16:38:47.682914] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:20.839 [2024-06-07 16:38:47.682923] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:20.839 [2024-06-07 16:38:47.682931] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:20.839 [2024-06-07 16:38:47.686483] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
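The repeated `connect() failed, errno = 111` records above are ECONNREFUSED: the bdevperf reconnect loop keeps dialing 10.0.0.2:4420 while no NVMe-oF TCP target is listening, so each attempt is refused and the controller reset fails again. A minimal sketch of that failure mode (the helper name `try_connect` and the localhost target are illustrative, not from the test scripts; 4420 is the port seen in the log):

```python
import errno
import socket

def try_connect(host="127.0.0.1", port=4420, timeout=0.5):
    """Attempt one TCP connect, returning 0 on success or the errno on failure.

    When nothing is listening on the port, the peer answers with RST and
    connect() fails with ECONNREFUSED (errno 111 on Linux), which is the
    same condition posix_sock_create reports in the log above.
    """
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.settimeout(timeout)
    try:
        s.connect((host, port))
        return 0
    except OSError as e:
        return e.errno
    finally:
        s.close()
```

On Linux this returns `errno.ECONNREFUSED` (111) as long as no listener holds the port, mirroring why every reset attempt in the loop above ends in `controller reinitialization failed`.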
00:30:20.839 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 3292406 Killed "${NVMF_APP[@]}" "$@" 00:30:20.839 16:38:47 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:30:20.839 16:38:47 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:30:20.840 16:38:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:30:20.840 16:38:47 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@723 -- # xtrace_disable 00:30:20.840 16:38:47 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:21.102 [2024-06-07 16:38:47.695684] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:21.102 [2024-06-07 16:38:47.696323] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.102 [2024-06-07 16:38:47.696362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2840 with addr=10.0.0.2, port=4420 00:30:21.102 [2024-06-07 16:38:47.696374] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a2840 is same with the state(5) to be set 00:30:21.102 [2024-06-07 16:38:47.696626] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a2840 (9): Bad file descriptor 00:30:21.102 16:38:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=3293995 00:30:21.102 [2024-06-07 16:38:47.696851] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:21.102 [2024-06-07 16:38:47.696861] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:21.102 [2024-06-07 16:38:47.696868] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:30:21.102 16:38:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 3293995 00:30:21.102 16:38:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:30:21.102 16:38:47 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@830 -- # '[' -z 3293995 ']' 00:30:21.102 16:38:47 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:21.102 16:38:47 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@835 -- # local max_retries=100 00:30:21.102 16:38:47 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:21.102 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:21.102 16:38:47 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@839 -- # xtrace_disable 00:30:21.102 16:38:47 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:21.102 [2024-06-07 16:38:47.700424] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:21.102 [2024-06-07 16:38:47.709636] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:21.102 [2024-06-07 16:38:47.710275] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.102 [2024-06-07 16:38:47.710295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2840 with addr=10.0.0.2, port=4420 00:30:21.102 [2024-06-07 16:38:47.710311] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a2840 is same with the state(5) to be set 00:30:21.102 [2024-06-07 16:38:47.710540] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a2840 (9): Bad file descriptor 00:30:21.102 [2024-06-07 16:38:47.710762] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:21.102 [2024-06-07 16:38:47.710773] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:21.102 [2024-06-07 16:38:47.710781] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:21.102 [2024-06-07 16:38:47.714326] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:21.102 [2024-06-07 16:38:47.723534] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:21.102 [2024-06-07 16:38:47.724139] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.102 [2024-06-07 16:38:47.724178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2840 with addr=10.0.0.2, port=4420 00:30:21.102 [2024-06-07 16:38:47.724189] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a2840 is same with the state(5) to be set 00:30:21.102 [2024-06-07 16:38:47.724436] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a2840 (9): Bad file descriptor 00:30:21.102 [2024-06-07 16:38:47.724662] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:21.102 [2024-06-07 16:38:47.724671] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:21.102 [2024-06-07 16:38:47.724679] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:21.102 [2024-06-07 16:38:47.728231] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:21.102 [2024-06-07 16:38:47.737447] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:21.102 [2024-06-07 16:38:47.738181] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.102 [2024-06-07 16:38:47.738220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2840 with addr=10.0.0.2, port=4420 00:30:21.102 [2024-06-07 16:38:47.738231] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a2840 is same with the state(5) to be set 00:30:21.102 [2024-06-07 16:38:47.738477] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a2840 (9): Bad file descriptor 00:30:21.102 [2024-06-07 16:38:47.738702] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:21.102 [2024-06-07 16:38:47.738712] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:21.102 [2024-06-07 16:38:47.738720] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:21.102 [2024-06-07 16:38:47.742269] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:21.102 [2024-06-07 16:38:47.745458] Starting SPDK v24.09-pre git sha1 5a57befde / DPDK 24.03.0 initialization... 
00:30:21.102 [2024-06-07 16:38:47.745503] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:21.102 [2024-06-07 16:38:47.751263] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:21.102 [2024-06-07 16:38:47.751913] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.102 [2024-06-07 16:38:47.751933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2840 with addr=10.0.0.2, port=4420 00:30:21.102 [2024-06-07 16:38:47.751941] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a2840 is same with the state(5) to be set 00:30:21.102 [2024-06-07 16:38:47.752166] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a2840 (9): Bad file descriptor 00:30:21.102 [2024-06-07 16:38:47.752386] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:21.102 [2024-06-07 16:38:47.752396] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:21.102 [2024-06-07 16:38:47.752409] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:21.102 [2024-06-07 16:38:47.755952] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:21.102 [2024-06-07 16:38:47.765152] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:21.102 [2024-06-07 16:38:47.765844] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.102 [2024-06-07 16:38:47.765882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2840 with addr=10.0.0.2, port=4420 00:30:21.102 [2024-06-07 16:38:47.765894] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a2840 is same with the state(5) to be set 00:30:21.102 [2024-06-07 16:38:47.766133] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a2840 (9): Bad file descriptor 00:30:21.102 [2024-06-07 16:38:47.766357] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:21.102 [2024-06-07 16:38:47.766366] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:21.102 [2024-06-07 16:38:47.766374] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:21.102 [2024-06-07 16:38:47.769932] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:21.102 EAL: No free 2048 kB hugepages reported on node 1 00:30:21.103 [2024-06-07 16:38:47.779142] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:21.103 [2024-06-07 16:38:47.779834] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.103 [2024-06-07 16:38:47.779873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2840 with addr=10.0.0.2, port=4420 00:30:21.103 [2024-06-07 16:38:47.779885] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a2840 is same with the state(5) to be set 00:30:21.103 [2024-06-07 16:38:47.780124] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a2840 (9): Bad file descriptor 00:30:21.103 [2024-06-07 16:38:47.780348] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:21.103 [2024-06-07 16:38:47.780358] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:21.103 [2024-06-07 16:38:47.780365] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:21.103 [2024-06-07 16:38:47.783926] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:21.103 [2024-06-07 16:38:47.793129] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:21.103 [2024-06-07 16:38:47.793822] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.103 [2024-06-07 16:38:47.793860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2840 with addr=10.0.0.2, port=4420 00:30:21.103 [2024-06-07 16:38:47.793871] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a2840 is same with the state(5) to be set 00:30:21.103 [2024-06-07 16:38:47.794110] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a2840 (9): Bad file descriptor 00:30:21.103 [2024-06-07 16:38:47.794334] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:21.103 [2024-06-07 16:38:47.794344] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:21.103 [2024-06-07 16:38:47.794355] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:21.103 [2024-06-07 16:38:47.797994] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:21.103 [2024-06-07 16:38:47.807001] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:21.103 [2024-06-07 16:38:47.807714] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.103 [2024-06-07 16:38:47.807752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2840 with addr=10.0.0.2, port=4420 00:30:21.103 [2024-06-07 16:38:47.807763] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a2840 is same with the state(5) to be set 00:30:21.103 [2024-06-07 16:38:47.808002] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a2840 (9): Bad file descriptor 00:30:21.103 [2024-06-07 16:38:47.808226] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:21.103 [2024-06-07 16:38:47.808235] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:21.103 [2024-06-07 16:38:47.808243] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:21.103 [2024-06-07 16:38:47.811799] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:21.103 [2024-06-07 16:38:47.821003] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:21.103 [2024-06-07 16:38:47.821726] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.103 [2024-06-07 16:38:47.821764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2840 with addr=10.0.0.2, port=4420 00:30:21.103 [2024-06-07 16:38:47.821775] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a2840 is same with the state(5) to be set 00:30:21.103 [2024-06-07 16:38:47.822014] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a2840 (9): Bad file descriptor 00:30:21.103 [2024-06-07 16:38:47.822238] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:21.103 [2024-06-07 16:38:47.822248] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:21.103 [2024-06-07 16:38:47.822256] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:21.103 [2024-06-07 16:38:47.825816] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:21.103 [2024-06-07 16:38:47.828041] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:30:21.103 [2024-06-07 16:38:47.834838] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:21.103 [2024-06-07 16:38:47.835647] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.103 [2024-06-07 16:38:47.835686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2840 with addr=10.0.0.2, port=4420 00:30:21.103 [2024-06-07 16:38:47.835699] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a2840 is same with the state(5) to be set 00:30:21.103 [2024-06-07 16:38:47.835939] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a2840 (9): Bad file descriptor 00:30:21.103 [2024-06-07 16:38:47.836164] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:21.103 [2024-06-07 16:38:47.836174] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:21.103 [2024-06-07 16:38:47.836182] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:21.103 [2024-06-07 16:38:47.839742] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:21.103 [2024-06-07 16:38:47.848739] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:21.103 [2024-06-07 16:38:47.849376] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.103 [2024-06-07 16:38:47.849396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2840 with addr=10.0.0.2, port=4420 00:30:21.103 [2024-06-07 16:38:47.849409] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a2840 is same with the state(5) to be set 00:30:21.103 [2024-06-07 16:38:47.849630] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a2840 (9): Bad file descriptor 00:30:21.103 [2024-06-07 16:38:47.849850] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:21.103 [2024-06-07 16:38:47.849859] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:21.103 [2024-06-07 16:38:47.849867] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:21.103 [2024-06-07 16:38:47.853434] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:21.103 [2024-06-07 16:38:47.862684] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:21.103 [2024-06-07 16:38:47.863385] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.103 [2024-06-07 16:38:47.863433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2840 with addr=10.0.0.2, port=4420
00:30:21.103 [2024-06-07 16:38:47.863446] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a2840 is same with the state(5) to be set
00:30:21.103 [2024-06-07 16:38:47.863687] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a2840 (9): Bad file descriptor
00:30:21.103 [2024-06-07 16:38:47.863910] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:21.103 [2024-06-07 16:38:47.863920] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:21.103 [2024-06-07 16:38:47.863928] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:21.103 [2024-06-07 16:38:47.867485] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:21.103 [2024-06-07 16:38:47.876487] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:21.103 [2024-06-07 16:38:47.877224] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.103 [2024-06-07 16:38:47.877264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2840 with addr=10.0.0.2, port=4420
00:30:21.103 [2024-06-07 16:38:47.877276] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a2840 is same with the state(5) to be set
00:30:21.103 [2024-06-07 16:38:47.877523] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a2840 (9): Bad file descriptor
00:30:21.103 [2024-06-07 16:38:47.877748] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:21.103 [2024-06-07 16:38:47.877758] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:21.103 [2024-06-07 16:38:47.877765] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:21.103 [2024-06-07 16:38:47.881316] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:21.103 [2024-06-07 16:38:47.882337] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:30:21.103 [2024-06-07 16:38:47.882361] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:30:21.103 [2024-06-07 16:38:47.882367] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:30:21.103 [2024-06-07 16:38:47.882372] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running.
00:30:21.103 [2024-06-07 16:38:47.882377] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:30:21.103 [2024-06-07 16:38:47.882420] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 2
00:30:21.103 [2024-06-07 16:38:47.882524] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1
00:30:21.103 [2024-06-07 16:38:47.882526] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 3
00:30:21.103 [2024-06-07 16:38:47.890313] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:21.103 [2024-06-07 16:38:47.890958] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.103 [2024-06-07 16:38:47.890979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2840 with addr=10.0.0.2, port=4420
00:30:21.103 [2024-06-07 16:38:47.890988] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a2840 is same with the state(5) to be set
00:30:21.103 [2024-06-07 16:38:47.891207] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a2840 (9): Bad file descriptor
00:30:21.103 [2024-06-07 16:38:47.891433] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:21.103 [2024-06-07 16:38:47.891443] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:21.103 [2024-06-07 16:38:47.891451] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:21.103 [2024-06-07 16:38:47.894994] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:21.103 [2024-06-07 16:38:47.904202] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:21.103 [2024-06-07 16:38:47.904806] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.103 [2024-06-07 16:38:47.904850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2840 with addr=10.0.0.2, port=4420
00:30:21.104 [2024-06-07 16:38:47.904861] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a2840 is same with the state(5) to be set
00:30:21.104 [2024-06-07 16:38:47.905103] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a2840 (9): Bad file descriptor
00:30:21.104 [2024-06-07 16:38:47.905328] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:21.104 [2024-06-07 16:38:47.905338] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:21.104 [2024-06-07 16:38:47.905346] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:21.104 [2024-06-07 16:38:47.908912] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:21.104 [2024-06-07 16:38:47.918120] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:21.104 [2024-06-07 16:38:47.918770] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.104 [2024-06-07 16:38:47.918811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2840 with addr=10.0.0.2, port=4420
00:30:21.104 [2024-06-07 16:38:47.918822] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a2840 is same with the state(5) to be set
00:30:21.104 [2024-06-07 16:38:47.919063] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a2840 (9): Bad file descriptor
00:30:21.104 [2024-06-07 16:38:47.919287] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:21.104 [2024-06-07 16:38:47.919297] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:21.104 [2024-06-07 16:38:47.919305] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:21.104 [2024-06-07 16:38:47.922864] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:21.104 [2024-06-07 16:38:47.932084] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:21.104 [2024-06-07 16:38:47.932809] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.104 [2024-06-07 16:38:47.932847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2840 with addr=10.0.0.2, port=4420
00:30:21.104 [2024-06-07 16:38:47.932858] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a2840 is same with the state(5) to be set
00:30:21.104 [2024-06-07 16:38:47.933098] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a2840 (9): Bad file descriptor
00:30:21.104 [2024-06-07 16:38:47.933322] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:21.104 [2024-06-07 16:38:47.933332] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:21.104 [2024-06-07 16:38:47.933340] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:21.104 [2024-06-07 16:38:47.936900] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:21.104 [2024-06-07 16:38:47.945896] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:21.104 [2024-06-07 16:38:47.946701] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.104 [2024-06-07 16:38:47.946739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2840 with addr=10.0.0.2, port=4420
00:30:21.104 [2024-06-07 16:38:47.946750] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a2840 is same with the state(5) to be set
00:30:21.104 [2024-06-07 16:38:47.946990] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a2840 (9): Bad file descriptor
00:30:21.104 [2024-06-07 16:38:47.947214] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:21.104 [2024-06-07 16:38:47.947223] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:21.104 [2024-06-07 16:38:47.947231] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:21.104 [2024-06-07 16:38:47.950791] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:21.366 [2024-06-07 16:38:47.959787] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:21.366 [2024-06-07 16:38:47.960386] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.366 [2024-06-07 16:38:47.960410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2840 with addr=10.0.0.2, port=4420
00:30:21.366 [2024-06-07 16:38:47.960418] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a2840 is same with the state(5) to be set
00:30:21.366 [2024-06-07 16:38:47.960638] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a2840 (9): Bad file descriptor
00:30:21.366 [2024-06-07 16:38:47.960857] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:21.366 [2024-06-07 16:38:47.960868] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:21.366 [2024-06-07 16:38:47.960875] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:21.366 [2024-06-07 16:38:47.964424] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:21.366 [2024-06-07 16:38:47.973626] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:21.366 [2024-06-07 16:38:47.974361] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.366 [2024-06-07 16:38:47.974400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2840 with addr=10.0.0.2, port=4420
00:30:21.366 [2024-06-07 16:38:47.974421] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a2840 is same with the state(5) to be set
00:30:21.366 [2024-06-07 16:38:47.974671] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a2840 (9): Bad file descriptor
00:30:21.366 [2024-06-07 16:38:47.974894] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:21.366 [2024-06-07 16:38:47.974904] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:21.366 [2024-06-07 16:38:47.974912] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:21.366 [2024-06-07 16:38:47.978465] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:21.366 [2024-06-07 16:38:47.987456] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:21.366 [2024-06-07 16:38:47.988095] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.366 [2024-06-07 16:38:47.988114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2840 with addr=10.0.0.2, port=4420
00:30:21.366 [2024-06-07 16:38:47.988122] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a2840 is same with the state(5) to be set
00:30:21.366 [2024-06-07 16:38:47.988343] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a2840 (9): Bad file descriptor
00:30:21.367 [2024-06-07 16:38:47.988569] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:21.367 [2024-06-07 16:38:47.988578] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:21.367 [2024-06-07 16:38:47.988586] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:21.367 [2024-06-07 16:38:47.992129] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:21.367 [2024-06-07 16:38:48.001331] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:21.367 [2024-06-07 16:38:48.002005] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.367 [2024-06-07 16:38:48.002044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2840 with addr=10.0.0.2, port=4420
00:30:21.367 [2024-06-07 16:38:48.002055] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a2840 is same with the state(5) to be set
00:30:21.367 [2024-06-07 16:38:48.002294] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a2840 (9): Bad file descriptor
00:30:21.367 [2024-06-07 16:38:48.002525] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:21.367 [2024-06-07 16:38:48.002536] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:21.367 [2024-06-07 16:38:48.002543] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:21.367 [2024-06-07 16:38:48.006092] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:21.367 [2024-06-07 16:38:48.015300] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:21.367 [2024-06-07 16:38:48.015920] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.367 [2024-06-07 16:38:48.015941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2840 with addr=10.0.0.2, port=4420
00:30:21.367 [2024-06-07 16:38:48.015949] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a2840 is same with the state(5) to be set
00:30:21.367 [2024-06-07 16:38:48.016169] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a2840 (9): Bad file descriptor
00:30:21.367 [2024-06-07 16:38:48.016388] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:21.367 [2024-06-07 16:38:48.016397] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:21.367 [2024-06-07 16:38:48.016419] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:21.367 [2024-06-07 16:38:48.019964] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:21.367 [2024-06-07 16:38:48.029167] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:21.367 [2024-06-07 16:38:48.029900] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.367 [2024-06-07 16:38:48.029939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2840 with addr=10.0.0.2, port=4420
00:30:21.367 [2024-06-07 16:38:48.029950] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a2840 is same with the state(5) to be set
00:30:21.367 [2024-06-07 16:38:48.030189] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a2840 (9): Bad file descriptor
00:30:21.367 [2024-06-07 16:38:48.030422] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:21.367 [2024-06-07 16:38:48.030432] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:21.367 [2024-06-07 16:38:48.030440] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:21.367 [2024-06-07 16:38:48.034001] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:21.367 [2024-06-07 16:38:48.042998] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:21.367 [2024-06-07 16:38:48.043712] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.367 [2024-06-07 16:38:48.043753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2840 with addr=10.0.0.2, port=4420
00:30:21.367 [2024-06-07 16:38:48.043764] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a2840 is same with the state(5) to be set
00:30:21.367 [2024-06-07 16:38:48.044003] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a2840 (9): Bad file descriptor
00:30:21.367 [2024-06-07 16:38:48.044228] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:21.367 [2024-06-07 16:38:48.044237] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:21.367 [2024-06-07 16:38:48.044245] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:21.367 [2024-06-07 16:38:48.047801] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:21.367 [2024-06-07 16:38:48.056800] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:21.367 [2024-06-07 16:38:48.057349] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.367 [2024-06-07 16:38:48.057388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2840 with addr=10.0.0.2, port=4420
00:30:21.367 [2024-06-07 16:38:48.057408] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a2840 is same with the state(5) to be set
00:30:21.367 [2024-06-07 16:38:48.057650] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a2840 (9): Bad file descriptor
00:30:21.367 [2024-06-07 16:38:48.057874] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:21.367 [2024-06-07 16:38:48.057884] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:21.367 [2024-06-07 16:38:48.057891] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:21.367 [2024-06-07 16:38:48.061445] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:21.367 [2024-06-07 16:38:48.070647] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:21.367 [2024-06-07 16:38:48.071383] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.367 [2024-06-07 16:38:48.071434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2840 with addr=10.0.0.2, port=4420
00:30:21.367 [2024-06-07 16:38:48.071445] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a2840 is same with the state(5) to be set
00:30:21.367 [2024-06-07 16:38:48.071685] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a2840 (9): Bad file descriptor
00:30:21.367 [2024-06-07 16:38:48.071909] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:21.367 [2024-06-07 16:38:48.071919] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:21.367 [2024-06-07 16:38:48.071927] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:21.367 [2024-06-07 16:38:48.075481] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:21.367 [2024-06-07 16:38:48.084483] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:21.367 [2024-06-07 16:38:48.085161] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.367 [2024-06-07 16:38:48.085181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2840 with addr=10.0.0.2, port=4420
00:30:21.367 [2024-06-07 16:38:48.085189] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a2840 is same with the state(5) to be set
00:30:21.367 [2024-06-07 16:38:48.085413] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a2840 (9): Bad file descriptor
00:30:21.367 [2024-06-07 16:38:48.085633] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:21.367 [2024-06-07 16:38:48.085643] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:21.367 [2024-06-07 16:38:48.085650] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:21.367 [2024-06-07 16:38:48.089193] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:21.367 [2024-06-07 16:38:48.098395] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:21.367 [2024-06-07 16:38:48.099131] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.367 [2024-06-07 16:38:48.099170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2840 with addr=10.0.0.2, port=4420
00:30:21.367 [2024-06-07 16:38:48.099180] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a2840 is same with the state(5) to be set
00:30:21.367 [2024-06-07 16:38:48.099428] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a2840 (9): Bad file descriptor
00:30:21.367 [2024-06-07 16:38:48.099652] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:21.367 [2024-06-07 16:38:48.099662] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:21.367 [2024-06-07 16:38:48.099670] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:21.367 [2024-06-07 16:38:48.103220] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:21.367 [2024-06-07 16:38:48.112221] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:21.367 [2024-06-07 16:38:48.112865] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.367 [2024-06-07 16:38:48.112884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2840 with addr=10.0.0.2, port=4420
00:30:21.367 [2024-06-07 16:38:48.112892] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a2840 is same with the state(5) to be set
00:30:21.367 [2024-06-07 16:38:48.113112] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a2840 (9): Bad file descriptor
00:30:21.367 [2024-06-07 16:38:48.113337] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:21.367 [2024-06-07 16:38:48.113346] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:21.367 [2024-06-07 16:38:48.113353] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:21.367 [2024-06-07 16:38:48.116908] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:21.367 [2024-06-07 16:38:48.126118] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:21.367 [2024-06-07 16:38:48.126812] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.367 [2024-06-07 16:38:48.126850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2840 with addr=10.0.0.2, port=4420
00:30:21.367 [2024-06-07 16:38:48.126862] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a2840 is same with the state(5) to be set
00:30:21.367 [2024-06-07 16:38:48.127101] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a2840 (9): Bad file descriptor
00:30:21.367 [2024-06-07 16:38:48.127325] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:21.367 [2024-06-07 16:38:48.127335] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:21.368 [2024-06-07 16:38:48.127342] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:21.368 [2024-06-07 16:38:48.130902] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:21.368 [2024-06-07 16:38:48.140116] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:21.368 [2024-06-07 16:38:48.140822] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.368 [2024-06-07 16:38:48.140861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2840 with addr=10.0.0.2, port=4420
00:30:21.368 [2024-06-07 16:38:48.140872] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a2840 is same with the state(5) to be set
00:30:21.368 [2024-06-07 16:38:48.141111] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a2840 (9): Bad file descriptor
00:30:21.368 [2024-06-07 16:38:48.141335] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:21.368 [2024-06-07 16:38:48.141344] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:21.368 [2024-06-07 16:38:48.141352] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:21.368 [2024-06-07 16:38:48.144906] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:21.368 [2024-06-07 16:38:48.154113] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:21.368 [2024-06-07 16:38:48.154857] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.368 [2024-06-07 16:38:48.154897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2840 with addr=10.0.0.2, port=4420
00:30:21.368 [2024-06-07 16:38:48.154908] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a2840 is same with the state(5) to be set
00:30:21.368 [2024-06-07 16:38:48.155147] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a2840 (9): Bad file descriptor
00:30:21.368 [2024-06-07 16:38:48.155372] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:21.368 [2024-06-07 16:38:48.155381] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:21.368 [2024-06-07 16:38:48.155389] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:21.368 [2024-06-07 16:38:48.158948] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:21.368 [2024-06-07 16:38:48.167953] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:21.368 [2024-06-07 16:38:48.168512] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.368 [2024-06-07 16:38:48.168551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2840 with addr=10.0.0.2, port=4420
00:30:21.368 [2024-06-07 16:38:48.168563] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a2840 is same with the state(5) to be set
00:30:21.368 [2024-06-07 16:38:48.168804] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a2840 (9): Bad file descriptor
00:30:21.368 [2024-06-07 16:38:48.169028] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:21.368 [2024-06-07 16:38:48.169038] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:21.368 [2024-06-07 16:38:48.169046] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:21.368 [2024-06-07 16:38:48.172606] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:21.368 [2024-06-07 16:38:48.181819] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:21.368 [2024-06-07 16:38:48.182416] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.368 [2024-06-07 16:38:48.182436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2840 with addr=10.0.0.2, port=4420
00:30:21.368 [2024-06-07 16:38:48.182444] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a2840 is same with the state(5) to be set
00:30:21.368 [2024-06-07 16:38:48.182664] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a2840 (9): Bad file descriptor
00:30:21.368 [2024-06-07 16:38:48.182883] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:21.368 [2024-06-07 16:38:48.182892] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:21.368 [2024-06-07 16:38:48.182899] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:21.368 [2024-06-07 16:38:48.186446] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:21.368 [2024-06-07 16:38:48.195645] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:21.368 [2024-06-07 16:38:48.196237] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.368 [2024-06-07 16:38:48.196253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2840 with addr=10.0.0.2, port=4420 00:30:21.368 [2024-06-07 16:38:48.196261] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a2840 is same with the state(5) to be set 00:30:21.368 [2024-06-07 16:38:48.196485] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a2840 (9): Bad file descriptor 00:30:21.368 [2024-06-07 16:38:48.196704] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:21.368 [2024-06-07 16:38:48.196714] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:21.368 [2024-06-07 16:38:48.196721] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:21.368 [2024-06-07 16:38:48.200263] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:21.368 [2024-06-07 16:38:48.209469] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:21.368 [2024-06-07 16:38:48.210201] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.368 [2024-06-07 16:38:48.210240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2840 with addr=10.0.0.2, port=4420 00:30:21.368 [2024-06-07 16:38:48.210255] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a2840 is same with the state(5) to be set 00:30:21.368 [2024-06-07 16:38:48.210503] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a2840 (9): Bad file descriptor 00:30:21.368 [2024-06-07 16:38:48.210727] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:21.368 [2024-06-07 16:38:48.210737] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:21.368 [2024-06-07 16:38:48.210745] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:21.368 [2024-06-07 16:38:48.214295] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:21.630 [2024-06-07 16:38:48.223301] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:21.630 [2024-06-07 16:38:48.224003] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.630 [2024-06-07 16:38:48.224042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2840 with addr=10.0.0.2, port=4420 00:30:21.630 [2024-06-07 16:38:48.224053] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a2840 is same with the state(5) to be set 00:30:21.630 [2024-06-07 16:38:48.224292] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a2840 (9): Bad file descriptor 00:30:21.630 [2024-06-07 16:38:48.224522] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:21.630 [2024-06-07 16:38:48.224544] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:21.630 [2024-06-07 16:38:48.224552] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:21.630 [2024-06-07 16:38:48.228103] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:21.630 [2024-06-07 16:38:48.237113] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:21.630 [2024-06-07 16:38:48.237733] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.630 [2024-06-07 16:38:48.237753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2840 with addr=10.0.0.2, port=4420 00:30:21.630 [2024-06-07 16:38:48.237761] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a2840 is same with the state(5) to be set 00:30:21.630 [2024-06-07 16:38:48.237980] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a2840 (9): Bad file descriptor 00:30:21.630 [2024-06-07 16:38:48.238199] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:21.630 [2024-06-07 16:38:48.238209] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:21.630 [2024-06-07 16:38:48.238216] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:21.630 [2024-06-07 16:38:48.241765] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:21.630 [2024-06-07 16:38:48.250967] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:21.630 [2024-06-07 16:38:48.251533] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.630 [2024-06-07 16:38:48.251572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2840 with addr=10.0.0.2, port=4420 00:30:21.630 [2024-06-07 16:38:48.251585] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a2840 is same with the state(5) to be set 00:30:21.630 [2024-06-07 16:38:48.251826] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a2840 (9): Bad file descriptor 00:30:21.630 [2024-06-07 16:38:48.252050] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:21.630 [2024-06-07 16:38:48.252064] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:21.630 [2024-06-07 16:38:48.252072] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:21.630 [2024-06-07 16:38:48.255630] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:21.630 [2024-06-07 16:38:48.264835] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:21.630 [2024-06-07 16:38:48.265457] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.630 [2024-06-07 16:38:48.265482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2840 with addr=10.0.0.2, port=4420 00:30:21.630 [2024-06-07 16:38:48.265492] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a2840 is same with the state(5) to be set 00:30:21.630 [2024-06-07 16:38:48.265716] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a2840 (9): Bad file descriptor 00:30:21.630 [2024-06-07 16:38:48.265936] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:21.630 [2024-06-07 16:38:48.265947] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:21.630 [2024-06-07 16:38:48.265954] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:21.630 [2024-06-07 16:38:48.269502] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:21.630 [2024-06-07 16:38:48.278699] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:21.630 [2024-06-07 16:38:48.279274] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.630 [2024-06-07 16:38:48.279312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2840 with addr=10.0.0.2, port=4420 00:30:21.630 [2024-06-07 16:38:48.279323] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a2840 is same with the state(5) to be set 00:30:21.630 [2024-06-07 16:38:48.279571] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a2840 (9): Bad file descriptor 00:30:21.630 [2024-06-07 16:38:48.279797] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:21.630 [2024-06-07 16:38:48.279807] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:21.630 [2024-06-07 16:38:48.279815] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:21.630 [2024-06-07 16:38:48.283363] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:21.630 [2024-06-07 16:38:48.292573] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:21.630 [2024-06-07 16:38:48.293312] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.630 [2024-06-07 16:38:48.293350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2840 with addr=10.0.0.2, port=4420 00:30:21.630 [2024-06-07 16:38:48.293362] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a2840 is same with the state(5) to be set 00:30:21.630 [2024-06-07 16:38:48.293611] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a2840 (9): Bad file descriptor 00:30:21.630 [2024-06-07 16:38:48.293835] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:21.630 [2024-06-07 16:38:48.293845] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:21.630 [2024-06-07 16:38:48.293853] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:21.630 [2024-06-07 16:38:48.297408] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:21.630 [2024-06-07 16:38:48.306400] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:21.630 [2024-06-07 16:38:48.307147] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.630 [2024-06-07 16:38:48.307185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2840 with addr=10.0.0.2, port=4420 00:30:21.630 [2024-06-07 16:38:48.307196] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a2840 is same with the state(5) to be set 00:30:21.630 [2024-06-07 16:38:48.307444] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a2840 (9): Bad file descriptor 00:30:21.630 [2024-06-07 16:38:48.307668] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:21.630 [2024-06-07 16:38:48.307678] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:21.630 [2024-06-07 16:38:48.307686] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:21.630 [2024-06-07 16:38:48.311239] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:21.630 [2024-06-07 16:38:48.320240] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:21.630 [2024-06-07 16:38:48.320878] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.630 [2024-06-07 16:38:48.320898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2840 with addr=10.0.0.2, port=4420 00:30:21.630 [2024-06-07 16:38:48.320906] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a2840 is same with the state(5) to be set 00:30:21.630 [2024-06-07 16:38:48.321125] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a2840 (9): Bad file descriptor 00:30:21.630 [2024-06-07 16:38:48.321345] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:21.631 [2024-06-07 16:38:48.321354] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:21.631 [2024-06-07 16:38:48.321361] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:21.631 [2024-06-07 16:38:48.324911] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:21.631 [2024-06-07 16:38:48.334127] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:21.631 [2024-06-07 16:38:48.334730] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.631 [2024-06-07 16:38:48.334748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2840 with addr=10.0.0.2, port=4420 00:30:21.631 [2024-06-07 16:38:48.334755] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a2840 is same with the state(5) to be set 00:30:21.631 [2024-06-07 16:38:48.334975] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a2840 (9): Bad file descriptor 00:30:21.631 [2024-06-07 16:38:48.335194] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:21.631 [2024-06-07 16:38:48.335202] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:21.631 [2024-06-07 16:38:48.335209] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:21.631 [2024-06-07 16:38:48.338755] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:21.631 [2024-06-07 16:38:48.347951] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:21.631 [2024-06-07 16:38:48.348663] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.631 [2024-06-07 16:38:48.348702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2840 with addr=10.0.0.2, port=4420 00:30:21.631 [2024-06-07 16:38:48.348713] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a2840 is same with the state(5) to be set 00:30:21.631 [2024-06-07 16:38:48.348957] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a2840 (9): Bad file descriptor 00:30:21.631 [2024-06-07 16:38:48.349181] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:21.631 [2024-06-07 16:38:48.349190] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:21.631 [2024-06-07 16:38:48.349198] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:21.631 [2024-06-07 16:38:48.352755] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:21.631 [2024-06-07 16:38:48.361755] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:21.631 [2024-06-07 16:38:48.362360] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.631 [2024-06-07 16:38:48.362379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2840 with addr=10.0.0.2, port=4420 00:30:21.631 [2024-06-07 16:38:48.362387] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a2840 is same with the state(5) to be set 00:30:21.631 [2024-06-07 16:38:48.362612] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a2840 (9): Bad file descriptor 00:30:21.631 [2024-06-07 16:38:48.362833] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:21.631 [2024-06-07 16:38:48.362842] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:21.631 [2024-06-07 16:38:48.362849] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:21.631 [2024-06-07 16:38:48.366392] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:21.631 [2024-06-07 16:38:48.375594] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:21.631 [2024-06-07 16:38:48.376009] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.631 [2024-06-07 16:38:48.376025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2840 with addr=10.0.0.2, port=4420 00:30:21.631 [2024-06-07 16:38:48.376033] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a2840 is same with the state(5) to be set 00:30:21.631 [2024-06-07 16:38:48.376252] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a2840 (9): Bad file descriptor 00:30:21.631 [2024-06-07 16:38:48.376478] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:21.631 [2024-06-07 16:38:48.376488] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:21.631 [2024-06-07 16:38:48.376495] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:21.631 [2024-06-07 16:38:48.380037] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:21.631 [2024-06-07 16:38:48.389448] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:21.631 [2024-06-07 16:38:48.390000] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.631 [2024-06-07 16:38:48.390038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2840 with addr=10.0.0.2, port=4420 00:30:21.631 [2024-06-07 16:38:48.390051] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a2840 is same with the state(5) to be set 00:30:21.631 [2024-06-07 16:38:48.390292] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a2840 (9): Bad file descriptor 00:30:21.631 [2024-06-07 16:38:48.390523] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:21.631 [2024-06-07 16:38:48.390533] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:21.631 [2024-06-07 16:38:48.390545] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:21.631 [2024-06-07 16:38:48.394095] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:21.631 [2024-06-07 16:38:48.403303] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:21.631 [2024-06-07 16:38:48.403729] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.631 [2024-06-07 16:38:48.403748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2840 with addr=10.0.0.2, port=4420 00:30:21.631 [2024-06-07 16:38:48.403756] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a2840 is same with the state(5) to be set 00:30:21.631 [2024-06-07 16:38:48.403976] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a2840 (9): Bad file descriptor 00:30:21.631 [2024-06-07 16:38:48.404198] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:21.631 [2024-06-07 16:38:48.404208] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:21.631 [2024-06-07 16:38:48.404215] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:21.631 [2024-06-07 16:38:48.407765] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:21.631 [2024-06-07 16:38:48.417177] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:21.631 [2024-06-07 16:38:48.417835] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.631 [2024-06-07 16:38:48.417853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2840 with addr=10.0.0.2, port=4420 00:30:21.631 [2024-06-07 16:38:48.417860] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a2840 is same with the state(5) to be set 00:30:21.631 [2024-06-07 16:38:48.418079] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a2840 (9): Bad file descriptor 00:30:21.631 [2024-06-07 16:38:48.418298] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:21.631 [2024-06-07 16:38:48.418307] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:21.631 [2024-06-07 16:38:48.418314] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:21.631 [2024-06-07 16:38:48.421864] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:21.631 [2024-06-07 16:38:48.431066] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:21.631 [2024-06-07 16:38:48.431766] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.631 [2024-06-07 16:38:48.431805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2840 with addr=10.0.0.2, port=4420 00:30:21.631 [2024-06-07 16:38:48.431816] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a2840 is same with the state(5) to be set 00:30:21.631 [2024-06-07 16:38:48.432055] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a2840 (9): Bad file descriptor 00:30:21.631 [2024-06-07 16:38:48.432279] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:21.631 [2024-06-07 16:38:48.432289] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:21.631 [2024-06-07 16:38:48.432297] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:21.631 [2024-06-07 16:38:48.435863] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:21.631 [2024-06-07 16:38:48.444857] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:21.631 [2024-06-07 16:38:48.445519] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.631 [2024-06-07 16:38:48.445543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2840 with addr=10.0.0.2, port=4420 00:30:21.631 [2024-06-07 16:38:48.445552] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a2840 is same with the state(5) to be set 00:30:21.631 [2024-06-07 16:38:48.445772] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a2840 (9): Bad file descriptor 00:30:21.631 [2024-06-07 16:38:48.445992] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:21.631 [2024-06-07 16:38:48.446000] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:21.631 [2024-06-07 16:38:48.446007] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:21.631 [2024-06-07 16:38:48.449554] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:21.631 [2024-06-07 16:38:48.458752] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:21.631 [2024-06-07 16:38:48.459494] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.631 [2024-06-07 16:38:48.459533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2840 with addr=10.0.0.2, port=4420 00:30:21.631 [2024-06-07 16:38:48.459545] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a2840 is same with the state(5) to be set 00:30:21.631 [2024-06-07 16:38:48.459786] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a2840 (9): Bad file descriptor 00:30:21.631 [2024-06-07 16:38:48.460010] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:21.631 [2024-06-07 16:38:48.460020] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:21.631 [2024-06-07 16:38:48.460028] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:21.631 [2024-06-07 16:38:48.463589] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:21.632 [2024-06-07 16:38:48.472593] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:21.632 [2024-06-07 16:38:48.473190] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.632 [2024-06-07 16:38:48.473229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2840 with addr=10.0.0.2, port=4420 00:30:21.632 [2024-06-07 16:38:48.473240] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a2840 is same with the state(5) to be set 00:30:21.632 [2024-06-07 16:38:48.473486] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a2840 (9): Bad file descriptor 00:30:21.632 [2024-06-07 16:38:48.473711] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:21.632 [2024-06-07 16:38:48.473721] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:21.632 [2024-06-07 16:38:48.473729] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:21.632 [2024-06-07 16:38:48.477275] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:21.895 [2024-06-07 16:38:48.486485] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:21.895 [2024-06-07 16:38:48.487089] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.895 [2024-06-07 16:38:48.487108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2840 with addr=10.0.0.2, port=4420 00:30:21.895 [2024-06-07 16:38:48.487116] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a2840 is same with the state(5) to be set 00:30:21.895 [2024-06-07 16:38:48.487336] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a2840 (9): Bad file descriptor 00:30:21.895 [2024-06-07 16:38:48.487567] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:21.895 [2024-06-07 16:38:48.487577] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:21.895 [2024-06-07 16:38:48.487584] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:21.895 [2024-06-07 16:38:48.491127] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:21.895 [2024-06-07 16:38:48.500325] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:21.895 [2024-06-07 16:38:48.501041] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.895 [2024-06-07 16:38:48.501079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2840 with addr=10.0.0.2, port=4420 00:30:21.895 [2024-06-07 16:38:48.501091] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a2840 is same with the state(5) to be set 00:30:21.895 [2024-06-07 16:38:48.501330] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a2840 (9): Bad file descriptor 00:30:21.895 [2024-06-07 16:38:48.501561] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:21.895 [2024-06-07 16:38:48.501572] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:21.895 [2024-06-07 16:38:48.501579] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:21.895 16:38:48 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:30:21.895 16:38:48 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@863 -- # return 0 00:30:21.895 16:38:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:30:21.895 [2024-06-07 16:38:48.505132] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:21.895 16:38:48 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@729 -- # xtrace_disable 00:30:21.895 16:38:48 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:21.895 [2024-06-07 16:38:48.514362] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:21.895 [2024-06-07 16:38:48.515118] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.895 [2024-06-07 16:38:48.515157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2840 with addr=10.0.0.2, port=4420 00:30:21.895 [2024-06-07 16:38:48.515168] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a2840 is same with the state(5) to be set 00:30:21.895 [2024-06-07 16:38:48.515416] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a2840 (9): Bad file descriptor 00:30:21.895 [2024-06-07 16:38:48.515640] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:21.895 [2024-06-07 16:38:48.515651] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:21.895 [2024-06-07 16:38:48.515658] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:21.895 [2024-06-07 16:38:48.519208] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:21.895 [2024-06-07 16:38:48.528203] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:21.895 [2024-06-07 16:38:48.528807] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.895 [2024-06-07 16:38:48.528846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2840 with addr=10.0.0.2, port=4420
00:30:21.895 [2024-06-07 16:38:48.528857] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a2840 is same with the state(5) to be set
00:30:21.895 [2024-06-07 16:38:48.529096] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a2840 (9): Bad file descriptor
00:30:21.895 [2024-06-07 16:38:48.529320] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:21.895 [2024-06-07 16:38:48.529335] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:21.895 [2024-06-07 16:38:48.529343] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:21.895 [2024-06-07 16:38:48.532910] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:21.895 [2024-06-07 16:38:48.542118] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:21.895 [2024-06-07 16:38:48.542825] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.895 [2024-06-07 16:38:48.542864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2840 with addr=10.0.0.2, port=4420
00:30:21.895 [2024-06-07 16:38:48.542875] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a2840 is same with the state(5) to be set
00:30:21.895 [2024-06-07 16:38:48.543114] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a2840 (9): Bad file descriptor
00:30:21.895 [2024-06-07 16:38:48.543339] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:21.895 [2024-06-07 16:38:48.543348] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:21.895 [2024-06-07 16:38:48.543356] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:21.895 16:38:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:30:21.895 16:38:48 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:30:21.895 16:38:48 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable
00:30:21.895 16:38:48 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:30:21.895 [2024-06-07 16:38:48.546913] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:21.895 [2024-06-07 16:38:48.548979] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:30:21.895 16:38:48 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:30:21.895 16:38:48 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:30:21.896 16:38:48 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable
00:30:21.896 16:38:48 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:30:21.896 [2024-06-07 16:38:48.556116] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:21.896 [2024-06-07 16:38:48.556834] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.896 [2024-06-07 16:38:48.556873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2840 with addr=10.0.0.2, port=4420
00:30:21.896 [2024-06-07 16:38:48.556884] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a2840 is same with the state(5) to be set
00:30:21.896 [2024-06-07 16:38:48.557123] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a2840 (9): Bad file descriptor
00:30:21.896 [2024-06-07 16:38:48.557347] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:21.896 [2024-06-07 16:38:48.557357] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:21.896 [2024-06-07 16:38:48.557364] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:21.896 [2024-06-07 16:38:48.560925] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:21.896 [2024-06-07 16:38:48.569917] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:21.896 [2024-06-07 16:38:48.570253] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.896 [2024-06-07 16:38:48.570272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2840 with addr=10.0.0.2, port=4420
00:30:21.896 [2024-06-07 16:38:48.570285] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a2840 is same with the state(5) to be set
00:30:21.896 [2024-06-07 16:38:48.570511] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a2840 (9): Bad file descriptor
00:30:21.896 [2024-06-07 16:38:48.570732] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:21.896 [2024-06-07 16:38:48.570741] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:21.896 [2024-06-07 16:38:48.570748] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:21.896 [2024-06-07 16:38:48.574290] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:21.896 Malloc0
00:30:21.896 16:38:48 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:30:21.896 16:38:48 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:30:21.896 16:38:48 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable
00:30:21.896 16:38:48 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:30:21.896 [2024-06-07 16:38:48.583908] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:21.896 [2024-06-07 16:38:48.584643] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.896 [2024-06-07 16:38:48.584682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2840 with addr=10.0.0.2, port=4420
00:30:21.896 [2024-06-07 16:38:48.584693] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a2840 is same with the state(5) to be set
00:30:21.896 [2024-06-07 16:38:48.584932] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a2840 (9): Bad file descriptor
00:30:21.896 [2024-06-07 16:38:48.585156] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:21.896 [2024-06-07 16:38:48.585165] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:21.896 [2024-06-07 16:38:48.585173] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:21.896 [2024-06-07 16:38:48.588729] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:21.896 16:38:48 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:30:21.896 16:38:48 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:30:21.896 16:38:48 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable
00:30:21.896 16:38:48 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:30:21.896 [2024-06-07 16:38:48.597724] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:21.896 [2024-06-07 16:38:48.598317] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.896 [2024-06-07 16:38:48.598355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2840 with addr=10.0.0.2, port=4420
00:30:21.896 [2024-06-07 16:38:48.598367] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a2840 is same with the state(5) to be set
00:30:21.896 [2024-06-07 16:38:48.598616] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a2840 (9): Bad file descriptor
00:30:21.896 [2024-06-07 16:38:48.598841] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:21.896 [2024-06-07 16:38:48.598850] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:21.896 [2024-06-07 16:38:48.598858] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:21.896 [2024-06-07 16:38:48.602409] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:21.896 16:38:48 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:30:21.896 16:38:48 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:30:21.896 16:38:48 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable
00:30:21.896 16:38:48 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:30:21.896 [2024-06-07 16:38:48.610907] tcp.c: 982:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:30:21.896 [2024-06-07 16:38:48.611624] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:21.896 16:38:48 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:30:21.896 16:38:48 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 3292971
00:30:22.157 [2024-06-07 16:38:48.781741] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:30:30.299
00:30:30.299 Latency(us)
00:30:30.299 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:30:30.299 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:30:30.299 Verification LBA range: start 0x0 length 0x4000
00:30:30.299 Nvme1n1 : 15.00 8076.68 31.55 9950.04 0.00 7073.78 1071.79 18677.76
00:30:30.299 ===================================================================================================================
00:30:30.299 Total : 8076.68 31.55 9950.04 0.00 7073.78 1071.79 18677.76
00:30:30.590 16:38:57 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync
00:30:30.590 16:38:57 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:30:30.590 16:38:57 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable
00:30:30.590 16:38:57 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:30:30.590 16:38:57 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:30:30.590 16:38:57 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT
00:30:30.590 16:38:57 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini
00:30:30.590 16:38:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@488 -- # nvmfcleanup
00:30:30.590 16:38:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@117 -- # sync
00:30:30.590 16:38:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:30:30.590 16:38:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@120 -- # set +e
00:30:30.590 16:38:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@121 -- # for i in {1..20}
00:30:30.590 16:38:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:30:30.590 rmmod nvme_tcp
00:30:30.590 rmmod nvme_fabrics
00:30:30.590 rmmod nvme_keyring
00:30:30.590 16:38:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:30:30.590 16:38:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@124 -- # set -e
00:30:30.590 16:38:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@125 -- # return 0
00:30:30.590 16:38:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@489 -- # '[' -n 3293995 ']'
00:30:30.590 16:38:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@490 -- # killprocess 3293995
00:30:30.590 16:38:57 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@949 -- # '[' -z 3293995 ']'
00:30:30.590 16:38:57 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@953 -- # kill -0 3293995
00:30:30.590 16:38:57 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@954 -- # uname
00:30:30.590 16:38:57 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']'
00:30:30.590 16:38:57 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3293995
00:30:30.590 16:38:57 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@955 -- # process_name=reactor_1
00:30:30.590 16:38:57 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']'
00:30:30.590 16:38:57 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3293995'
00:30:30.590 killing process with pid 3293995
00:30:30.590 16:38:57 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@968 -- # kill 3293995
00:30:30.590 16:38:57 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@973 -- # wait 3293995
00:30:30.851 16:38:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:30:30.851 16:38:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:30:30.851 16:38:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:30:30.851 16:38:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:30:30.851 16:38:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@278 -- # remove_spdk_ns
00:30:30.851 16:38:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:30:30.851 16:38:57 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:30:30.851 16:38:57 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:30:32.768 16:38:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:30:32.768
00:30:32.768 real 0m27.556s
00:30:32.768 user 1m2.738s
00:30:32.768 sys 0m6.960s
00:30:32.768 16:38:59 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1125 -- # xtrace_disable
00:30:32.768 16:38:59 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:30:32.768 ************************************
00:30:32.768 END TEST nvmf_bdevperf
00:30:32.768 ************************************
00:30:32.768 16:38:59 nvmf_tcp -- nvmf/nvmf.sh@124 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp
00:30:32.768 16:38:59 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']'
00:30:32.768 16:38:59 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable
00:30:32.768 16:38:59 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:30:33.029 ************************************
00:30:33.029 START TEST nvmf_target_disconnect
00:30:33.029 ************************************
00:30:33.029 16:38:59 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp
00:30:33.029 * Looking for test storage...
00:30:33.030 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
00:30:33.030 16:38:59 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:30:33.030 16:38:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s
00:30:33.030 16:38:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:30:33.030 16:38:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:30:33.030 16:38:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:30:33.030 16:38:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:30:33.030 16:38:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:30:33.030 16:38:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:30:33.030 16:38:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:30:33.030 16:38:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:30:33.030 16:38:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:30:33.030 16:38:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:30:33.030 16:38:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:30:33.030 16:38:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be
00:30:33.030 16:38:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:30:33.030 16:38:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:30:33.030 16:38:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:30:33.030 16:38:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:30:33.030 16:38:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:30:33.030 16:38:59 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:30:33.030 16:38:59 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:30:33.030 16:38:59 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:30:33.030 16:38:59 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:30:33.030 16:38:59 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:30:33.030 16:38:59 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:30:33.030 16:38:59 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH
00:30:33.030 16:38:59 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:30:33.030 16:38:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@47 -- # : 0
00:30:33.030 16:38:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:30:33.030 16:38:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:30:33.030 16:38:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:30:33.030 16:38:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:30:33.030 16:38:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:30:33.030 16:38:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:30:33.030 16:38:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:30:33.030 16:38:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0
00:30:33.030 16:38:59 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme
00:30:33.030 16:38:59 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64
00:30:33.030 16:38:59 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512
00:30:33.030 16:38:59 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit
00:30:33.030 16:38:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']'
00:30:33.030 16:38:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:30:33.030 16:38:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs
00:30:33.030 16:38:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no
00:30:33.030 16:38:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns
00:30:33.030 16:38:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:30:33.030 16:38:59 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:30:33.030 16:38:59 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:30:33.030 16:38:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]]
00:30:33.030 16:38:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs
00:30:33.030 16:38:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@285 -- # xtrace_disable
00:30:33.030 16:38:59 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x
00:30:41.175 16:39:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:30:41.175 16:39:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # pci_devs=()
00:30:41.175 16:39:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs
00:30:41.175 16:39:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=()
00:30:41.175 16:39:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs
00:30:41.175 16:39:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # pci_drivers=()
00:30:41.175 16:39:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers
00:30:41.175 16:39:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # net_devs=()
00:30:41.175 16:39:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs
00:30:41.175 16:39:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # e810=()
00:30:41.175 16:39:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # local -ga e810
00:30:41.175 16:39:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # x722=()
00:30:41.175 16:39:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # local -ga x722
00:30:41.175 16:39:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # mlx=()
00:30:41.175 16:39:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # local -ga mlx
00:30:41.175 16:39:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:30:41.175 16:39:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:30:41.175 16:39:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:30:41.175 16:39:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:30:41.175 16:39:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:30:41.175 16:39:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:30:41.175 16:39:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:30:41.175 16:39:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:30:41.175 16:39:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:30:41.175 16:39:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:30:41.175 16:39:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:30:41.175 16:39:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}")
00:30:41.175 16:39:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]]
00:30:41.175 16:39:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]]
00:30:41.175 16:39:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]]
00:30:41.175 16:39:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}")
00:30:41.175 16:39:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 ))
00:30:41.175 16:39:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:30:41.175 16:39:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)'
00:30:41.175 Found 0000:4b:00.0 (0x8086 - 0x159b)
00:30:41.175 16:39:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:30:41.175 16:39:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:30:41.175 16:39:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:30:41.175 16:39:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:30:41.175 16:39:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:30:41.175 16:39:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:30:41.175 16:39:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)'
00:30:41.175 Found 0000:4b:00.1 (0x8086 - 0x159b)
00:30:41.175 16:39:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:30:41.175 16:39:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:30:41.175 16:39:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:30:41.175 16:39:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:30:41.175 16:39:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:30:41.175 16:39:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 ))
00:30:41.175 16:39:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]]
00:30:41.175 16:39:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]]
00:30:41.175 16:39:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:30:41.175 16:39:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:30:41.175 16:39:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:30:41.175 16:39:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:30:41.175 16:39:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]]
00:30:41.175 16:39:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:30:41.175 16:39:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:30:41.175 16:39:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0'
00:30:41.175 Found net devices under 0000:4b:00.0: cvl_0_0
00:30:41.175 16:39:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:30:41.175 16:39:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:30:41.175 16:39:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:30:41.175 16:39:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:30:41.175 16:39:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:30:41.175 16:39:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]]
00:30:41.175 16:39:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:30:41.175 16:39:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:30:41.175 16:39:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1'
00:30:41.175 Found net devices under 0000:4b:00.1: cvl_0_1
00:30:41.175 16:39:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:30:41.175 16:39:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 ))
00:30:41.175 16:39:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # is_hw=yes
00:30:41.175 16:39:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]]
00:30:41.175 16:39:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]]
00:30:41.175 16:39:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init
00:30:41.175 16:39:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1
00:30:41.175 16:39:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:30:41.175 16:39:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:30:41.175 16:39:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 ))
00:30:41.175 16:39:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:30:41.175 16:39:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:30:41.175 16:39:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP=
00:30:41.175 16:39:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:30:41.175 16:39:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:30:41.175 16:39:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:30:41.175 16:39:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:30:41.175 16:39:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:30:41.175 16:39:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:30:41.175 16:39:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:30:41.175 16:39:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:30:41.175 16:39:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:30:41.175 16:39:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:30:41.175 16:39:06 nvmf_tcp.nvmf_target_disconnect
-- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:41.175 16:39:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:41.175 16:39:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:30:41.175 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:41.175 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.468 ms 00:30:41.175 00:30:41.175 --- 10.0.0.2 ping statistics --- 00:30:41.175 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:41.175 rtt min/avg/max/mdev = 0.468/0.468/0.468/0.000 ms 00:30:41.175 16:39:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:41.175 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:41.175 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.341 ms 00:30:41.175 00:30:41.175 --- 10.0.0.1 ping statistics --- 00:30:41.175 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:41.175 rtt min/avg/max/mdev = 0.341/0.341/0.341/0.000 ms 00:30:41.175 16:39:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:41.175 16:39:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@422 -- # return 0 00:30:41.175 16:39:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:30:41.175 16:39:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:41.175 16:39:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:30:41.175 16:39:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:30:41.175 16:39:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:41.175 16:39:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:30:41.175 16:39:06 
nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:30:41.175 16:39:06 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:30:41.175 16:39:06 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:30:41.176 16:39:06 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1106 -- # xtrace_disable 00:30:41.176 16:39:06 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:30:41.176 ************************************ 00:30:41.176 START TEST nvmf_target_disconnect_tc1 00:30:41.176 ************************************ 00:30:41.176 16:39:06 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1124 -- # nvmf_target_disconnect_tc1 00:30:41.176 16:39:06 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:41.176 16:39:06 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@649 -- # local es=0 00:30:41.176 16:39:06 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:41.176 16:39:06 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@637 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:30:41.176 16:39:06 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:30:41.176 16:39:06 
nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@641 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:30:41.176 16:39:06 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:30:41.176 16:39:06 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@643 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:30:41.176 16:39:06 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:30:41.176 16:39:06 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@643 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:30:41.176 16:39:06 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@643 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:30:41.176 16:39:06 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:41.176 EAL: No free 2048 kB hugepages reported on node 1 00:30:41.176 [2024-06-07 16:39:07.055056] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.176 [2024-06-07 16:39:07.055117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc21d0 with addr=10.0.0.2, port=4420 00:30:41.176 [2024-06-07 16:39:07.055139] nvme_tcp.c:2702:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:30:41.176 [2024-06-07 16:39:07.055150] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:30:41.176 [2024-06-07 16:39:07.055157] nvme.c: 
898:spdk_nvme_probe: *ERROR*: Create probe context failed 00:30:41.176 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:30:41.176 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:30:41.176 Initializing NVMe Controllers 00:30:41.176 16:39:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # es=1 00:30:41.176 16:39:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:30:41.176 16:39:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:30:41.176 16:39:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:30:41.176 00:30:41.176 real 0m0.115s 00:30:41.176 user 0m0.051s 00:30:41.176 sys 0m0.063s 00:30:41.176 16:39:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1125 -- # xtrace_disable 00:30:41.176 16:39:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:30:41.176 ************************************ 00:30:41.176 END TEST nvmf_target_disconnect_tc1 00:30:41.176 ************************************ 00:30:41.176 16:39:07 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:30:41.176 16:39:07 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:30:41.176 16:39:07 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1106 -- # xtrace_disable 00:30:41.176 16:39:07 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:30:41.176 ************************************ 00:30:41.176 START TEST nvmf_target_disconnect_tc2 00:30:41.176 ************************************ 00:30:41.176 16:39:07 
nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1124 -- # nvmf_target_disconnect_tc2 00:30:41.176 16:39:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:30:41.176 16:39:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:30:41.176 16:39:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:30:41.176 16:39:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@723 -- # xtrace_disable 00:30:41.176 16:39:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:41.176 16:39:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=3300132 00:30:41.176 16:39:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 3300132 00:30:41.176 16:39:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:30:41.176 16:39:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@830 -- # '[' -z 3300132 ']' 00:30:41.176 16:39:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:41.176 16:39:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # local max_retries=100 00:30:41.176 16:39:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:30:41.176 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:41.176 16:39:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # xtrace_disable 00:30:41.176 16:39:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:41.176 [2024-06-07 16:39:07.211724] Starting SPDK v24.09-pre git sha1 5a57befde / DPDK 24.03.0 initialization... 00:30:41.176 [2024-06-07 16:39:07.211780] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:41.176 EAL: No free 2048 kB hugepages reported on node 1 00:30:41.176 [2024-06-07 16:39:07.299350] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:41.176 [2024-06-07 16:39:07.392644] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:41.176 [2024-06-07 16:39:07.392701] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:41.176 [2024-06-07 16:39:07.392709] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:41.176 [2024-06-07 16:39:07.392716] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:41.176 [2024-06-07 16:39:07.392722] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:30:41.176 [2024-06-07 16:39:07.392891] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 5 00:30:41.176 [2024-06-07 16:39:07.393051] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 6 00:30:41.176 [2024-06-07 16:39:07.393210] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 4 00:30:41.176 [2024-06-07 16:39:07.393211] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 7 00:30:41.176 16:39:08 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:30:41.176 16:39:08 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@863 -- # return 0 00:30:41.176 16:39:08 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:30:41.176 16:39:08 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@729 -- # xtrace_disable 00:30:41.176 16:39:08 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:41.438 16:39:08 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:41.438 16:39:08 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:30:41.438 16:39:08 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:41.438 16:39:08 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:41.438 Malloc0 00:30:41.438 16:39:08 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:41.438 16:39:08 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 
00:30:41.438 16:39:08 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:41.438 16:39:08 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:41.438 [2024-06-07 16:39:08.078277] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:41.438 16:39:08 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:41.438 16:39:08 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:41.438 16:39:08 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:41.438 16:39:08 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:41.438 16:39:08 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:41.438 16:39:08 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:41.438 16:39:08 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:41.438 16:39:08 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:41.438 16:39:08 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:41.438 16:39:08 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:41.438 16:39:08 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # 
xtrace_disable 00:30:41.438 16:39:08 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:41.438 [2024-06-07 16:39:08.118643] tcp.c: 982:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:41.438 16:39:08 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:41.438 16:39:08 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:41.438 16:39:08 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:41.438 16:39:08 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:41.438 16:39:08 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:41.438 16:39:08 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=3300385 00:30:41.438 16:39:08 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:30:41.439 16:39:08 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:41.439 EAL: No free 2048 kB hugepages reported on node 1 00:30:43.354 16:39:10 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 3300132 00:30:43.354 16:39:10 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:30:43.354 Read completed with error (sct=0, sc=8) 00:30:43.354 starting I/O failed 00:30:43.354 Read completed with error 
(sct=0, sc=8) 00:30:43.354 starting I/O failed 00:30:43.354 Read completed with error (sct=0, sc=8) 00:30:43.354 starting I/O failed 00:30:43.354 Read completed with error (sct=0, sc=8) 00:30:43.354 starting I/O failed 00:30:43.354 Read completed with error (sct=0, sc=8) 00:30:43.354 starting I/O failed 00:30:43.354 Write completed with error (sct=0, sc=8) 00:30:43.354 starting I/O failed 00:30:43.354 Write completed with error (sct=0, sc=8) 00:30:43.354 starting I/O failed 00:30:43.354 Write completed with error (sct=0, sc=8) 00:30:43.354 starting I/O failed 00:30:43.354 Write completed with error (sct=0, sc=8) 00:30:43.354 starting I/O failed 00:30:43.354 Read completed with error (sct=0, sc=8) 00:30:43.354 starting I/O failed 00:30:43.354 Write completed with error (sct=0, sc=8) 00:30:43.354 starting I/O failed 00:30:43.354 Read completed with error (sct=0, sc=8) 00:30:43.354 starting I/O failed 00:30:43.354 Write completed with error (sct=0, sc=8) 00:30:43.354 starting I/O failed 00:30:43.354 Write completed with error (sct=0, sc=8) 00:30:43.354 starting I/O failed 00:30:43.354 Write completed with error (sct=0, sc=8) 00:30:43.354 starting I/O failed 00:30:43.354 Read completed with error (sct=0, sc=8) 00:30:43.354 starting I/O failed 00:30:43.354 Read completed with error (sct=0, sc=8) 00:30:43.354 starting I/O failed 00:30:43.354 Read completed with error (sct=0, sc=8) 00:30:43.354 starting I/O failed 00:30:43.354 Read completed with error (sct=0, sc=8) 00:30:43.354 starting I/O failed 00:30:43.354 Read completed with error (sct=0, sc=8) 00:30:43.354 starting I/O failed 00:30:43.354 Read completed with error (sct=0, sc=8) 00:30:43.354 starting I/O failed 00:30:43.354 Write completed with error (sct=0, sc=8) 00:30:43.354 starting I/O failed 00:30:43.354 Write completed with error (sct=0, sc=8) 00:30:43.354 starting I/O failed 00:30:43.354 Read completed with error (sct=0, sc=8) 00:30:43.354 starting I/O failed 00:30:43.354 Read completed with error (sct=0, 
sc=8) 00:30:43.354 starting I/O failed 00:30:43.354 Write completed with error (sct=0, sc=8) 00:30:43.354 starting I/O failed 00:30:43.354 Read completed with error (sct=0, sc=8) 00:30:43.354 starting I/O failed 00:30:43.354 Write completed with error (sct=0, sc=8) 00:30:43.354 starting I/O failed 00:30:43.354 Write completed with error (sct=0, sc=8) 00:30:43.354 starting I/O failed 00:30:43.354 Write completed with error (sct=0, sc=8) 00:30:43.354 starting I/O failed 00:30:43.354 Write completed with error (sct=0, sc=8) 00:30:43.354 starting I/O failed 00:30:43.354 Read completed with error (sct=0, sc=8) 00:30:43.354 starting I/O failed 00:30:43.354 [2024-06-07 16:39:10.151848] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:43.354 [2024-06-07 16:39:10.152206] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.354 [2024-06-07 16:39:10.152226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d0000b90 with addr=10.0.0.2, port=4420 00:30:43.354 qpair failed and we were unable to recover it. 00:30:43.354 [2024-06-07 16:39:10.152678] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.354 [2024-06-07 16:39:10.152715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d0000b90 with addr=10.0.0.2, port=4420 00:30:43.354 qpair failed and we were unable to recover it. 00:30:43.354 [2024-06-07 16:39:10.153155] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.354 [2024-06-07 16:39:10.153170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d0000b90 with addr=10.0.0.2, port=4420 00:30:43.354 qpair failed and we were unable to recover it. 
00:30:43.354 [2024-06-07 16:39:10.153630] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.354 [2024-06-07 16:39:10.153668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d0000b90 with addr=10.0.0.2, port=4420 00:30:43.354 qpair failed and we were unable to recover it. 00:30:43.354 [2024-06-07 16:39:10.153992] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.354 [2024-06-07 16:39:10.154006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d0000b90 with addr=10.0.0.2, port=4420 00:30:43.354 qpair failed and we were unable to recover it. 00:30:43.354 [2024-06-07 16:39:10.154252] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.354 [2024-06-07 16:39:10.154264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d0000b90 with addr=10.0.0.2, port=4420 00:30:43.354 qpair failed and we were unable to recover it. 00:30:43.354 [2024-06-07 16:39:10.154698] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.354 [2024-06-07 16:39:10.154734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d0000b90 with addr=10.0.0.2, port=4420 00:30:43.354 qpair failed and we were unable to recover it. 00:30:43.354 [2024-06-07 16:39:10.155014] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.354 [2024-06-07 16:39:10.155028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d0000b90 with addr=10.0.0.2, port=4420 00:30:43.354 qpair failed and we were unable to recover it. 
00:30:43.354 [2024-06-07 16:39:10.155340] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.354 [2024-06-07 16:39:10.155352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d0000b90 with addr=10.0.0.2, port=4420 00:30:43.354 qpair failed and we were unable to recover it. 00:30:43.354 [2024-06-07 16:39:10.155617] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.354 [2024-06-07 16:39:10.155629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d0000b90 with addr=10.0.0.2, port=4420 00:30:43.354 qpair failed and we were unable to recover it. 00:30:43.354 [2024-06-07 16:39:10.155916] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.354 [2024-06-07 16:39:10.155928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d0000b90 with addr=10.0.0.2, port=4420 00:30:43.354 qpair failed and we were unable to recover it. 00:30:43.354 [2024-06-07 16:39:10.156265] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.354 [2024-06-07 16:39:10.156277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d0000b90 with addr=10.0.0.2, port=4420 00:30:43.354 qpair failed and we were unable to recover it. 00:30:43.354 [2024-06-07 16:39:10.156443] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.354 [2024-06-07 16:39:10.156454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d0000b90 with addr=10.0.0.2, port=4420 00:30:43.354 qpair failed and we were unable to recover it. 
00:30:43.354 [2024-06-07 16:39:10.156751] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.354 [2024-06-07 16:39:10.156763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d0000b90 with addr=10.0.0.2, port=4420 00:30:43.354 qpair failed and we were unable to recover it. 00:30:43.354 [2024-06-07 16:39:10.157164] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.354 [2024-06-07 16:39:10.157175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d0000b90 with addr=10.0.0.2, port=4420 00:30:43.354 qpair failed and we were unable to recover it. 00:30:43.354 [2024-06-07 16:39:10.157525] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.354 [2024-06-07 16:39:10.157537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d0000b90 with addr=10.0.0.2, port=4420 00:30:43.354 qpair failed and we were unable to recover it. 00:30:43.354 [2024-06-07 16:39:10.157904] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.354 [2024-06-07 16:39:10.157915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d0000b90 with addr=10.0.0.2, port=4420 00:30:43.354 qpair failed and we were unable to recover it. 00:30:43.354 [2024-06-07 16:39:10.158288] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.354 [2024-06-07 16:39:10.158300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d0000b90 with addr=10.0.0.2, port=4420 00:30:43.354 qpair failed and we were unable to recover it. 
00:30:43.354 [2024-06-07 16:39:10.158655] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.354 [2024-06-07 16:39:10.158667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d0000b90 with addr=10.0.0.2, port=4420 00:30:43.354 qpair failed and we were unable to recover it. 00:30:43.354 [2024-06-07 16:39:10.159033] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.354 [2024-06-07 16:39:10.159044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d0000b90 with addr=10.0.0.2, port=4420 00:30:43.354 qpair failed and we were unable to recover it. 00:30:43.354 [2024-06-07 16:39:10.159445] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.354 [2024-06-07 16:39:10.159456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d0000b90 with addr=10.0.0.2, port=4420 00:30:43.354 qpair failed and we were unable to recover it. 00:30:43.354 [2024-06-07 16:39:10.159635] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.354 [2024-06-07 16:39:10.159647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d0000b90 with addr=10.0.0.2, port=4420 00:30:43.354 qpair failed and we were unable to recover it. 00:30:43.354 [2024-06-07 16:39:10.159997] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.354 [2024-06-07 16:39:10.160009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d0000b90 with addr=10.0.0.2, port=4420 00:30:43.354 qpair failed and we were unable to recover it. 
00:30:43.354 [2024-06-07 16:39:10.160408] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.355 [2024-06-07 16:39:10.160420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d0000b90 with addr=10.0.0.2, port=4420
00:30:43.355 qpair failed and we were unable to recover it.
00:30:43.355 [2024-06-07 16:39:10.160794] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.355 [2024-06-07 16:39:10.160805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d0000b90 with addr=10.0.0.2, port=4420
00:30:43.355 qpair failed and we were unable to recover it.
00:30:43.355 [2024-06-07 16:39:10.161198] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.355 [2024-06-07 16:39:10.161210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d0000b90 with addr=10.0.0.2, port=4420
00:30:43.355 qpair failed and we were unable to recover it.
00:30:43.355 [2024-06-07 16:39:10.161622] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.355 [2024-06-07 16:39:10.161632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d0000b90 with addr=10.0.0.2, port=4420
00:30:43.355 qpair failed and we were unable to recover it.
00:30:43.355 [2024-06-07 16:39:10.162025] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.355 [2024-06-07 16:39:10.162036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d0000b90 with addr=10.0.0.2, port=4420
00:30:43.355 qpair failed and we were unable to recover it.
00:30:43.355 [2024-06-07 16:39:10.162493] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.355 [2024-06-07 16:39:10.162504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d0000b90 with addr=10.0.0.2, port=4420
00:30:43.355 qpair failed and we were unable to recover it.
00:30:43.355 [2024-06-07 16:39:10.162883] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.355 [2024-06-07 16:39:10.162893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d0000b90 with addr=10.0.0.2, port=4420
00:30:43.355 qpair failed and we were unable to recover it.
00:30:43.355 [2024-06-07 16:39:10.163330] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.355 [2024-06-07 16:39:10.163341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d0000b90 with addr=10.0.0.2, port=4420
00:30:43.355 qpair failed and we were unable to recover it.
00:30:43.355 [2024-06-07 16:39:10.163717] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.355 [2024-06-07 16:39:10.163730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d0000b90 with addr=10.0.0.2, port=4420
00:30:43.355 qpair failed and we were unable to recover it.
00:30:43.355 [2024-06-07 16:39:10.164011] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.355 [2024-06-07 16:39:10.164021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d0000b90 with addr=10.0.0.2, port=4420
00:30:43.355 qpair failed and we were unable to recover it.
00:30:43.355 [2024-06-07 16:39:10.164386] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.355 [2024-06-07 16:39:10.164397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d0000b90 with addr=10.0.0.2, port=4420
00:30:43.355 qpair failed and we were unable to recover it.
00:30:43.355 [2024-06-07 16:39:10.164823] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.355 [2024-06-07 16:39:10.164834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d0000b90 with addr=10.0.0.2, port=4420
00:30:43.355 qpair failed and we were unable to recover it.
00:30:43.355 [2024-06-07 16:39:10.165214] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.355 [2024-06-07 16:39:10.165225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d0000b90 with addr=10.0.0.2, port=4420
00:30:43.355 qpair failed and we were unable to recover it.
00:30:43.355 [2024-06-07 16:39:10.165710] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.355 [2024-06-07 16:39:10.165748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d0000b90 with addr=10.0.0.2, port=4420
00:30:43.355 qpair failed and we were unable to recover it.
00:30:43.355 [2024-06-07 16:39:10.166068] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.355 [2024-06-07 16:39:10.166081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d0000b90 with addr=10.0.0.2, port=4420
00:30:43.355 qpair failed and we were unable to recover it.
00:30:43.355 [2024-06-07 16:39:10.166436] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.355 [2024-06-07 16:39:10.166448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d0000b90 with addr=10.0.0.2, port=4420
00:30:43.355 qpair failed and we were unable to recover it.
00:30:43.355 [2024-06-07 16:39:10.166806] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.355 [2024-06-07 16:39:10.166817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d0000b90 with addr=10.0.0.2, port=4420
00:30:43.355 qpair failed and we were unable to recover it.
00:30:43.355 [2024-06-07 16:39:10.167157] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.355 [2024-06-07 16:39:10.167168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d0000b90 with addr=10.0.0.2, port=4420
00:30:43.355 qpair failed and we were unable to recover it.
00:30:43.355 [2024-06-07 16:39:10.167564] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.355 [2024-06-07 16:39:10.167575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d0000b90 with addr=10.0.0.2, port=4420
00:30:43.355 qpair failed and we were unable to recover it.
00:30:43.355 [2024-06-07 16:39:10.167888] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.355 [2024-06-07 16:39:10.167899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d0000b90 with addr=10.0.0.2, port=4420
00:30:43.355 qpair failed and we were unable to recover it.
00:30:43.355 [2024-06-07 16:39:10.168215] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.355 [2024-06-07 16:39:10.168226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d0000b90 with addr=10.0.0.2, port=4420
00:30:43.355 qpair failed and we were unable to recover it.
00:30:43.355 [2024-06-07 16:39:10.168613] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.355 [2024-06-07 16:39:10.168623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d0000b90 with addr=10.0.0.2, port=4420
00:30:43.355 qpair failed and we were unable to recover it.
00:30:43.355 [2024-06-07 16:39:10.169025] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.355 [2024-06-07 16:39:10.169036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d0000b90 with addr=10.0.0.2, port=4420
00:30:43.355 qpair failed and we were unable to recover it.
00:30:43.355 [2024-06-07 16:39:10.169435] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.355 [2024-06-07 16:39:10.169446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d0000b90 with addr=10.0.0.2, port=4420
00:30:43.355 qpair failed and we were unable to recover it.
00:30:43.355 [2024-06-07 16:39:10.169727] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.355 [2024-06-07 16:39:10.169737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d0000b90 with addr=10.0.0.2, port=4420
00:30:43.355 qpair failed and we were unable to recover it.
00:30:43.355 [2024-06-07 16:39:10.170066] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.355 [2024-06-07 16:39:10.170076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d0000b90 with addr=10.0.0.2, port=4420
00:30:43.355 qpair failed and we were unable to recover it.
00:30:43.355 [2024-06-07 16:39:10.170307] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.355 [2024-06-07 16:39:10.170317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d0000b90 with addr=10.0.0.2, port=4420
00:30:43.355 qpair failed and we were unable to recover it.
00:30:43.355 [2024-06-07 16:39:10.170660] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.355 [2024-06-07 16:39:10.170672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d0000b90 with addr=10.0.0.2, port=4420
00:30:43.355 qpair failed and we were unable to recover it.
00:30:43.355 [2024-06-07 16:39:10.171056] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.355 [2024-06-07 16:39:10.171067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d0000b90 with addr=10.0.0.2, port=4420
00:30:43.355 qpair failed and we were unable to recover it.
00:30:43.355 [2024-06-07 16:39:10.171462] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.355 [2024-06-07 16:39:10.171473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d0000b90 with addr=10.0.0.2, port=4420
00:30:43.355 qpair failed and we were unable to recover it.
00:30:43.355 [2024-06-07 16:39:10.171741] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.355 [2024-06-07 16:39:10.171752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d0000b90 with addr=10.0.0.2, port=4420
00:30:43.355 qpair failed and we were unable to recover it.
00:30:43.355 [2024-06-07 16:39:10.171986] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.355 [2024-06-07 16:39:10.171996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d0000b90 with addr=10.0.0.2, port=4420
00:30:43.355 qpair failed and we were unable to recover it.
00:30:43.355 [2024-06-07 16:39:10.172373] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.355 [2024-06-07 16:39:10.172384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d0000b90 with addr=10.0.0.2, port=4420
00:30:43.355 qpair failed and we were unable to recover it.
00:30:43.355 [2024-06-07 16:39:10.172781] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.355 [2024-06-07 16:39:10.172792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d0000b90 with addr=10.0.0.2, port=4420
00:30:43.355 qpair failed and we were unable to recover it.
00:30:43.355 [2024-06-07 16:39:10.173164] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.355 [2024-06-07 16:39:10.173175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d0000b90 with addr=10.0.0.2, port=4420
00:30:43.355 qpair failed and we were unable to recover it.
00:30:43.355 [2024-06-07 16:39:10.173566] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.355 [2024-06-07 16:39:10.173577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d0000b90 with addr=10.0.0.2, port=4420
00:30:43.355 qpair failed and we were unable to recover it.
00:30:43.355 [2024-06-07 16:39:10.173971] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.355 [2024-06-07 16:39:10.173982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d0000b90 with addr=10.0.0.2, port=4420
00:30:43.355 qpair failed and we were unable to recover it.
00:30:43.355 [2024-06-07 16:39:10.174329] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.355 [2024-06-07 16:39:10.174340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d0000b90 with addr=10.0.0.2, port=4420
00:30:43.355 qpair failed and we were unable to recover it.
00:30:43.356 [2024-06-07 16:39:10.174699] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.356 [2024-06-07 16:39:10.174710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d0000b90 with addr=10.0.0.2, port=4420
00:30:43.356 qpair failed and we were unable to recover it.
00:30:43.356 [2024-06-07 16:39:10.175104] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.356 [2024-06-07 16:39:10.175114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d0000b90 with addr=10.0.0.2, port=4420
00:30:43.356 qpair failed and we were unable to recover it.
00:30:43.356 [2024-06-07 16:39:10.175387] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.356 [2024-06-07 16:39:10.175398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d0000b90 with addr=10.0.0.2, port=4420
00:30:43.356 qpair failed and we were unable to recover it.
00:30:43.356 [2024-06-07 16:39:10.175641] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.356 [2024-06-07 16:39:10.175651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d0000b90 with addr=10.0.0.2, port=4420
00:30:43.356 qpair failed and we were unable to recover it.
00:30:43.356 [2024-06-07 16:39:10.176045] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.356 [2024-06-07 16:39:10.176056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d0000b90 with addr=10.0.0.2, port=4420
00:30:43.356 qpair failed and we were unable to recover it.
00:30:43.356 [2024-06-07 16:39:10.176462] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.356 [2024-06-07 16:39:10.176472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d0000b90 with addr=10.0.0.2, port=4420
00:30:43.356 qpair failed and we were unable to recover it.
00:30:43.356 [2024-06-07 16:39:10.176856] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.356 [2024-06-07 16:39:10.176866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d0000b90 with addr=10.0.0.2, port=4420
00:30:43.356 qpair failed and we were unable to recover it.
00:30:43.356 [2024-06-07 16:39:10.177185] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.356 [2024-06-07 16:39:10.177195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d0000b90 with addr=10.0.0.2, port=4420
00:30:43.356 qpair failed and we were unable to recover it.
00:30:43.356 [2024-06-07 16:39:10.177563] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.356 [2024-06-07 16:39:10.177574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d0000b90 with addr=10.0.0.2, port=4420
00:30:43.356 qpair failed and we were unable to recover it.
00:30:43.356 [2024-06-07 16:39:10.177929] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.356 [2024-06-07 16:39:10.177939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d0000b90 with addr=10.0.0.2, port=4420
00:30:43.356 qpair failed and we were unable to recover it.
00:30:43.356 [2024-06-07 16:39:10.178289] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.356 [2024-06-07 16:39:10.178305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d0000b90 with addr=10.0.0.2, port=4420
00:30:43.356 qpair failed and we were unable to recover it.
00:30:43.356 [2024-06-07 16:39:10.178688] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.356 [2024-06-07 16:39:10.178699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d0000b90 with addr=10.0.0.2, port=4420
00:30:43.356 qpair failed and we were unable to recover it.
00:30:43.356 [2024-06-07 16:39:10.179107] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.356 [2024-06-07 16:39:10.179120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d0000b90 with addr=10.0.0.2, port=4420
00:30:43.356 qpair failed and we were unable to recover it.
00:30:43.356 [2024-06-07 16:39:10.179518] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.356 [2024-06-07 16:39:10.179532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d0000b90 with addr=10.0.0.2, port=4420
00:30:43.356 qpair failed and we were unable to recover it.
00:30:43.356 [2024-06-07 16:39:10.179943] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.356 [2024-06-07 16:39:10.179957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d0000b90 with addr=10.0.0.2, port=4420
00:30:43.356 qpair failed and we were unable to recover it.
00:30:43.356 [2024-06-07 16:39:10.180312] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.356 [2024-06-07 16:39:10.180324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d0000b90 with addr=10.0.0.2, port=4420
00:30:43.356 qpair failed and we were unable to recover it.
00:30:43.356 [2024-06-07 16:39:10.180788] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.356 [2024-06-07 16:39:10.180801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d0000b90 with addr=10.0.0.2, port=4420
00:30:43.356 qpair failed and we were unable to recover it.
00:30:43.356 [2024-06-07 16:39:10.181154] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.356 [2024-06-07 16:39:10.181167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d0000b90 with addr=10.0.0.2, port=4420
00:30:43.356 qpair failed and we were unable to recover it.
00:30:43.356 [2024-06-07 16:39:10.181537] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.356 [2024-06-07 16:39:10.181551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d0000b90 with addr=10.0.0.2, port=4420
00:30:43.356 qpair failed and we were unable to recover it.
00:30:43.356 [2024-06-07 16:39:10.181921] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.356 [2024-06-07 16:39:10.181934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d0000b90 with addr=10.0.0.2, port=4420
00:30:43.356 qpair failed and we were unable to recover it.
00:30:43.356 [2024-06-07 16:39:10.182191] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.356 [2024-06-07 16:39:10.182204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d0000b90 with addr=10.0.0.2, port=4420
00:30:43.356 qpair failed and we were unable to recover it.
00:30:43.356 [2024-06-07 16:39:10.182593] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.356 [2024-06-07 16:39:10.182607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d0000b90 with addr=10.0.0.2, port=4420
00:30:43.356 qpair failed and we were unable to recover it.
00:30:43.356 [2024-06-07 16:39:10.182987] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.356 [2024-06-07 16:39:10.183000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d0000b90 with addr=10.0.0.2, port=4420
00:30:43.356 qpair failed and we were unable to recover it.
00:30:43.356 [2024-06-07 16:39:10.183412] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.356 [2024-06-07 16:39:10.183425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d0000b90 with addr=10.0.0.2, port=4420
00:30:43.356 qpair failed and we were unable to recover it.
00:30:43.356 [2024-06-07 16:39:10.183810] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.356 [2024-06-07 16:39:10.183823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d0000b90 with addr=10.0.0.2, port=4420
00:30:43.356 qpair failed and we were unable to recover it.
00:30:43.356 [2024-06-07 16:39:10.184218] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.356 [2024-06-07 16:39:10.184230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d0000b90 with addr=10.0.0.2, port=4420
00:30:43.356 qpair failed and we were unable to recover it.
00:30:43.356 [2024-06-07 16:39:10.184662] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.356 [2024-06-07 16:39:10.184706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d0000b90 with addr=10.0.0.2, port=4420
00:30:43.356 qpair failed and we were unable to recover it.
00:30:43.356 [2024-06-07 16:39:10.185097] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.356 [2024-06-07 16:39:10.185113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d0000b90 with addr=10.0.0.2, port=4420
00:30:43.356 qpair failed and we were unable to recover it.
00:30:43.356 [2024-06-07 16:39:10.185310] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.356 [2024-06-07 16:39:10.185324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d0000b90 with addr=10.0.0.2, port=4420
00:30:43.356 qpair failed and we were unable to recover it.
00:30:43.356 [2024-06-07 16:39:10.185674] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.356 [2024-06-07 16:39:10.185689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d0000b90 with addr=10.0.0.2, port=4420
00:30:43.356 qpair failed and we were unable to recover it.
00:30:43.356 [2024-06-07 16:39:10.186020] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.356 [2024-06-07 16:39:10.186033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d0000b90 with addr=10.0.0.2, port=4420
00:30:43.356 qpair failed and we were unable to recover it.
00:30:43.356 [2024-06-07 16:39:10.186372] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.356 [2024-06-07 16:39:10.186385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d0000b90 with addr=10.0.0.2, port=4420
00:30:43.356 qpair failed and we were unable to recover it.
00:30:43.356 [2024-06-07 16:39:10.186742] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.356 [2024-06-07 16:39:10.186755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d0000b90 with addr=10.0.0.2, port=4420
00:30:43.356 qpair failed and we were unable to recover it.
00:30:43.356 [2024-06-07 16:39:10.187092] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.356 [2024-06-07 16:39:10.187105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d0000b90 with addr=10.0.0.2, port=4420
00:30:43.356 qpair failed and we were unable to recover it.
00:30:43.356 [2024-06-07 16:39:10.187479] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.356 [2024-06-07 16:39:10.187493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d0000b90 with addr=10.0.0.2, port=4420
00:30:43.356 qpair failed and we were unable to recover it.
00:30:43.356 [2024-06-07 16:39:10.187833] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.356 [2024-06-07 16:39:10.187846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d0000b90 with addr=10.0.0.2, port=4420
00:30:43.356 qpair failed and we were unable to recover it.
00:30:43.356 [2024-06-07 16:39:10.188185] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.356 [2024-06-07 16:39:10.188199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d0000b90 with addr=10.0.0.2, port=4420
00:30:43.356 qpair failed and we were unable to recover it.
00:30:43.356 Read completed with error (sct=0, sc=8)
00:30:43.356 starting I/O failed
00:30:43.356 Read completed with error (sct=0, sc=8)
00:30:43.357 starting I/O failed
00:30:43.357 Read completed with error (sct=0, sc=8)
00:30:43.357 starting I/O failed
00:30:43.357 Read completed with error (sct=0, sc=8)
00:30:43.357 starting I/O failed
00:30:43.357 Read completed with error (sct=0, sc=8)
00:30:43.357 starting I/O failed
00:30:43.357 Read completed with error (sct=0, sc=8)
00:30:43.357 starting I/O failed
00:30:43.357 Read completed with error (sct=0, sc=8)
00:30:43.357 starting I/O failed
00:30:43.357 Read completed with error (sct=0, sc=8)
00:30:43.357 starting I/O failed
00:30:43.357 Read completed with error (sct=0, sc=8)
00:30:43.357 starting I/O failed
00:30:43.357 Read completed with error (sct=0, sc=8)
00:30:43.357 starting I/O failed
00:30:43.357 Read completed with error (sct=0, sc=8)
00:30:43.357 starting I/O failed
00:30:43.357 Read completed with error (sct=0, sc=8)
00:30:43.357 starting I/O failed
00:30:43.357 Write completed with error (sct=0, sc=8)
00:30:43.357 starting I/O failed
00:30:43.357 Write completed with error (sct=0, sc=8)
00:30:43.357 starting I/O failed
00:30:43.357 Write completed with error (sct=0, sc=8)
00:30:43.357 starting I/O failed
00:30:43.357 Read completed with error (sct=0, sc=8)
00:30:43.357 starting I/O failed
00:30:43.357 Write completed with error (sct=0, sc=8)
00:30:43.357 starting I/O failed
00:30:43.357 Write completed with error (sct=0, sc=8)
00:30:43.357 starting I/O failed
00:30:43.357 Read completed with error (sct=0, sc=8)
00:30:43.357 starting I/O failed
00:30:43.357 Write completed with error (sct=0, sc=8)
00:30:43.357 starting I/O failed
00:30:43.357 Read completed with error (sct=0, sc=8)
00:30:43.357 starting I/O failed
00:30:43.357 Write completed with error (sct=0, sc=8)
00:30:43.357 starting I/O failed
00:30:43.357 Read completed with error (sct=0, sc=8)
00:30:43.357 starting I/O failed
00:30:43.357 Write completed with error (sct=0, sc=8)
00:30:43.357 starting I/O failed
00:30:43.357 Write completed with error (sct=0, sc=8)
00:30:43.357 starting I/O failed
00:30:43.357 Write completed with error (sct=0, sc=8)
00:30:43.357 starting I/O failed
00:30:43.357 Write completed with error (sct=0, sc=8)
00:30:43.357 starting I/O failed
00:30:43.357 Write completed with error (sct=0, sc=8)
00:30:43.357 starting I/O failed
00:30:43.357 Write completed with error (sct=0, sc=8)
00:30:43.357 starting I/O failed
00:30:43.357 Write completed with error (sct=0, sc=8)
00:30:43.357 starting I/O failed
00:30:43.357 Read completed with error (sct=0, sc=8)
00:30:43.357 starting I/O failed
00:30:43.357 Write completed with error (sct=0, sc=8)
00:30:43.357 starting I/O failed
00:30:43.357 [2024-06-07 16:39:10.188391] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:43.357 [2024-06-07 16:39:10.188791] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.357 [2024-06-07 16:39:10.188803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.357 qpair failed and we were unable to recover it.
00:30:43.357 [2024-06-07 16:39:10.189191] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.357 [2024-06-07 16:39:10.189201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.357 qpair failed and we were unable to recover it.
00:30:43.357 [2024-06-07 16:39:10.189506] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.357 [2024-06-07 16:39:10.189517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.357 qpair failed and we were unable to recover it.
00:30:43.357 [2024-06-07 16:39:10.189760] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.357 [2024-06-07 16:39:10.189767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.357 qpair failed and we were unable to recover it.
00:30:43.357 [2024-06-07 16:39:10.190150] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.357 [2024-06-07 16:39:10.190158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.357 qpair failed and we were unable to recover it.
00:30:43.357 [2024-06-07 16:39:10.190654] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.357 [2024-06-07 16:39:10.190683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.357 qpair failed and we were unable to recover it.
00:30:43.357 [2024-06-07 16:39:10.191035] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.357 [2024-06-07 16:39:10.191045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.357 qpair failed and we were unable to recover it.
00:30:43.357 [2024-06-07 16:39:10.191624] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.357 [2024-06-07 16:39:10.191653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.357 qpair failed and we were unable to recover it.
00:30:43.357 [2024-06-07 16:39:10.192045] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.357 [2024-06-07 16:39:10.192054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.357 qpair failed and we were unable to recover it.
00:30:43.357 [2024-06-07 16:39:10.192622] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.357 [2024-06-07 16:39:10.192651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.357 qpair failed and we were unable to recover it.
00:30:43.357 [2024-06-07 16:39:10.192917] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.357 [2024-06-07 16:39:10.192926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.357 qpair failed and we were unable to recover it.
00:30:43.357 [2024-06-07 16:39:10.193321] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.357 [2024-06-07 16:39:10.193329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.357 qpair failed and we were unable to recover it.
00:30:43.357 [2024-06-07 16:39:10.193881] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.357 [2024-06-07 16:39:10.193896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.357 qpair failed and we were unable to recover it.
00:30:43.357 [2024-06-07 16:39:10.194262] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.357 [2024-06-07 16:39:10.194270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.357 qpair failed and we were unable to recover it.
00:30:43.357 [2024-06-07 16:39:10.194633] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.357 [2024-06-07 16:39:10.194643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.357 qpair failed and we were unable to recover it.
00:30:43.357 [2024-06-07 16:39:10.194965] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.357 [2024-06-07 16:39:10.194974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.357 qpair failed and we were unable to recover it.
00:30:43.357 [2024-06-07 16:39:10.195193] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.357 [2024-06-07 16:39:10.195200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.357 qpair failed and we were unable to recover it.
00:30:43.357 [2024-06-07 16:39:10.195577] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.357 [2024-06-07 16:39:10.195587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.357 qpair failed and we were unable to recover it.
00:30:43.357 [2024-06-07 16:39:10.195955] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.357 [2024-06-07 16:39:10.195964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.357 qpair failed and we were unable to recover it.
00:30:43.357 [2024-06-07 16:39:10.196345] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.357 [2024-06-07 16:39:10.196353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.357 qpair failed and we were unable to recover it.
00:30:43.357 [2024-06-07 16:39:10.196767] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.357 [2024-06-07 16:39:10.196776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.357 qpair failed and we were unable to recover it.
00:30:43.357 [2024-06-07 16:39:10.197120] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.357 [2024-06-07 16:39:10.197128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.357 qpair failed and we were unable to recover it.
00:30:43.357 [2024-06-07 16:39:10.197458] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.357 [2024-06-07 16:39:10.197466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.357 qpair failed and we were unable to recover it.
00:30:43.357 [2024-06-07 16:39:10.197802] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.357 [2024-06-07 16:39:10.197813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.357 qpair failed and we were unable to recover it.
00:30:43.357 [2024-06-07 16:39:10.198155] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.357 [2024-06-07 16:39:10.198163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.357 qpair failed and we were unable to recover it.
00:30:43.357 [2024-06-07 16:39:10.198421] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.357 [2024-06-07 16:39:10.198429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.357 qpair failed and we were unable to recover it.
00:30:43.357 [2024-06-07 16:39:10.198775] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.357 [2024-06-07 16:39:10.198783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.357 qpair failed and we were unable to recover it.
00:30:43.357 [2024-06-07 16:39:10.198969] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.357 [2024-06-07 16:39:10.198978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.357 qpair failed and we were unable to recover it.
00:30:43.358 [2024-06-07 16:39:10.199361] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.358 [2024-06-07 16:39:10.199369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.358 qpair failed and we were unable to recover it.
00:30:43.358 [2024-06-07 16:39:10.199601] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.358 [2024-06-07 16:39:10.199608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.358 qpair failed and we were unable to recover it.
00:30:43.358 [2024-06-07 16:39:10.199861] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.358 [2024-06-07 16:39:10.199870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.358 qpair failed and we were unable to recover it.
00:30:43.358 [2024-06-07 16:39:10.200234] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.358 [2024-06-07 16:39:10.200242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.358 qpair failed and we were unable to recover it.
00:30:43.358 [2024-06-07 16:39:10.200520] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.358 [2024-06-07 16:39:10.200528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.358 qpair failed and we were unable to recover it.
00:30:43.358 [2024-06-07 16:39:10.200881] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.358 [2024-06-07 16:39:10.200896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.358 qpair failed and we were unable to recover it. 00:30:43.358 [2024-06-07 16:39:10.201281] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.358 [2024-06-07 16:39:10.201289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.358 qpair failed and we were unable to recover it. 00:30:43.358 [2024-06-07 16:39:10.201713] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.358 [2024-06-07 16:39:10.201721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.358 qpair failed and we were unable to recover it. 00:30:43.358 [2024-06-07 16:39:10.201991] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.358 [2024-06-07 16:39:10.202000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.358 qpair failed and we were unable to recover it. 00:30:43.358 [2024-06-07 16:39:10.202357] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.358 [2024-06-07 16:39:10.202365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.358 qpair failed and we were unable to recover it. 
00:30:43.358 [2024-06-07 16:39:10.202549] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.358 [2024-06-07 16:39:10.202558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.358 qpair failed and we were unable to recover it. 00:30:43.358 [2024-06-07 16:39:10.202858] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.358 [2024-06-07 16:39:10.202865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.358 qpair failed and we were unable to recover it. 00:30:43.358 [2024-06-07 16:39:10.203224] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.358 [2024-06-07 16:39:10.203232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.358 qpair failed and we were unable to recover it. 00:30:43.358 [2024-06-07 16:39:10.203526] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.358 [2024-06-07 16:39:10.203534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.358 qpair failed and we were unable to recover it. 00:30:43.358 [2024-06-07 16:39:10.203875] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.358 [2024-06-07 16:39:10.203882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.358 qpair failed and we were unable to recover it. 
00:30:43.358 [2024-06-07 16:39:10.204241] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.358 [2024-06-07 16:39:10.204249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.358 qpair failed and we were unable to recover it. 00:30:43.358 [2024-06-07 16:39:10.204481] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.358 [2024-06-07 16:39:10.204489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.358 qpair failed and we were unable to recover it. 00:30:43.638 [2024-06-07 16:39:10.204933] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.638 [2024-06-07 16:39:10.204942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.638 qpair failed and we were unable to recover it. 00:30:43.638 [2024-06-07 16:39:10.205296] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.638 [2024-06-07 16:39:10.205305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.638 qpair failed and we were unable to recover it. 00:30:43.638 [2024-06-07 16:39:10.205588] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.638 [2024-06-07 16:39:10.205596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.638 qpair failed and we were unable to recover it. 
00:30:43.638 [2024-06-07 16:39:10.205848] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.638 [2024-06-07 16:39:10.205856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.638 qpair failed and we were unable to recover it. 00:30:43.638 [2024-06-07 16:39:10.206216] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.638 [2024-06-07 16:39:10.206225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.638 qpair failed and we were unable to recover it. 00:30:43.638 [2024-06-07 16:39:10.206607] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.638 [2024-06-07 16:39:10.206616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.638 qpair failed and we were unable to recover it. 00:30:43.638 [2024-06-07 16:39:10.206986] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.638 [2024-06-07 16:39:10.206993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.638 qpair failed and we were unable to recover it. 00:30:43.638 [2024-06-07 16:39:10.207377] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.638 [2024-06-07 16:39:10.207385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.638 qpair failed and we were unable to recover it. 
00:30:43.638 [2024-06-07 16:39:10.207764] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.638 [2024-06-07 16:39:10.207772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.638 qpair failed and we were unable to recover it. 00:30:43.638 [2024-06-07 16:39:10.208155] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.638 [2024-06-07 16:39:10.208163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.638 qpair failed and we were unable to recover it. 00:30:43.638 [2024-06-07 16:39:10.208569] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.638 [2024-06-07 16:39:10.208578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.638 qpair failed and we were unable to recover it. 00:30:43.638 [2024-06-07 16:39:10.208935] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.638 [2024-06-07 16:39:10.208943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.638 qpair failed and we were unable to recover it. 00:30:43.638 [2024-06-07 16:39:10.209216] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.638 [2024-06-07 16:39:10.209224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.638 qpair failed and we were unable to recover it. 
00:30:43.638 [2024-06-07 16:39:10.209599] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.638 [2024-06-07 16:39:10.209608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.638 qpair failed and we were unable to recover it. 00:30:43.638 [2024-06-07 16:39:10.209987] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.638 [2024-06-07 16:39:10.209995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.638 qpair failed and we were unable to recover it. 00:30:43.638 [2024-06-07 16:39:10.210383] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.638 [2024-06-07 16:39:10.210391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.638 qpair failed and we were unable to recover it. 00:30:43.638 [2024-06-07 16:39:10.210753] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.638 [2024-06-07 16:39:10.210762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.638 qpair failed and we were unable to recover it. 00:30:43.638 [2024-06-07 16:39:10.211123] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.638 [2024-06-07 16:39:10.211130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.638 qpair failed and we were unable to recover it. 
00:30:43.638 [2024-06-07 16:39:10.211360] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.638 [2024-06-07 16:39:10.211368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.638 qpair failed and we were unable to recover it. 00:30:43.638 [2024-06-07 16:39:10.211756] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.638 [2024-06-07 16:39:10.211764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.638 qpair failed and we were unable to recover it. 00:30:43.638 [2024-06-07 16:39:10.212095] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.638 [2024-06-07 16:39:10.212103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.638 qpair failed and we were unable to recover it. 00:30:43.638 [2024-06-07 16:39:10.212487] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.638 [2024-06-07 16:39:10.212495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.638 qpair failed and we were unable to recover it. 00:30:43.638 [2024-06-07 16:39:10.212828] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.638 [2024-06-07 16:39:10.212836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.638 qpair failed and we were unable to recover it. 
00:30:43.638 [2024-06-07 16:39:10.213040] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.638 [2024-06-07 16:39:10.213048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.638 qpair failed and we were unable to recover it. 00:30:43.638 [2024-06-07 16:39:10.213289] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.638 [2024-06-07 16:39:10.213297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.638 qpair failed and we were unable to recover it. 00:30:43.638 [2024-06-07 16:39:10.213538] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.638 [2024-06-07 16:39:10.213547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.638 qpair failed and we were unable to recover it. 00:30:43.638 [2024-06-07 16:39:10.213938] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.638 [2024-06-07 16:39:10.213947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.638 qpair failed and we were unable to recover it. 00:30:43.638 [2024-06-07 16:39:10.214302] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.638 [2024-06-07 16:39:10.214310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.638 qpair failed and we were unable to recover it. 
00:30:43.638 [2024-06-07 16:39:10.214682] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.638 [2024-06-07 16:39:10.214695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.638 qpair failed and we were unable to recover it. 00:30:43.638 [2024-06-07 16:39:10.215097] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.638 [2024-06-07 16:39:10.215105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.638 qpair failed and we were unable to recover it. 00:30:43.638 [2024-06-07 16:39:10.215488] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.638 [2024-06-07 16:39:10.215496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.638 qpair failed and we were unable to recover it. 00:30:43.638 [2024-06-07 16:39:10.215755] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.639 [2024-06-07 16:39:10.215764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.639 qpair failed and we were unable to recover it. 00:30:43.639 [2024-06-07 16:39:10.216113] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.639 [2024-06-07 16:39:10.216122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.639 qpair failed and we were unable to recover it. 
00:30:43.639 [2024-06-07 16:39:10.216522] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.639 [2024-06-07 16:39:10.216530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.639 qpair failed and we were unable to recover it. 00:30:43.639 [2024-06-07 16:39:10.216986] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.639 [2024-06-07 16:39:10.216995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.639 qpair failed and we were unable to recover it. 00:30:43.639 [2024-06-07 16:39:10.217372] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.639 [2024-06-07 16:39:10.217379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.639 qpair failed and we were unable to recover it. 00:30:43.639 [2024-06-07 16:39:10.217666] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.639 [2024-06-07 16:39:10.217674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.639 qpair failed and we were unable to recover it. 00:30:43.639 [2024-06-07 16:39:10.218068] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.639 [2024-06-07 16:39:10.218077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.639 qpair failed and we were unable to recover it. 
00:30:43.639 [2024-06-07 16:39:10.218435] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.639 [2024-06-07 16:39:10.218443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.639 qpair failed and we were unable to recover it. 00:30:43.639 [2024-06-07 16:39:10.218811] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.639 [2024-06-07 16:39:10.218820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.639 qpair failed and we were unable to recover it. 00:30:43.639 [2024-06-07 16:39:10.219173] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.639 [2024-06-07 16:39:10.219181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.639 qpair failed and we were unable to recover it. 00:30:43.639 [2024-06-07 16:39:10.219533] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.639 [2024-06-07 16:39:10.219542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.639 qpair failed and we were unable to recover it. 00:30:43.639 [2024-06-07 16:39:10.219930] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.639 [2024-06-07 16:39:10.219938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.639 qpair failed and we were unable to recover it. 
00:30:43.639 [2024-06-07 16:39:10.220302] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.639 [2024-06-07 16:39:10.220311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.639 qpair failed and we were unable to recover it. 00:30:43.639 [2024-06-07 16:39:10.220587] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.639 [2024-06-07 16:39:10.220596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.639 qpair failed and we were unable to recover it. 00:30:43.639 [2024-06-07 16:39:10.220984] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.639 [2024-06-07 16:39:10.220992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.639 qpair failed and we were unable to recover it. 00:30:43.639 [2024-06-07 16:39:10.221363] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.639 [2024-06-07 16:39:10.221372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.639 qpair failed and we were unable to recover it. 00:30:43.639 [2024-06-07 16:39:10.221739] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.639 [2024-06-07 16:39:10.221747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.639 qpair failed and we were unable to recover it. 
00:30:43.639 [2024-06-07 16:39:10.222139] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.639 [2024-06-07 16:39:10.222151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.639 qpair failed and we were unable to recover it. 00:30:43.639 [2024-06-07 16:39:10.222510] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.639 [2024-06-07 16:39:10.222520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.639 qpair failed and we were unable to recover it. 00:30:43.639 [2024-06-07 16:39:10.222910] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.639 [2024-06-07 16:39:10.222918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.639 qpair failed and we were unable to recover it. 00:30:43.639 [2024-06-07 16:39:10.223113] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.639 [2024-06-07 16:39:10.223122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.639 qpair failed and we were unable to recover it. 00:30:43.639 [2024-06-07 16:39:10.223508] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.639 [2024-06-07 16:39:10.223517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.639 qpair failed and we were unable to recover it. 
00:30:43.639 [2024-06-07 16:39:10.223852] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.639 [2024-06-07 16:39:10.223861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.639 qpair failed and we were unable to recover it. 00:30:43.639 [2024-06-07 16:39:10.224210] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.639 [2024-06-07 16:39:10.224218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.639 qpair failed and we were unable to recover it. 00:30:43.639 [2024-06-07 16:39:10.224475] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.639 [2024-06-07 16:39:10.224483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.639 qpair failed and we were unable to recover it. 00:30:43.639 [2024-06-07 16:39:10.224779] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.639 [2024-06-07 16:39:10.224787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.639 qpair failed and we were unable to recover it. 00:30:43.639 [2024-06-07 16:39:10.225113] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.639 [2024-06-07 16:39:10.225121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.639 qpair failed and we were unable to recover it. 
00:30:43.639 [2024-06-07 16:39:10.225486] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.639 [2024-06-07 16:39:10.225495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.639 qpair failed and we were unable to recover it. 00:30:43.639 [2024-06-07 16:39:10.225883] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.639 [2024-06-07 16:39:10.225891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.639 qpair failed and we were unable to recover it. 00:30:43.639 [2024-06-07 16:39:10.226277] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.639 [2024-06-07 16:39:10.226284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.639 qpair failed and we were unable to recover it. 00:30:43.639 [2024-06-07 16:39:10.226747] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.639 [2024-06-07 16:39:10.226756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.639 qpair failed and we were unable to recover it. 00:30:43.639 [2024-06-07 16:39:10.227036] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.639 [2024-06-07 16:39:10.227045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.639 qpair failed and we were unable to recover it. 
00:30:43.639 [2024-06-07 16:39:10.227430] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.639 [2024-06-07 16:39:10.227438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.639 qpair failed and we were unable to recover it. 00:30:43.639 [2024-06-07 16:39:10.227684] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.639 [2024-06-07 16:39:10.227694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.639 qpair failed and we were unable to recover it. 00:30:43.639 [2024-06-07 16:39:10.228034] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.639 [2024-06-07 16:39:10.228042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.639 qpair failed and we were unable to recover it. 00:30:43.639 [2024-06-07 16:39:10.228404] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.639 [2024-06-07 16:39:10.228412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.639 qpair failed and we were unable to recover it. 00:30:43.639 [2024-06-07 16:39:10.228770] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.639 [2024-06-07 16:39:10.228778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.639 qpair failed and we were unable to recover it. 
00:30:43.639 [2024-06-07 16:39:10.229196] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.639 [2024-06-07 16:39:10.229205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.639 qpair failed and we were unable to recover it. 00:30:43.639 [2024-06-07 16:39:10.229479] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.639 [2024-06-07 16:39:10.229487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.640 qpair failed and we were unable to recover it. 00:30:43.640 [2024-06-07 16:39:10.229802] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.640 [2024-06-07 16:39:10.229810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.640 qpair failed and we were unable to recover it. 00:30:43.640 [2024-06-07 16:39:10.230160] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.640 [2024-06-07 16:39:10.230168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.640 qpair failed and we were unable to recover it. 00:30:43.640 [2024-06-07 16:39:10.230455] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.640 [2024-06-07 16:39:10.230462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.640 qpair failed and we were unable to recover it. 
00:30:43.640 [2024-06-07 16:39:10.230914] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.640 [2024-06-07 16:39:10.230922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.640 qpair failed and we were unable to recover it. 00:30:43.640 [2024-06-07 16:39:10.231289] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.640 [2024-06-07 16:39:10.231297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.640 qpair failed and we were unable to recover it. 00:30:43.640 [2024-06-07 16:39:10.231687] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.640 [2024-06-07 16:39:10.231695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.640 qpair failed and we were unable to recover it. 00:30:43.640 [2024-06-07 16:39:10.232052] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.640 [2024-06-07 16:39:10.232060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.640 qpair failed and we were unable to recover it. 00:30:43.640 [2024-06-07 16:39:10.232360] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.640 [2024-06-07 16:39:10.232368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.640 qpair failed and we were unable to recover it. 
00:30:43.640 [2024-06-07 16:39:10.232751] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.640 [2024-06-07 16:39:10.232759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.640 qpair failed and we were unable to recover it. 00:30:43.640 [2024-06-07 16:39:10.233008] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.640 [2024-06-07 16:39:10.233018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.640 qpair failed and we were unable to recover it. 00:30:43.640 [2024-06-07 16:39:10.233389] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.640 [2024-06-07 16:39:10.233397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.640 qpair failed and we were unable to recover it. 00:30:43.640 [2024-06-07 16:39:10.233854] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.640 [2024-06-07 16:39:10.233866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.640 qpair failed and we were unable to recover it. 00:30:43.640 [2024-06-07 16:39:10.234226] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.640 [2024-06-07 16:39:10.234234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.640 qpair failed and we were unable to recover it. 
00:30:43.640 [2024-06-07 16:39:10.234520] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.640 [2024-06-07 16:39:10.234530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.640 qpair failed and we were unable to recover it. 00:30:43.640 [2024-06-07 16:39:10.234916] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.640 [2024-06-07 16:39:10.234925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.640 qpair failed and we were unable to recover it. 00:30:43.640 [2024-06-07 16:39:10.235299] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.640 [2024-06-07 16:39:10.235308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.640 qpair failed and we were unable to recover it. 00:30:43.640 [2024-06-07 16:39:10.235684] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.640 [2024-06-07 16:39:10.235692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.640 qpair failed and we were unable to recover it. 00:30:43.640 [2024-06-07 16:39:10.236083] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.640 [2024-06-07 16:39:10.236092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.640 qpair failed and we were unable to recover it. 
00:30:43.640 [2024-06-07 16:39:10.236463] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.640 [2024-06-07 16:39:10.236472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.640 qpair failed and we were unable to recover it. 00:30:43.640 [2024-06-07 16:39:10.236877] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.640 [2024-06-07 16:39:10.236885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.640 qpair failed and we were unable to recover it. 00:30:43.640 [2024-06-07 16:39:10.237277] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.640 [2024-06-07 16:39:10.237287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.640 qpair failed and we were unable to recover it. 00:30:43.640 [2024-06-07 16:39:10.237563] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.640 [2024-06-07 16:39:10.237570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.640 qpair failed and we were unable to recover it. 00:30:43.640 [2024-06-07 16:39:10.237943] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.640 [2024-06-07 16:39:10.237951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.640 qpair failed and we were unable to recover it. 
00:30:43.640 [2024-06-07 16:39:10.238341] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.640 [2024-06-07 16:39:10.238350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.640 qpair failed and we were unable to recover it. 00:30:43.640 [2024-06-07 16:39:10.238720] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.640 [2024-06-07 16:39:10.238728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.640 qpair failed and we were unable to recover it. 00:30:43.640 [2024-06-07 16:39:10.239095] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.640 [2024-06-07 16:39:10.239104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.640 qpair failed and we were unable to recover it. 00:30:43.640 [2024-06-07 16:39:10.239469] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.640 [2024-06-07 16:39:10.239478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.640 qpair failed and we were unable to recover it. 00:30:43.640 [2024-06-07 16:39:10.239747] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.640 [2024-06-07 16:39:10.239755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.640 qpair failed and we were unable to recover it. 
00:30:43.640 [2024-06-07 16:39:10.240130] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.640 [2024-06-07 16:39:10.240137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.640 qpair failed and we were unable to recover it. 00:30:43.640 [2024-06-07 16:39:10.240373] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.640 [2024-06-07 16:39:10.240383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.640 qpair failed and we were unable to recover it. 00:30:43.640 [2024-06-07 16:39:10.240672] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.640 [2024-06-07 16:39:10.240680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.640 qpair failed and we were unable to recover it. 00:30:43.640 [2024-06-07 16:39:10.241051] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.640 [2024-06-07 16:39:10.241059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.640 qpair failed and we were unable to recover it. 00:30:43.640 [2024-06-07 16:39:10.241461] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.640 [2024-06-07 16:39:10.241469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.640 qpair failed and we were unable to recover it. 
00:30:43.640 [2024-06-07 16:39:10.241728] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.640 [2024-06-07 16:39:10.241736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.640 qpair failed and we were unable to recover it. 00:30:43.640 [2024-06-07 16:39:10.241966] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.640 [2024-06-07 16:39:10.241974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.640 qpair failed and we were unable to recover it. 00:30:43.640 [2024-06-07 16:39:10.242324] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.640 [2024-06-07 16:39:10.242332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.640 qpair failed and we were unable to recover it. 00:30:43.640 [2024-06-07 16:39:10.242710] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.640 [2024-06-07 16:39:10.242718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.640 qpair failed and we were unable to recover it. 00:30:43.640 [2024-06-07 16:39:10.243088] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.640 [2024-06-07 16:39:10.243097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.640 qpair failed and we were unable to recover it. 
00:30:43.641 [2024-06-07 16:39:10.243485] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.641 [2024-06-07 16:39:10.243496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.641 qpair failed and we were unable to recover it. 00:30:43.641 [2024-06-07 16:39:10.243891] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.641 [2024-06-07 16:39:10.243900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.641 qpair failed and we were unable to recover it. 00:30:43.641 [2024-06-07 16:39:10.244268] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.641 [2024-06-07 16:39:10.244277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.641 qpair failed and we were unable to recover it. 00:30:43.641 [2024-06-07 16:39:10.244556] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.641 [2024-06-07 16:39:10.244565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.641 qpair failed and we were unable to recover it. 00:30:43.641 [2024-06-07 16:39:10.244995] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.641 [2024-06-07 16:39:10.245003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.641 qpair failed and we were unable to recover it. 
00:30:43.641 [2024-06-07 16:39:10.245349] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.641 [2024-06-07 16:39:10.245356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.641 qpair failed and we were unable to recover it. 00:30:43.641 [2024-06-07 16:39:10.245758] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.641 [2024-06-07 16:39:10.245767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.641 qpair failed and we were unable to recover it. 00:30:43.641 [2024-06-07 16:39:10.245958] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.641 [2024-06-07 16:39:10.245966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.641 qpair failed and we were unable to recover it. 00:30:43.641 [2024-06-07 16:39:10.246180] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.641 [2024-06-07 16:39:10.246189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.641 qpair failed and we were unable to recover it. 00:30:43.641 [2024-06-07 16:39:10.246541] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.641 [2024-06-07 16:39:10.246549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.641 qpair failed and we were unable to recover it. 
00:30:43.641 [2024-06-07 16:39:10.246917] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.641 [2024-06-07 16:39:10.246925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.641 qpair failed and we were unable to recover it. 00:30:43.641 [2024-06-07 16:39:10.247304] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.641 [2024-06-07 16:39:10.247311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.641 qpair failed and we were unable to recover it. 00:30:43.641 [2024-06-07 16:39:10.247741] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.641 [2024-06-07 16:39:10.247750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.641 qpair failed and we were unable to recover it. 00:30:43.641 [2024-06-07 16:39:10.248163] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.641 [2024-06-07 16:39:10.248171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.641 qpair failed and we were unable to recover it. 00:30:43.641 [2024-06-07 16:39:10.248442] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.641 [2024-06-07 16:39:10.248450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.641 qpair failed and we were unable to recover it. 
00:30:43.641 [2024-06-07 16:39:10.248734] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.641 [2024-06-07 16:39:10.248745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.641 qpair failed and we were unable to recover it. 00:30:43.641 [2024-06-07 16:39:10.249133] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.641 [2024-06-07 16:39:10.249141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.641 qpair failed and we were unable to recover it. 00:30:43.641 [2024-06-07 16:39:10.249545] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.641 [2024-06-07 16:39:10.249553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.641 qpair failed and we were unable to recover it. 00:30:43.641 [2024-06-07 16:39:10.249920] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.641 [2024-06-07 16:39:10.249928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.641 qpair failed and we were unable to recover it. 00:30:43.641 [2024-06-07 16:39:10.250280] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.641 [2024-06-07 16:39:10.250288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.641 qpair failed and we were unable to recover it. 
00:30:43.641 [2024-06-07 16:39:10.250662] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.641 [2024-06-07 16:39:10.250671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.641 qpair failed and we were unable to recover it. 00:30:43.641 [2024-06-07 16:39:10.251050] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.641 [2024-06-07 16:39:10.251058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.641 qpair failed and we were unable to recover it. 00:30:43.641 [2024-06-07 16:39:10.251423] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.641 [2024-06-07 16:39:10.251431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.641 qpair failed and we were unable to recover it. 00:30:43.641 [2024-06-07 16:39:10.251777] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.641 [2024-06-07 16:39:10.251785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.641 qpair failed and we were unable to recover it. 00:30:43.641 [2024-06-07 16:39:10.252140] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.641 [2024-06-07 16:39:10.252148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.641 qpair failed and we were unable to recover it. 
00:30:43.641 [2024-06-07 16:39:10.252425] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.641 [2024-06-07 16:39:10.252432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.641 qpair failed and we were unable to recover it. 00:30:43.641 [2024-06-07 16:39:10.252930] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.641 [2024-06-07 16:39:10.252939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.641 qpair failed and we were unable to recover it. 00:30:43.641 [2024-06-07 16:39:10.253152] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.641 [2024-06-07 16:39:10.253161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.641 qpair failed and we were unable to recover it. 00:30:43.641 [2024-06-07 16:39:10.253595] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.641 [2024-06-07 16:39:10.253603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.641 qpair failed and we were unable to recover it. 00:30:43.641 [2024-06-07 16:39:10.253979] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.641 [2024-06-07 16:39:10.253986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.641 qpair failed and we were unable to recover it. 
00:30:43.641 [2024-06-07 16:39:10.254353] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.641 [2024-06-07 16:39:10.254361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.641 qpair failed and we were unable to recover it. 00:30:43.641 [2024-06-07 16:39:10.254750] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.641 [2024-06-07 16:39:10.254759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.641 qpair failed and we were unable to recover it. 00:30:43.641 [2024-06-07 16:39:10.255075] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.641 [2024-06-07 16:39:10.255087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.641 qpair failed and we were unable to recover it. 00:30:43.641 [2024-06-07 16:39:10.255467] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.641 [2024-06-07 16:39:10.255475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.641 qpair failed and we were unable to recover it. 00:30:43.641 [2024-06-07 16:39:10.255860] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.641 [2024-06-07 16:39:10.255869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.641 qpair failed and we were unable to recover it. 
00:30:43.641 [2024-06-07 16:39:10.256237] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.641 [2024-06-07 16:39:10.256246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.641 qpair failed and we were unable to recover it. 00:30:43.641 [2024-06-07 16:39:10.256618] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.641 [2024-06-07 16:39:10.256627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.641 qpair failed and we were unable to recover it. 00:30:43.641 [2024-06-07 16:39:10.256992] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.641 [2024-06-07 16:39:10.257000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.641 qpair failed and we were unable to recover it. 00:30:43.641 [2024-06-07 16:39:10.257230] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.642 [2024-06-07 16:39:10.257238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.642 qpair failed and we were unable to recover it. 00:30:43.642 [2024-06-07 16:39:10.257608] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.642 [2024-06-07 16:39:10.257615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.642 qpair failed and we were unable to recover it. 
00:30:43.642 [2024-06-07 16:39:10.258004] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.642 [2024-06-07 16:39:10.258013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.642 qpair failed and we were unable to recover it. 00:30:43.642 [2024-06-07 16:39:10.258278] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.642 [2024-06-07 16:39:10.258289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.642 qpair failed and we were unable to recover it. 00:30:43.642 [2024-06-07 16:39:10.258665] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.642 [2024-06-07 16:39:10.258673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.642 qpair failed and we were unable to recover it. 00:30:43.642 [2024-06-07 16:39:10.259038] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.642 [2024-06-07 16:39:10.259046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.642 qpair failed and we were unable to recover it. 00:30:43.642 [2024-06-07 16:39:10.259421] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.642 [2024-06-07 16:39:10.259430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.642 qpair failed and we were unable to recover it. 
00:30:43.642 [2024-06-07 16:39:10.259823] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.642 [2024-06-07 16:39:10.259831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.642 qpair failed and we were unable to recover it. 00:30:43.642 [2024-06-07 16:39:10.260209] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.642 [2024-06-07 16:39:10.260216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.642 qpair failed and we were unable to recover it. 00:30:43.642 [2024-06-07 16:39:10.260504] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.642 [2024-06-07 16:39:10.260513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.642 qpair failed and we were unable to recover it. 00:30:43.642 [2024-06-07 16:39:10.260897] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.642 [2024-06-07 16:39:10.260905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.642 qpair failed and we were unable to recover it. 00:30:43.642 [2024-06-07 16:39:10.261288] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.642 [2024-06-07 16:39:10.261297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.642 qpair failed and we were unable to recover it. 
00:30:43.642 [2024-06-07 16:39:10.261578] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.642 [2024-06-07 16:39:10.261586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.642 qpair failed and we were unable to recover it. 00:30:43.642 [2024-06-07 16:39:10.261931] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.642 [2024-06-07 16:39:10.261939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.642 qpair failed and we were unable to recover it. 00:30:43.642 [2024-06-07 16:39:10.262298] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.642 [2024-06-07 16:39:10.262307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.642 qpair failed and we were unable to recover it. 00:30:43.642 [2024-06-07 16:39:10.262739] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.642 [2024-06-07 16:39:10.262747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.642 qpair failed and we were unable to recover it. 00:30:43.642 [2024-06-07 16:39:10.263140] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.642 [2024-06-07 16:39:10.263147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.642 qpair failed and we were unable to recover it. 
00:30:43.642 [2024-06-07 16:39:10.263516] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.642 [2024-06-07 16:39:10.263525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.642 qpair failed and we were unable to recover it. 00:30:43.642 [2024-06-07 16:39:10.263793] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.642 [2024-06-07 16:39:10.263801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.642 qpair failed and we were unable to recover it. 00:30:43.642 [2024-06-07 16:39:10.264186] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.642 [2024-06-07 16:39:10.264194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.642 qpair failed and we were unable to recover it. 00:30:43.642 [2024-06-07 16:39:10.264438] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.642 [2024-06-07 16:39:10.264445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.642 qpair failed and we were unable to recover it. 00:30:43.642 [2024-06-07 16:39:10.264658] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.642 [2024-06-07 16:39:10.264667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.642 qpair failed and we were unable to recover it. 
00:30:43.642 [2024-06-07 16:39:10.264935] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.642 [2024-06-07 16:39:10.264944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.642 qpair failed and we were unable to recover it.
00:30:43.642 [2024-06-07 16:39:10.265180] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.642 [2024-06-07 16:39:10.265188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.642 qpair failed and we were unable to recover it.
00:30:43.642 [2024-06-07 16:39:10.265536] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.642 [2024-06-07 16:39:10.265544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.642 qpair failed and we were unable to recover it.
00:30:43.642 [2024-06-07 16:39:10.265900] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.642 [2024-06-07 16:39:10.265909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.642 qpair failed and we were unable to recover it.
00:30:43.642 [2024-06-07 16:39:10.266285] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.642 [2024-06-07 16:39:10.266294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.642 qpair failed and we were unable to recover it.
00:30:43.642 [2024-06-07 16:39:10.266692] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.642 [2024-06-07 16:39:10.266700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.642 qpair failed and we were unable to recover it.
00:30:43.642 [2024-06-07 16:39:10.267086] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.642 [2024-06-07 16:39:10.267094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.642 qpair failed and we were unable to recover it.
00:30:43.642 [2024-06-07 16:39:10.267488] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.642 [2024-06-07 16:39:10.267496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.642 qpair failed and we were unable to recover it.
00:30:43.643 [2024-06-07 16:39:10.267912] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.643 [2024-06-07 16:39:10.267921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.643 qpair failed and we were unable to recover it.
00:30:43.643 [2024-06-07 16:39:10.268307] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.643 [2024-06-07 16:39:10.268314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.643 qpair failed and we were unable to recover it.
00:30:43.643 [2024-06-07 16:39:10.268594] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.643 [2024-06-07 16:39:10.268602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.643 qpair failed and we were unable to recover it.
00:30:43.643 [2024-06-07 16:39:10.268972] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.643 [2024-06-07 16:39:10.268981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.643 qpair failed and we were unable to recover it.
00:30:43.643 [2024-06-07 16:39:10.269368] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.643 [2024-06-07 16:39:10.269377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.643 qpair failed and we were unable to recover it.
00:30:43.643 [2024-06-07 16:39:10.269702] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.643 [2024-06-07 16:39:10.269710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.643 qpair failed and we were unable to recover it.
00:30:43.643 [2024-06-07 16:39:10.270072] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.643 [2024-06-07 16:39:10.270080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.643 qpair failed and we were unable to recover it.
00:30:43.643 [2024-06-07 16:39:10.270476] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.643 [2024-06-07 16:39:10.270484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.643 qpair failed and we were unable to recover it.
00:30:43.643 [2024-06-07 16:39:10.270790] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.643 [2024-06-07 16:39:10.270798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.643 qpair failed and we were unable to recover it.
00:30:43.643 [2024-06-07 16:39:10.271177] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.643 [2024-06-07 16:39:10.271187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.643 qpair failed and we were unable to recover it.
00:30:43.643 [2024-06-07 16:39:10.271474] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.643 [2024-06-07 16:39:10.271482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.643 qpair failed and we were unable to recover it.
00:30:43.643 [2024-06-07 16:39:10.271845] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.643 [2024-06-07 16:39:10.271853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.643 qpair failed and we were unable to recover it.
00:30:43.643 [2024-06-07 16:39:10.272241] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.643 [2024-06-07 16:39:10.272252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.643 qpair failed and we were unable to recover it.
00:30:43.643 [2024-06-07 16:39:10.272627] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.643 [2024-06-07 16:39:10.272635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.643 qpair failed and we were unable to recover it.
00:30:43.643 [2024-06-07 16:39:10.273013] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.643 [2024-06-07 16:39:10.273020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.643 qpair failed and we were unable to recover it.
00:30:43.643 [2024-06-07 16:39:10.273418] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.643 [2024-06-07 16:39:10.273427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.643 qpair failed and we were unable to recover it.
00:30:43.643 [2024-06-07 16:39:10.273811] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.643 [2024-06-07 16:39:10.273820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.643 qpair failed and we were unable to recover it.
00:30:43.643 [2024-06-07 16:39:10.274227] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.643 [2024-06-07 16:39:10.274237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.643 qpair failed and we were unable to recover it.
00:30:43.643 [2024-06-07 16:39:10.274413] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.643 [2024-06-07 16:39:10.274422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.643 qpair failed and we were unable to recover it.
00:30:43.643 [2024-06-07 16:39:10.274883] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.643 [2024-06-07 16:39:10.274891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.643 qpair failed and we were unable to recover it.
00:30:43.643 [2024-06-07 16:39:10.275247] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.643 [2024-06-07 16:39:10.275254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.643 qpair failed and we were unable to recover it.
00:30:43.643 [2024-06-07 16:39:10.275644] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.643 [2024-06-07 16:39:10.275653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.643 qpair failed and we were unable to recover it.
00:30:43.643 [2024-06-07 16:39:10.276063] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.643 [2024-06-07 16:39:10.276071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.643 qpair failed and we were unable to recover it.
00:30:43.643 [2024-06-07 16:39:10.276440] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.643 [2024-06-07 16:39:10.276448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.643 qpair failed and we were unable to recover it.
00:30:43.643 [2024-06-07 16:39:10.276929] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.643 [2024-06-07 16:39:10.276936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.643 qpair failed and we were unable to recover it.
00:30:43.643 [2024-06-07 16:39:10.277322] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.643 [2024-06-07 16:39:10.277329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.643 qpair failed and we were unable to recover it.
00:30:43.643 [2024-06-07 16:39:10.277720] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.643 [2024-06-07 16:39:10.277728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.643 qpair failed and we were unable to recover it.
00:30:43.643 [2024-06-07 16:39:10.278084] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.643 [2024-06-07 16:39:10.278092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.643 qpair failed and we were unable to recover it.
00:30:43.643 [2024-06-07 16:39:10.278482] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.643 [2024-06-07 16:39:10.278494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.643 qpair failed and we were unable to recover it.
00:30:43.643 [2024-06-07 16:39:10.278871] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.643 [2024-06-07 16:39:10.278879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.643 qpair failed and we were unable to recover it.
00:30:43.643 [2024-06-07 16:39:10.279244] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.643 [2024-06-07 16:39:10.279252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.643 qpair failed and we were unable to recover it.
00:30:43.643 [2024-06-07 16:39:10.279681] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.643 [2024-06-07 16:39:10.279689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.643 qpair failed and we were unable to recover it.
00:30:43.643 [2024-06-07 16:39:10.280114] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.643 [2024-06-07 16:39:10.280122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.643 qpair failed and we were unable to recover it.
00:30:43.643 [2024-06-07 16:39:10.280514] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.643 [2024-06-07 16:39:10.280522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.643 qpair failed and we were unable to recover it.
00:30:43.643 [2024-06-07 16:39:10.280883] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.643 [2024-06-07 16:39:10.280893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.643 qpair failed and we were unable to recover it.
00:30:43.643 [2024-06-07 16:39:10.281265] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.643 [2024-06-07 16:39:10.281274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.643 qpair failed and we were unable to recover it.
00:30:43.643 [2024-06-07 16:39:10.281652] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.643 [2024-06-07 16:39:10.281660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.643 qpair failed and we were unable to recover it.
00:30:43.643 [2024-06-07 16:39:10.282015] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.643 [2024-06-07 16:39:10.282025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.643 qpair failed and we were unable to recover it.
00:30:43.644 [2024-06-07 16:39:10.282400] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.644 [2024-06-07 16:39:10.282420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.644 qpair failed and we were unable to recover it.
00:30:43.644 [2024-06-07 16:39:10.282767] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.644 [2024-06-07 16:39:10.282775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.644 qpair failed and we were unable to recover it.
00:30:43.644 [2024-06-07 16:39:10.283145] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.644 [2024-06-07 16:39:10.283153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.644 qpair failed and we were unable to recover it.
00:30:43.644 [2024-06-07 16:39:10.283342] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.644 [2024-06-07 16:39:10.283349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.644 qpair failed and we were unable to recover it.
00:30:43.644 [2024-06-07 16:39:10.283787] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.644 [2024-06-07 16:39:10.283796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.644 qpair failed and we were unable to recover it.
00:30:43.644 [2024-06-07 16:39:10.284064] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.644 [2024-06-07 16:39:10.284072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.644 qpair failed and we were unable to recover it.
00:30:43.644 [2024-06-07 16:39:10.284470] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.644 [2024-06-07 16:39:10.284478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.644 qpair failed and we were unable to recover it.
00:30:43.644 [2024-06-07 16:39:10.284882] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.644 [2024-06-07 16:39:10.284894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.644 qpair failed and we were unable to recover it.
00:30:43.644 [2024-06-07 16:39:10.285266] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.644 [2024-06-07 16:39:10.285274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.644 qpair failed and we were unable to recover it.
00:30:43.644 [2024-06-07 16:39:10.285576] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.644 [2024-06-07 16:39:10.285585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.644 qpair failed and we were unable to recover it.
00:30:43.644 [2024-06-07 16:39:10.285958] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.644 [2024-06-07 16:39:10.285968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.644 qpair failed and we were unable to recover it.
00:30:43.644 [2024-06-07 16:39:10.286350] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.644 [2024-06-07 16:39:10.286359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.644 qpair failed and we were unable to recover it.
00:30:43.644 [2024-06-07 16:39:10.286744] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.644 [2024-06-07 16:39:10.286752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.644 qpair failed and we were unable to recover it.
00:30:43.644 [2024-06-07 16:39:10.287118] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.644 [2024-06-07 16:39:10.287126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.644 qpair failed and we were unable to recover it.
00:30:43.644 [2024-06-07 16:39:10.287488] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.644 [2024-06-07 16:39:10.287498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.644 qpair failed and we were unable to recover it.
00:30:43.644 [2024-06-07 16:39:10.287926] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.644 [2024-06-07 16:39:10.287934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.644 qpair failed and we were unable to recover it.
00:30:43.644 [2024-06-07 16:39:10.288302] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.644 [2024-06-07 16:39:10.288311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.644 qpair failed and we were unable to recover it.
00:30:43.644 [2024-06-07 16:39:10.288526] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.644 [2024-06-07 16:39:10.288534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.644 qpair failed and we were unable to recover it.
00:30:43.644 [2024-06-07 16:39:10.288768] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.644 [2024-06-07 16:39:10.288777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.644 qpair failed and we were unable to recover it.
00:30:43.644 [2024-06-07 16:39:10.289138] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.644 [2024-06-07 16:39:10.289150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.644 qpair failed and we were unable to recover it.
00:30:43.644 [2024-06-07 16:39:10.289486] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.644 [2024-06-07 16:39:10.289494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.644 qpair failed and we were unable to recover it.
00:30:43.644 [2024-06-07 16:39:10.289922] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.644 [2024-06-07 16:39:10.289930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.644 qpair failed and we were unable to recover it.
00:30:43.644 [2024-06-07 16:39:10.290300] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.644 [2024-06-07 16:39:10.290308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.644 qpair failed and we were unable to recover it.
00:30:43.644 [2024-06-07 16:39:10.290537] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.644 [2024-06-07 16:39:10.290545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.644 qpair failed and we were unable to recover it.
00:30:43.644 [2024-06-07 16:39:10.290951] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.644 [2024-06-07 16:39:10.290959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.644 qpair failed and we were unable to recover it.
00:30:43.644 [2024-06-07 16:39:10.291355] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.644 [2024-06-07 16:39:10.291363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.644 qpair failed and we were unable to recover it.
00:30:43.644 [2024-06-07 16:39:10.291687] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.644 [2024-06-07 16:39:10.291695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.644 qpair failed and we were unable to recover it.
00:30:43.644 [2024-06-07 16:39:10.292064] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.644 [2024-06-07 16:39:10.292071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.644 qpair failed and we were unable to recover it.
00:30:43.644 [2024-06-07 16:39:10.292458] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.644 [2024-06-07 16:39:10.292467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.644 qpair failed and we were unable to recover it.
00:30:43.644 [2024-06-07 16:39:10.292812] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.644 [2024-06-07 16:39:10.292820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.644 qpair failed and we were unable to recover it.
00:30:43.644 [2024-06-07 16:39:10.293060] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.644 [2024-06-07 16:39:10.293067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.644 qpair failed and we were unable to recover it.
00:30:43.644 [2024-06-07 16:39:10.293317] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.644 [2024-06-07 16:39:10.293324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.644 qpair failed and we were unable to recover it.
00:30:43.644 [2024-06-07 16:39:10.293645] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.644 [2024-06-07 16:39:10.293655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.644 qpair failed and we were unable to recover it.
00:30:43.644 [2024-06-07 16:39:10.294025] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.644 [2024-06-07 16:39:10.294032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.644 qpair failed and we were unable to recover it.
00:30:43.644 [2024-06-07 16:39:10.294295] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.644 [2024-06-07 16:39:10.294303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.644 qpair failed and we were unable to recover it.
00:30:43.644 [2024-06-07 16:39:10.294701] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.644 [2024-06-07 16:39:10.294708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.644 qpair failed and we were unable to recover it.
00:30:43.644 [2024-06-07 16:39:10.295068] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.644 [2024-06-07 16:39:10.295076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.644 qpair failed and we were unable to recover it.
00:30:43.644 [2024-06-07 16:39:10.295429] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.645 [2024-06-07 16:39:10.295437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.645 qpair failed and we were unable to recover it.
00:30:43.645 [2024-06-07 16:39:10.295715] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.645 [2024-06-07 16:39:10.295724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.645 qpair failed and we were unable to recover it.
00:30:43.645 [2024-06-07 16:39:10.296098] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.645 [2024-06-07 16:39:10.296105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.645 qpair failed and we were unable to recover it.
00:30:43.645 [2024-06-07 16:39:10.296336] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.645 [2024-06-07 16:39:10.296344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.645 qpair failed and we were unable to recover it.
00:30:43.645 [2024-06-07 16:39:10.296485] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.645 [2024-06-07 16:39:10.296493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.645 qpair failed and we were unable to recover it.
00:30:43.645 [2024-06-07 16:39:10.296684] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.645 [2024-06-07 16:39:10.296693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.645 qpair failed and we were unable to recover it.
00:30:43.645 [2024-06-07 16:39:10.296903] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.645 [2024-06-07 16:39:10.296911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.645 qpair failed and we were unable to recover it.
00:30:43.645 [2024-06-07 16:39:10.297256] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.645 [2024-06-07 16:39:10.297264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.645 qpair failed and we were unable to recover it.
00:30:43.645 [2024-06-07 16:39:10.297639] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.645 [2024-06-07 16:39:10.297647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.645 qpair failed and we were unable to recover it.
00:30:43.645 [2024-06-07 16:39:10.298022] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.645 [2024-06-07 16:39:10.298031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.645 qpair failed and we were unable to recover it.
00:30:43.645 [2024-06-07 16:39:10.298267] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.645 [2024-06-07 16:39:10.298274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.645 qpair failed and we were unable to recover it.
00:30:43.645 [2024-06-07 16:39:10.298657] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.645 [2024-06-07 16:39:10.298665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.645 qpair failed and we were unable to recover it.
00:30:43.645 [2024-06-07 16:39:10.299028] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.645 [2024-06-07 16:39:10.299036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.645 qpair failed and we were unable to recover it.
00:30:43.645 [2024-06-07 16:39:10.299467] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.645 [2024-06-07 16:39:10.299477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.645 qpair failed and we were unable to recover it.
00:30:43.645 [2024-06-07 16:39:10.299730] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.645 [2024-06-07 16:39:10.299738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.645 qpair failed and we were unable to recover it.
00:30:43.645 [2024-06-07 16:39:10.300106] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.645 [2024-06-07 16:39:10.300115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.645 qpair failed and we were unable to recover it.
00:30:43.645 [2024-06-07 16:39:10.300551] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.645 [2024-06-07 16:39:10.300560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.645 qpair failed and we were unable to recover it.
00:30:43.645 [2024-06-07 16:39:10.300838] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.645 [2024-06-07 16:39:10.300849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.645 qpair failed and we were unable to recover it.
00:30:43.645 [2024-06-07 16:39:10.301080] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.645 [2024-06-07 16:39:10.301089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.645 qpair failed and we were unable to recover it.
00:30:43.645 [2024-06-07 16:39:10.301452] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.645 [2024-06-07 16:39:10.301460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.645 qpair failed and we were unable to recover it.
00:30:43.645 [2024-06-07 16:39:10.301713] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.645 [2024-06-07 16:39:10.301720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.645 qpair failed and we were unable to recover it.
00:30:43.645 [2024-06-07 16:39:10.301974] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.645 [2024-06-07 16:39:10.301983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.645 qpair failed and we were unable to recover it.
00:30:43.645 [2024-06-07 16:39:10.302338] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.645 [2024-06-07 16:39:10.302345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.645 qpair failed and we were unable to recover it.
00:30:43.645 [2024-06-07 16:39:10.302768] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.645 [2024-06-07 16:39:10.302777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.645 qpair failed and we were unable to recover it.
00:30:43.645 [2024-06-07 16:39:10.303238] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.645 [2024-06-07 16:39:10.303247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.645 qpair failed and we were unable to recover it.
00:30:43.645 [2024-06-07 16:39:10.303579] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.645 [2024-06-07 16:39:10.303587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.645 qpair failed and we were unable to recover it.
00:30:43.645 [2024-06-07 16:39:10.303940] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.645 [2024-06-07 16:39:10.303947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.645 qpair failed and we were unable to recover it.
00:30:43.645 [2024-06-07 16:39:10.304311] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.645 [2024-06-07 16:39:10.304319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.645 qpair failed and we were unable to recover it.
00:30:43.645 [2024-06-07 16:39:10.304723] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.645 [2024-06-07 16:39:10.304732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.645 qpair failed and we were unable to recover it.
00:30:43.645 [2024-06-07 16:39:10.305125] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.645 [2024-06-07 16:39:10.305133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.645 qpair failed and we were unable to recover it.
00:30:43.645 [2024-06-07 16:39:10.305399] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.645 [2024-06-07 16:39:10.305409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.645 qpair failed and we were unable to recover it.
00:30:43.645 [2024-06-07 16:39:10.305712] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.645 [2024-06-07 16:39:10.305720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.645 qpair failed and we were unable to recover it.
00:30:43.645 [2024-06-07 16:39:10.306073] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.645 [2024-06-07 16:39:10.306082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.645 qpair failed and we were unable to recover it.
00:30:43.645 [2024-06-07 16:39:10.306313] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.645 [2024-06-07 16:39:10.306321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.645 qpair failed and we were unable to recover it.
00:30:43.645 [2024-06-07 16:39:10.306721] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.645 [2024-06-07 16:39:10.306729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.645 qpair failed and we were unable to recover it.
00:30:43.645 [2024-06-07 16:39:10.307000] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.645 [2024-06-07 16:39:10.307008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.645 qpair failed and we were unable to recover it.
00:30:43.645 [2024-06-07 16:39:10.307199] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.645 [2024-06-07 16:39:10.307207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.645 qpair failed and we were unable to recover it.
00:30:43.645 [2024-06-07 16:39:10.307593] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.645 [2024-06-07 16:39:10.307602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.645 qpair failed and we were unable to recover it.
00:30:43.645 [2024-06-07 16:39:10.307861] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.646 [2024-06-07 16:39:10.307869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.646 qpair failed and we were unable to recover it.
00:30:43.646 [2024-06-07 16:39:10.308250] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.646 [2024-06-07 16:39:10.308259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.646 qpair failed and we were unable to recover it.
00:30:43.646 [2024-06-07 16:39:10.308640] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.646 [2024-06-07 16:39:10.308649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.646 qpair failed and we were unable to recover it.
00:30:43.646 [2024-06-07 16:39:10.308987] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.646 [2024-06-07 16:39:10.308995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.646 qpair failed and we were unable to recover it.
00:30:43.646 [2024-06-07 16:39:10.309361] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.646 [2024-06-07 16:39:10.309370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.646 qpair failed and we were unable to recover it.
00:30:43.646 [2024-06-07 16:39:10.309775] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.646 [2024-06-07 16:39:10.309784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.646 qpair failed and we were unable to recover it.
00:30:43.646 [2024-06-07 16:39:10.310046] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.646 [2024-06-07 16:39:10.310054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.646 qpair failed and we were unable to recover it.
00:30:43.646 [2024-06-07 16:39:10.310357] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.646 [2024-06-07 16:39:10.310365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.646 qpair failed and we were unable to recover it.
00:30:43.646 [2024-06-07 16:39:10.310753] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.646 [2024-06-07 16:39:10.310762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.646 qpair failed and we were unable to recover it.
00:30:43.646 [2024-06-07 16:39:10.311185] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.646 [2024-06-07 16:39:10.311194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.646 qpair failed and we were unable to recover it.
00:30:43.646 [2024-06-07 16:39:10.311481] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.646 [2024-06-07 16:39:10.311495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.646 qpair failed and we were unable to recover it.
00:30:43.646 [2024-06-07 16:39:10.311761] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.646 [2024-06-07 16:39:10.311768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.646 qpair failed and we were unable to recover it.
00:30:43.646 [2024-06-07 16:39:10.312131] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.646 [2024-06-07 16:39:10.312139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.646 qpair failed and we were unable to recover it.
00:30:43.646 [2024-06-07 16:39:10.312524] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.646 [2024-06-07 16:39:10.312532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.646 qpair failed and we were unable to recover it.
00:30:43.646 [2024-06-07 16:39:10.312707] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.646 [2024-06-07 16:39:10.312716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.646 qpair failed and we were unable to recover it.
00:30:43.646 [2024-06-07 16:39:10.313048] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.646 [2024-06-07 16:39:10.313056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.646 qpair failed and we were unable to recover it.
00:30:43.646 [2024-06-07 16:39:10.313470] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.646 [2024-06-07 16:39:10.313478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.646 qpair failed and we were unable to recover it.
00:30:43.646 [2024-06-07 16:39:10.313753] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.646 [2024-06-07 16:39:10.313762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.646 qpair failed and we were unable to recover it.
00:30:43.646 [2024-06-07 16:39:10.314140] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.646 [2024-06-07 16:39:10.314148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.646 qpair failed and we were unable to recover it.
00:30:43.646 [2024-06-07 16:39:10.314517] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.646 [2024-06-07 16:39:10.314527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.646 qpair failed and we were unable to recover it.
00:30:43.646 [2024-06-07 16:39:10.314886] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.646 [2024-06-07 16:39:10.314895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.646 qpair failed and we were unable to recover it.
00:30:43.646 [2024-06-07 16:39:10.315278] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.646 [2024-06-07 16:39:10.315286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.646 qpair failed and we were unable to recover it.
00:30:43.646 [2024-06-07 16:39:10.315522] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.646 [2024-06-07 16:39:10.315530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.646 qpair failed and we were unable to recover it.
00:30:43.646 [2024-06-07 16:39:10.315940] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.646 [2024-06-07 16:39:10.315949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.646 qpair failed and we were unable to recover it.
00:30:43.646 [2024-06-07 16:39:10.316344] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.646 [2024-06-07 16:39:10.316352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.646 qpair failed and we were unable to recover it.
00:30:43.646 [2024-06-07 16:39:10.316646] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.646 [2024-06-07 16:39:10.316655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.646 qpair failed and we were unable to recover it.
00:30:43.646 [2024-06-07 16:39:10.316989] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.646 [2024-06-07 16:39:10.316998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.646 qpair failed and we were unable to recover it.
00:30:43.646 [2024-06-07 16:39:10.317387] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.646 [2024-06-07 16:39:10.317395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.646 qpair failed and we were unable to recover it.
00:30:43.646 [2024-06-07 16:39:10.317637] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.646 [2024-06-07 16:39:10.317645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.646 qpair failed and we were unable to recover it.
00:30:43.646 [2024-06-07 16:39:10.317870] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.646 [2024-06-07 16:39:10.317881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.646 qpair failed and we were unable to recover it.
00:30:43.646 [2024-06-07 16:39:10.318274] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.646 [2024-06-07 16:39:10.318283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.646 qpair failed and we were unable to recover it.
00:30:43.646 [2024-06-07 16:39:10.318680] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.646 [2024-06-07 16:39:10.318688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.646 qpair failed and we were unable to recover it.
00:30:43.646 [2024-06-07 16:39:10.319067] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.646 [2024-06-07 16:39:10.319076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.646 qpair failed and we were unable to recover it.
00:30:43.646 [2024-06-07 16:39:10.319499] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.646 [2024-06-07 16:39:10.319507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.646 qpair failed and we were unable to recover it.
00:30:43.646 [2024-06-07 16:39:10.319855] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.646 [2024-06-07 16:39:10.319864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.646 qpair failed and we were unable to recover it.
00:30:43.646 [2024-06-07 16:39:10.320233] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.646 [2024-06-07 16:39:10.320242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.646 qpair failed and we were unable to recover it.
00:30:43.646 [2024-06-07 16:39:10.320593] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.646 [2024-06-07 16:39:10.320601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.646 qpair failed and we were unable to recover it.
00:30:43.646 [2024-06-07 16:39:10.320994] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.646 [2024-06-07 16:39:10.321001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.646 qpair failed and we were unable to recover it.
00:30:43.646 [2024-06-07 16:39:10.321413] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.647 [2024-06-07 16:39:10.321421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.647 qpair failed and we were unable to recover it.
00:30:43.647 [2024-06-07 16:39:10.321799] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.647 [2024-06-07 16:39:10.321806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.647 qpair failed and we were unable to recover it.
00:30:43.647 [2024-06-07 16:39:10.322157] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.647 [2024-06-07 16:39:10.322166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.647 qpair failed and we were unable to recover it.
00:30:43.647 [2024-06-07 16:39:10.322501] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.647 [2024-06-07 16:39:10.322508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.647 qpair failed and we were unable to recover it.
00:30:43.647 [2024-06-07 16:39:10.322909] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.647 [2024-06-07 16:39:10.322917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.647 qpair failed and we were unable to recover it.
00:30:43.647 [2024-06-07 16:39:10.323303] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.647 [2024-06-07 16:39:10.323311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.647 qpair failed and we were unable to recover it.
00:30:43.647 [2024-06-07 16:39:10.323655] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.647 [2024-06-07 16:39:10.323664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.647 qpair failed and we were unable to recover it.
00:30:43.647 [2024-06-07 16:39:10.324028] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.647 [2024-06-07 16:39:10.324036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.647 qpair failed and we were unable to recover it.
00:30:43.647 [2024-06-07 16:39:10.324197] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.647 [2024-06-07 16:39:10.324206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.647 qpair failed and we were unable to recover it.
00:30:43.647 [2024-06-07 16:39:10.324695] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.647 [2024-06-07 16:39:10.324704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.647 qpair failed and we were unable to recover it.
00:30:43.647 [2024-06-07 16:39:10.325046] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.647 [2024-06-07 16:39:10.325054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.647 qpair failed and we were unable to recover it.
00:30:43.647 [2024-06-07 16:39:10.325418] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.647 [2024-06-07 16:39:10.325426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.647 qpair failed and we were unable to recover it.
00:30:43.647 [2024-06-07 16:39:10.325901] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.647 [2024-06-07 16:39:10.325908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.647 qpair failed and we were unable to recover it.
00:30:43.647 [2024-06-07 16:39:10.326176] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.647 [2024-06-07 16:39:10.326184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.647 qpair failed and we were unable to recover it.
00:30:43.647 [2024-06-07 16:39:10.326450] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.647 [2024-06-07 16:39:10.326459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.647 qpair failed and we were unable to recover it.
00:30:43.647 [2024-06-07 16:39:10.326841] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.647 [2024-06-07 16:39:10.326849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.647 qpair failed and we were unable to recover it.
00:30:43.647 [2024-06-07 16:39:10.327232] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.647 [2024-06-07 16:39:10.327240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.647 qpair failed and we were unable to recover it.
00:30:43.647 [2024-06-07 16:39:10.327523] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.647 [2024-06-07 16:39:10.327533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.647 qpair failed and we were unable to recover it.
00:30:43.647 [2024-06-07 16:39:10.327915] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.647 [2024-06-07 16:39:10.327924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.647 qpair failed and we were unable to recover it.
00:30:43.647 [2024-06-07 16:39:10.328298] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.647 [2024-06-07 16:39:10.328307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.647 qpair failed and we were unable to recover it.
00:30:43.647 [2024-06-07 16:39:10.328590] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.647 [2024-06-07 16:39:10.328599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.647 qpair failed and we were unable to recover it.
00:30:43.647 [2024-06-07 16:39:10.328976] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.647 [2024-06-07 16:39:10.328986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.647 qpair failed and we were unable to recover it.
00:30:43.647 [2024-06-07 16:39:10.329202] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.647 [2024-06-07 16:39:10.329210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.647 qpair failed and we were unable to recover it.
00:30:43.647 [2024-06-07 16:39:10.329639] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.647 [2024-06-07 16:39:10.329647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.647 qpair failed and we were unable to recover it.
00:30:43.647 [2024-06-07 16:39:10.329852] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.647 [2024-06-07 16:39:10.329860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.647 qpair failed and we were unable to recover it.
00:30:43.647 [2024-06-07 16:39:10.330120] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.647 [2024-06-07 16:39:10.330128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.647 qpair failed and we were unable to recover it.
00:30:43.647 [2024-06-07 16:39:10.330518] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.647 [2024-06-07 16:39:10.330526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.647 qpair failed and we were unable to recover it.
00:30:43.647 [2024-06-07 16:39:10.330983] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.647 [2024-06-07 16:39:10.330992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.647 qpair failed and we were unable to recover it.
00:30:43.647 [2024-06-07 16:39:10.331241] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.647 [2024-06-07 16:39:10.331250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.647 qpair failed and we were unable to recover it.
00:30:43.647 [2024-06-07 16:39:10.331623] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.647 [2024-06-07 16:39:10.331631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.647 qpair failed and we were unable to recover it.
00:30:43.647 [2024-06-07 16:39:10.332030] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.647 [2024-06-07 16:39:10.332039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.647 qpair failed and we were unable to recover it.
00:30:43.647 [2024-06-07 16:39:10.332435] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.647 [2024-06-07 16:39:10.332443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.647 qpair failed and we were unable to recover it. 00:30:43.647 [2024-06-07 16:39:10.332802] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.648 [2024-06-07 16:39:10.332812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.648 qpair failed and we were unable to recover it. 00:30:43.648 [2024-06-07 16:39:10.333186] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.648 [2024-06-07 16:39:10.333195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.648 qpair failed and we were unable to recover it. 00:30:43.648 [2024-06-07 16:39:10.333499] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.648 [2024-06-07 16:39:10.333507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.648 qpair failed and we were unable to recover it. 00:30:43.648 [2024-06-07 16:39:10.333875] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.648 [2024-06-07 16:39:10.333882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.648 qpair failed and we were unable to recover it. 
00:30:43.648 [2024-06-07 16:39:10.334132] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.648 [2024-06-07 16:39:10.334140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.648 qpair failed and we were unable to recover it. 00:30:43.648 [2024-06-07 16:39:10.334502] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.648 [2024-06-07 16:39:10.334510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.648 qpair failed and we were unable to recover it. 00:30:43.648 [2024-06-07 16:39:10.334900] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.648 [2024-06-07 16:39:10.334909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.648 qpair failed and we were unable to recover it. 00:30:43.648 [2024-06-07 16:39:10.335175] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.648 [2024-06-07 16:39:10.335184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.648 qpair failed and we were unable to recover it. 00:30:43.648 [2024-06-07 16:39:10.335499] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.648 [2024-06-07 16:39:10.335507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.648 qpair failed and we were unable to recover it. 
00:30:43.648 [2024-06-07 16:39:10.335889] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.648 [2024-06-07 16:39:10.335896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.648 qpair failed and we were unable to recover it. 00:30:43.648 [2024-06-07 16:39:10.336223] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.648 [2024-06-07 16:39:10.336231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.648 qpair failed and we were unable to recover it. 00:30:43.648 [2024-06-07 16:39:10.336482] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.648 [2024-06-07 16:39:10.336489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.648 qpair failed and we were unable to recover it. 00:30:43.648 [2024-06-07 16:39:10.336910] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.648 [2024-06-07 16:39:10.336917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.648 qpair failed and we were unable to recover it. 00:30:43.648 [2024-06-07 16:39:10.337334] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.648 [2024-06-07 16:39:10.337341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.648 qpair failed and we were unable to recover it. 
00:30:43.648 [2024-06-07 16:39:10.337775] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.648 [2024-06-07 16:39:10.337783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.648 qpair failed and we were unable to recover it. 00:30:43.648 [2024-06-07 16:39:10.338149] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.648 [2024-06-07 16:39:10.338161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.648 qpair failed and we were unable to recover it. 00:30:43.648 [2024-06-07 16:39:10.338493] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.648 [2024-06-07 16:39:10.338501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.648 qpair failed and we were unable to recover it. 00:30:43.648 [2024-06-07 16:39:10.338817] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.648 [2024-06-07 16:39:10.338826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.648 qpair failed and we were unable to recover it. 00:30:43.648 [2024-06-07 16:39:10.339194] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.648 [2024-06-07 16:39:10.339202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.648 qpair failed and we were unable to recover it. 
00:30:43.648 [2024-06-07 16:39:10.339529] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.648 [2024-06-07 16:39:10.339538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.648 qpair failed and we were unable to recover it. 00:30:43.648 [2024-06-07 16:39:10.339937] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.648 [2024-06-07 16:39:10.339945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.648 qpair failed and we were unable to recover it. 00:30:43.648 [2024-06-07 16:39:10.340306] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.648 [2024-06-07 16:39:10.340318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.648 qpair failed and we were unable to recover it. 00:30:43.648 [2024-06-07 16:39:10.340679] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.648 [2024-06-07 16:39:10.340688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.648 qpair failed and we were unable to recover it. 00:30:43.648 [2024-06-07 16:39:10.341079] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.648 [2024-06-07 16:39:10.341086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.648 qpair failed and we were unable to recover it. 
00:30:43.648 [2024-06-07 16:39:10.341350] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.648 [2024-06-07 16:39:10.341357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.648 qpair failed and we were unable to recover it. 00:30:43.648 [2024-06-07 16:39:10.341621] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.648 [2024-06-07 16:39:10.341630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.648 qpair failed and we were unable to recover it. 00:30:43.648 [2024-06-07 16:39:10.342012] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.648 [2024-06-07 16:39:10.342020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.648 qpair failed and we were unable to recover it. 00:30:43.648 [2024-06-07 16:39:10.342387] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.648 [2024-06-07 16:39:10.342395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.648 qpair failed and we were unable to recover it. 00:30:43.648 [2024-06-07 16:39:10.342773] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.648 [2024-06-07 16:39:10.342782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.648 qpair failed and we were unable to recover it. 
00:30:43.648 [2024-06-07 16:39:10.343138] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.648 [2024-06-07 16:39:10.343147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.648 qpair failed and we were unable to recover it. 00:30:43.648 [2024-06-07 16:39:10.343518] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.648 [2024-06-07 16:39:10.343527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.648 qpair failed and we were unable to recover it. 00:30:43.648 [2024-06-07 16:39:10.343971] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.648 [2024-06-07 16:39:10.343979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.648 qpair failed and we were unable to recover it. 00:30:43.648 [2024-06-07 16:39:10.344335] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.648 [2024-06-07 16:39:10.344343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.648 qpair failed and we were unable to recover it. 00:30:43.648 [2024-06-07 16:39:10.344674] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.648 [2024-06-07 16:39:10.344682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.648 qpair failed and we were unable to recover it. 
00:30:43.648 [2024-06-07 16:39:10.345113] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.648 [2024-06-07 16:39:10.345122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.648 qpair failed and we were unable to recover it. 00:30:43.648 [2024-06-07 16:39:10.345586] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.648 [2024-06-07 16:39:10.345594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.648 qpair failed and we were unable to recover it. 00:30:43.648 [2024-06-07 16:39:10.345953] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.648 [2024-06-07 16:39:10.345962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.648 qpair failed and we were unable to recover it. 00:30:43.648 [2024-06-07 16:39:10.346328] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.648 [2024-06-07 16:39:10.346336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.648 qpair failed and we were unable to recover it. 00:30:43.648 [2024-06-07 16:39:10.346727] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.649 [2024-06-07 16:39:10.346736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.649 qpair failed and we were unable to recover it. 
00:30:43.649 [2024-06-07 16:39:10.346940] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.649 [2024-06-07 16:39:10.346949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.649 qpair failed and we were unable to recover it. 00:30:43.649 [2024-06-07 16:39:10.347217] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.649 [2024-06-07 16:39:10.347225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.649 qpair failed and we were unable to recover it. 00:30:43.649 [2024-06-07 16:39:10.347429] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.649 [2024-06-07 16:39:10.347439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.649 qpair failed and we were unable to recover it. 00:30:43.649 [2024-06-07 16:39:10.347611] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.649 [2024-06-07 16:39:10.347619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.649 qpair failed and we were unable to recover it. 00:30:43.649 [2024-06-07 16:39:10.347997] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.649 [2024-06-07 16:39:10.348005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.649 qpair failed and we were unable to recover it. 
00:30:43.649 [2024-06-07 16:39:10.348384] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.649 [2024-06-07 16:39:10.348392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.649 qpair failed and we were unable to recover it. 00:30:43.649 [2024-06-07 16:39:10.348675] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.649 [2024-06-07 16:39:10.348684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.649 qpair failed and we were unable to recover it. 00:30:43.649 [2024-06-07 16:39:10.348926] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.649 [2024-06-07 16:39:10.348933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.649 qpair failed and we were unable to recover it. 00:30:43.649 [2024-06-07 16:39:10.349178] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.649 [2024-06-07 16:39:10.349186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.649 qpair failed and we were unable to recover it. 00:30:43.649 [2024-06-07 16:39:10.349445] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.649 [2024-06-07 16:39:10.349453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.649 qpair failed and we were unable to recover it. 
00:30:43.649 [2024-06-07 16:39:10.349811] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.649 [2024-06-07 16:39:10.349819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.649 qpair failed and we were unable to recover it. 00:30:43.649 [2024-06-07 16:39:10.350225] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.649 [2024-06-07 16:39:10.350232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.649 qpair failed and we were unable to recover it. 00:30:43.649 [2024-06-07 16:39:10.350618] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.649 [2024-06-07 16:39:10.350626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.649 qpair failed and we were unable to recover it. 00:30:43.649 [2024-06-07 16:39:10.350997] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.649 [2024-06-07 16:39:10.351005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.649 qpair failed and we were unable to recover it. 00:30:43.649 [2024-06-07 16:39:10.351265] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.649 [2024-06-07 16:39:10.351273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.649 qpair failed and we were unable to recover it. 
00:30:43.649 [2024-06-07 16:39:10.351552] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.649 [2024-06-07 16:39:10.351561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.649 qpair failed and we were unable to recover it. 00:30:43.649 [2024-06-07 16:39:10.351916] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.649 [2024-06-07 16:39:10.351924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.649 qpair failed and we were unable to recover it. 00:30:43.649 [2024-06-07 16:39:10.352332] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.649 [2024-06-07 16:39:10.352342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.649 qpair failed and we were unable to recover it. 00:30:43.649 [2024-06-07 16:39:10.352657] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.649 [2024-06-07 16:39:10.352666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.649 qpair failed and we were unable to recover it. 00:30:43.649 [2024-06-07 16:39:10.352968] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.649 [2024-06-07 16:39:10.352976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.649 qpair failed and we were unable to recover it. 
00:30:43.649 [2024-06-07 16:39:10.353244] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.649 [2024-06-07 16:39:10.353252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.649 qpair failed and we were unable to recover it. 00:30:43.649 [2024-06-07 16:39:10.353540] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.649 [2024-06-07 16:39:10.353548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.649 qpair failed and we were unable to recover it. 00:30:43.649 [2024-06-07 16:39:10.353914] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.649 [2024-06-07 16:39:10.353921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.649 qpair failed and we were unable to recover it. 00:30:43.649 [2024-06-07 16:39:10.354190] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.649 [2024-06-07 16:39:10.354197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.649 qpair failed and we were unable to recover it. 00:30:43.649 [2024-06-07 16:39:10.354522] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.649 [2024-06-07 16:39:10.354531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.649 qpair failed and we were unable to recover it. 
00:30:43.649 [2024-06-07 16:39:10.354897] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.649 [2024-06-07 16:39:10.354905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.649 qpair failed and we were unable to recover it. 00:30:43.649 [2024-06-07 16:39:10.355263] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.649 [2024-06-07 16:39:10.355271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.649 qpair failed and we were unable to recover it. 00:30:43.649 [2024-06-07 16:39:10.355752] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.649 [2024-06-07 16:39:10.355761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.649 qpair failed and we were unable to recover it. 00:30:43.649 [2024-06-07 16:39:10.356121] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.649 [2024-06-07 16:39:10.356129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.649 qpair failed and we were unable to recover it. 00:30:43.649 [2024-06-07 16:39:10.356514] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.649 [2024-06-07 16:39:10.356523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.649 qpair failed and we were unable to recover it. 
00:30:43.649 [2024-06-07 16:39:10.356955] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.649 [2024-06-07 16:39:10.356964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.649 qpair failed and we were unable to recover it. 00:30:43.649 [2024-06-07 16:39:10.357353] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.649 [2024-06-07 16:39:10.357361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.649 qpair failed and we were unable to recover it. 00:30:43.649 [2024-06-07 16:39:10.357636] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.649 [2024-06-07 16:39:10.357643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.649 qpair failed and we were unable to recover it. 00:30:43.649 [2024-06-07 16:39:10.357965] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.649 [2024-06-07 16:39:10.357974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.649 qpair failed and we were unable to recover it. 00:30:43.649 [2024-06-07 16:39:10.358385] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.649 [2024-06-07 16:39:10.358394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.649 qpair failed and we were unable to recover it. 
00:30:43.649 [2024-06-07 16:39:10.358630] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.649 [2024-06-07 16:39:10.358638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.649 qpair failed and we were unable to recover it. 00:30:43.649 [2024-06-07 16:39:10.358867] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.649 [2024-06-07 16:39:10.358875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.649 qpair failed and we were unable to recover it. 00:30:43.649 [2024-06-07 16:39:10.359263] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.649 [2024-06-07 16:39:10.359271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.650 qpair failed and we were unable to recover it. 00:30:43.650 [2024-06-07 16:39:10.359655] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.650 [2024-06-07 16:39:10.359662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.650 qpair failed and we were unable to recover it. 00:30:43.650 [2024-06-07 16:39:10.359990] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.650 [2024-06-07 16:39:10.359997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.650 qpair failed and we were unable to recover it. 
00:30:43.650 [2024-06-07 16:39:10.360474] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.650 [2024-06-07 16:39:10.360481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.650 qpair failed and we were unable to recover it. 00:30:43.650 [2024-06-07 16:39:10.360852] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.650 [2024-06-07 16:39:10.360860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.650 qpair failed and we were unable to recover it. 00:30:43.650 [2024-06-07 16:39:10.361226] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.650 [2024-06-07 16:39:10.361233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.650 qpair failed and we were unable to recover it. 00:30:43.650 [2024-06-07 16:39:10.361510] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.650 [2024-06-07 16:39:10.361517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.650 qpair failed and we were unable to recover it. 00:30:43.650 [2024-06-07 16:39:10.361795] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.650 [2024-06-07 16:39:10.361803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.650 qpair failed and we were unable to recover it. 
00:30:43.650 [2024-06-07 16:39:10.361995] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.650 [2024-06-07 16:39:10.362003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.650 qpair failed and we were unable to recover it. 00:30:43.650 [2024-06-07 16:39:10.362400] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.650 [2024-06-07 16:39:10.362411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.650 qpair failed and we were unable to recover it. 00:30:43.650 [2024-06-07 16:39:10.362663] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.650 [2024-06-07 16:39:10.362670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.650 qpair failed and we were unable to recover it. 00:30:43.650 [2024-06-07 16:39:10.363036] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.650 [2024-06-07 16:39:10.363043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.650 qpair failed and we were unable to recover it. 00:30:43.650 [2024-06-07 16:39:10.363412] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.650 [2024-06-07 16:39:10.363420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.650 qpair failed and we were unable to recover it. 
00:30:43.653 [2024-06-07 16:39:10.402755] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.653 [2024-06-07 16:39:10.402763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.653 qpair failed and we were unable to recover it. 00:30:43.653 [2024-06-07 16:39:10.403166] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.653 [2024-06-07 16:39:10.403174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.653 qpair failed and we were unable to recover it. 00:30:43.653 [2024-06-07 16:39:10.403490] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.653 [2024-06-07 16:39:10.403499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.653 qpair failed and we were unable to recover it. 00:30:43.653 [2024-06-07 16:39:10.403871] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.653 [2024-06-07 16:39:10.403878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.653 qpair failed and we were unable to recover it. 00:30:43.653 [2024-06-07 16:39:10.404250] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.653 [2024-06-07 16:39:10.404258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.653 qpair failed and we were unable to recover it. 
00:30:43.653 [2024-06-07 16:39:10.404451] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.653 [2024-06-07 16:39:10.404458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.653 qpair failed and we were unable to recover it. 00:30:43.653 [2024-06-07 16:39:10.404800] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.653 [2024-06-07 16:39:10.404808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.653 qpair failed and we were unable to recover it. 00:30:43.653 [2024-06-07 16:39:10.405061] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.653 [2024-06-07 16:39:10.405068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.653 qpair failed and we were unable to recover it. 00:30:43.653 [2024-06-07 16:39:10.405435] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.653 [2024-06-07 16:39:10.405443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.653 qpair failed and we were unable to recover it. 00:30:43.653 [2024-06-07 16:39:10.405718] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.653 [2024-06-07 16:39:10.405726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.653 qpair failed and we were unable to recover it. 
00:30:43.653 [2024-06-07 16:39:10.406095] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.653 [2024-06-07 16:39:10.406102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.653 qpair failed and we were unable to recover it. 00:30:43.653 [2024-06-07 16:39:10.406494] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.653 [2024-06-07 16:39:10.406501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.653 qpair failed and we were unable to recover it. 00:30:43.653 [2024-06-07 16:39:10.406757] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.653 [2024-06-07 16:39:10.406765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.653 qpair failed and we were unable to recover it. 00:30:43.653 [2024-06-07 16:39:10.407131] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.653 [2024-06-07 16:39:10.407139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.653 qpair failed and we were unable to recover it. 00:30:43.653 [2024-06-07 16:39:10.407505] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.653 [2024-06-07 16:39:10.407514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.653 qpair failed and we were unable to recover it. 
00:30:43.653 [2024-06-07 16:39:10.407866] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.653 [2024-06-07 16:39:10.407874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.653 qpair failed and we were unable to recover it. 00:30:43.653 [2024-06-07 16:39:10.408250] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.653 [2024-06-07 16:39:10.408258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.653 qpair failed and we were unable to recover it. 00:30:43.653 [2024-06-07 16:39:10.408589] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.653 [2024-06-07 16:39:10.408600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.653 qpair failed and we were unable to recover it. 00:30:43.653 [2024-06-07 16:39:10.408877] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.653 [2024-06-07 16:39:10.408885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.653 qpair failed and we were unable to recover it. 00:30:43.653 [2024-06-07 16:39:10.409150] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.653 [2024-06-07 16:39:10.409158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.653 qpair failed and we were unable to recover it. 
00:30:43.653 [2024-06-07 16:39:10.409410] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.653 [2024-06-07 16:39:10.409417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.653 qpair failed and we were unable to recover it. 00:30:43.653 [2024-06-07 16:39:10.409870] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.653 [2024-06-07 16:39:10.409878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.653 qpair failed and we were unable to recover it. 00:30:43.653 [2024-06-07 16:39:10.410236] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.653 [2024-06-07 16:39:10.410245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.653 qpair failed and we were unable to recover it. 00:30:43.653 [2024-06-07 16:39:10.410614] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.653 [2024-06-07 16:39:10.410622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.653 qpair failed and we were unable to recover it. 00:30:43.653 [2024-06-07 16:39:10.410992] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.653 [2024-06-07 16:39:10.411000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.653 qpair failed and we were unable to recover it. 
00:30:43.653 [2024-06-07 16:39:10.411388] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.654 [2024-06-07 16:39:10.411396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.654 qpair failed and we were unable to recover it. 00:30:43.654 [2024-06-07 16:39:10.411766] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.654 [2024-06-07 16:39:10.411774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.654 qpair failed and we were unable to recover it. 00:30:43.654 [2024-06-07 16:39:10.412162] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.654 [2024-06-07 16:39:10.412170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.654 qpair failed and we were unable to recover it. 00:30:43.654 [2024-06-07 16:39:10.412563] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.654 [2024-06-07 16:39:10.412571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.654 qpair failed and we were unable to recover it. 00:30:43.654 [2024-06-07 16:39:10.412936] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.654 [2024-06-07 16:39:10.412945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.654 qpair failed and we were unable to recover it. 
00:30:43.654 [2024-06-07 16:39:10.413302] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.654 [2024-06-07 16:39:10.413311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.654 qpair failed and we were unable to recover it. 00:30:43.654 [2024-06-07 16:39:10.413672] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.654 [2024-06-07 16:39:10.413680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.654 qpair failed and we were unable to recover it. 00:30:43.654 [2024-06-07 16:39:10.414044] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.654 [2024-06-07 16:39:10.414052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.654 qpair failed and we were unable to recover it. 00:30:43.654 [2024-06-07 16:39:10.414208] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.654 [2024-06-07 16:39:10.414218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.654 qpair failed and we were unable to recover it. 00:30:43.654 [2024-06-07 16:39:10.414503] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.654 [2024-06-07 16:39:10.414511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.654 qpair failed and we were unable to recover it. 
00:30:43.654 [2024-06-07 16:39:10.414906] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.654 [2024-06-07 16:39:10.414914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.654 qpair failed and we were unable to recover it. 00:30:43.654 [2024-06-07 16:39:10.415060] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.654 [2024-06-07 16:39:10.415068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.654 qpair failed and we were unable to recover it. 00:30:43.654 [2024-06-07 16:39:10.415431] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.654 [2024-06-07 16:39:10.415439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.654 qpair failed and we were unable to recover it. 00:30:43.654 [2024-06-07 16:39:10.415816] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.654 [2024-06-07 16:39:10.415824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.654 qpair failed and we were unable to recover it. 00:30:43.654 [2024-06-07 16:39:10.416217] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.654 [2024-06-07 16:39:10.416225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.654 qpair failed and we were unable to recover it. 
00:30:43.654 [2024-06-07 16:39:10.416464] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.654 [2024-06-07 16:39:10.416473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.654 qpair failed and we were unable to recover it. 00:30:43.654 [2024-06-07 16:39:10.416845] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.654 [2024-06-07 16:39:10.416853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.654 qpair failed and we were unable to recover it. 00:30:43.654 [2024-06-07 16:39:10.417229] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.654 [2024-06-07 16:39:10.417237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.654 qpair failed and we were unable to recover it. 00:30:43.654 [2024-06-07 16:39:10.417629] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.654 [2024-06-07 16:39:10.417637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.654 qpair failed and we were unable to recover it. 00:30:43.654 [2024-06-07 16:39:10.417947] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.654 [2024-06-07 16:39:10.417956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.654 qpair failed and we were unable to recover it. 
00:30:43.654 [2024-06-07 16:39:10.418325] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.654 [2024-06-07 16:39:10.418333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.654 qpair failed and we were unable to recover it. 00:30:43.654 [2024-06-07 16:39:10.418703] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.654 [2024-06-07 16:39:10.418711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.654 qpair failed and we were unable to recover it. 00:30:43.654 [2024-06-07 16:39:10.418996] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.654 [2024-06-07 16:39:10.419003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.654 qpair failed and we were unable to recover it. 00:30:43.654 [2024-06-07 16:39:10.419365] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.654 [2024-06-07 16:39:10.419372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.654 qpair failed and we were unable to recover it. 00:30:43.654 [2024-06-07 16:39:10.419749] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.654 [2024-06-07 16:39:10.419757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.654 qpair failed and we were unable to recover it. 
00:30:43.654 [2024-06-07 16:39:10.420131] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.654 [2024-06-07 16:39:10.420140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.654 qpair failed and we were unable to recover it. 00:30:43.654 [2024-06-07 16:39:10.420389] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.654 [2024-06-07 16:39:10.420396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.654 qpair failed and we were unable to recover it. 00:30:43.654 [2024-06-07 16:39:10.420793] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.654 [2024-06-07 16:39:10.420801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.654 qpair failed and we were unable to recover it. 00:30:43.654 [2024-06-07 16:39:10.421169] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.654 [2024-06-07 16:39:10.421177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.654 qpair failed and we were unable to recover it. 00:30:43.654 [2024-06-07 16:39:10.421676] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.654 [2024-06-07 16:39:10.421704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.654 qpair failed and we were unable to recover it. 
00:30:43.654 [2024-06-07 16:39:10.422097] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.654 [2024-06-07 16:39:10.422107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.654 qpair failed and we were unable to recover it. 00:30:43.654 [2024-06-07 16:39:10.422380] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.654 [2024-06-07 16:39:10.422389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.654 qpair failed and we were unable to recover it. 00:30:43.654 [2024-06-07 16:39:10.422742] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.654 [2024-06-07 16:39:10.422754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.654 qpair failed and we were unable to recover it. 00:30:43.654 [2024-06-07 16:39:10.423119] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.654 [2024-06-07 16:39:10.423128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.654 qpair failed and we were unable to recover it. 00:30:43.654 [2024-06-07 16:39:10.423267] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.654 [2024-06-07 16:39:10.423277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.654 qpair failed and we were unable to recover it. 
00:30:43.654 [2024-06-07 16:39:10.423620] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.654 [2024-06-07 16:39:10.423628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.654 qpair failed and we were unable to recover it. 00:30:43.654 [2024-06-07 16:39:10.424021] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.654 [2024-06-07 16:39:10.424029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.654 qpair failed and we were unable to recover it. 00:30:43.654 [2024-06-07 16:39:10.424274] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.655 [2024-06-07 16:39:10.424282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.655 qpair failed and we were unable to recover it. 00:30:43.655 [2024-06-07 16:39:10.424773] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.655 [2024-06-07 16:39:10.424802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.655 qpair failed and we were unable to recover it. 00:30:43.655 [2024-06-07 16:39:10.425165] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.655 [2024-06-07 16:39:10.425174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.655 qpair failed and we were unable to recover it. 
00:30:43.655 [2024-06-07 16:39:10.425722] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.655 [2024-06-07 16:39:10.425750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.655 qpair failed and we were unable to recover it. 00:30:43.655 [2024-06-07 16:39:10.426122] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.655 [2024-06-07 16:39:10.426132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.655 qpair failed and we were unable to recover it. 00:30:43.655 [2024-06-07 16:39:10.426368] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.655 [2024-06-07 16:39:10.426376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.655 qpair failed and we were unable to recover it. 00:30:43.655 [2024-06-07 16:39:10.426651] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.655 [2024-06-07 16:39:10.426659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.655 qpair failed and we were unable to recover it. 00:30:43.655 [2024-06-07 16:39:10.427036] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.655 [2024-06-07 16:39:10.427045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.655 qpair failed and we were unable to recover it. 
00:30:43.655 [2024-06-07 16:39:10.427410] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.655 [2024-06-07 16:39:10.427418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.655 qpair failed and we were unable to recover it. 00:30:43.655 [2024-06-07 16:39:10.427698] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.655 [2024-06-07 16:39:10.427706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.655 qpair failed and we were unable to recover it. 00:30:43.655 [2024-06-07 16:39:10.428079] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.655 [2024-06-07 16:39:10.428088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.655 qpair failed and we were unable to recover it. 00:30:43.655 [2024-06-07 16:39:10.428323] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.655 [2024-06-07 16:39:10.428331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.655 qpair failed and we were unable to recover it. 00:30:43.655 [2024-06-07 16:39:10.428707] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.655 [2024-06-07 16:39:10.428716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.655 qpair failed and we were unable to recover it. 
00:30:43.655 [2024-06-07 16:39:10.429075] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.655 [2024-06-07 16:39:10.429082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.655 qpair failed and we were unable to recover it. 00:30:43.655 [2024-06-07 16:39:10.429441] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.655 [2024-06-07 16:39:10.429449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.655 qpair failed and we were unable to recover it. 00:30:43.655 [2024-06-07 16:39:10.429800] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.655 [2024-06-07 16:39:10.429808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.655 qpair failed and we were unable to recover it. 00:30:43.655 [2024-06-07 16:39:10.430165] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.655 [2024-06-07 16:39:10.430173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.655 qpair failed and we were unable to recover it. 00:30:43.655 [2024-06-07 16:39:10.430541] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.655 [2024-06-07 16:39:10.430548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.655 qpair failed and we were unable to recover it. 
00:30:43.658 [2024-06-07 16:39:10.468307] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.658 [2024-06-07 16:39:10.468314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.658 qpair failed and we were unable to recover it. 00:30:43.658 [2024-06-07 16:39:10.468657] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.658 [2024-06-07 16:39:10.468665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.658 qpair failed and we were unable to recover it. 00:30:43.658 [2024-06-07 16:39:10.469034] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.658 [2024-06-07 16:39:10.469042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.658 qpair failed and we were unable to recover it. 00:30:43.658 [2024-06-07 16:39:10.469431] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.658 [2024-06-07 16:39:10.469439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.658 qpair failed and we were unable to recover it. 00:30:43.658 [2024-06-07 16:39:10.469720] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.658 [2024-06-07 16:39:10.469727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.658 qpair failed and we were unable to recover it. 
00:30:43.658 [2024-06-07 16:39:10.469958] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.658 [2024-06-07 16:39:10.469967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.658 qpair failed and we were unable to recover it. 00:30:43.658 [2024-06-07 16:39:10.470034] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.658 [2024-06-07 16:39:10.470041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.658 qpair failed and we were unable to recover it. 00:30:43.658 [2024-06-07 16:39:10.470246] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.658 [2024-06-07 16:39:10.470253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.658 qpair failed and we were unable to recover it. 00:30:43.658 [2024-06-07 16:39:10.470539] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.658 [2024-06-07 16:39:10.470548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.658 qpair failed and we were unable to recover it. 00:30:43.658 [2024-06-07 16:39:10.470817] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.658 [2024-06-07 16:39:10.470825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.658 qpair failed and we were unable to recover it. 
00:30:43.658 [2024-06-07 16:39:10.471188] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.658 [2024-06-07 16:39:10.471196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.658 qpair failed and we were unable to recover it. 00:30:43.658 [2024-06-07 16:39:10.471572] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.658 [2024-06-07 16:39:10.471580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.658 qpair failed and we were unable to recover it. 00:30:43.658 [2024-06-07 16:39:10.471949] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.658 [2024-06-07 16:39:10.471957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.658 qpair failed and we were unable to recover it. 00:30:43.658 [2024-06-07 16:39:10.472319] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.658 [2024-06-07 16:39:10.472326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.658 qpair failed and we were unable to recover it. 00:30:43.658 [2024-06-07 16:39:10.472660] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.658 [2024-06-07 16:39:10.472668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.658 qpair failed and we were unable to recover it. 
00:30:43.658 [2024-06-07 16:39:10.473031] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.658 [2024-06-07 16:39:10.473039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.658 qpair failed and we were unable to recover it. 00:30:43.658 [2024-06-07 16:39:10.473425] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.658 [2024-06-07 16:39:10.473433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.658 qpair failed and we were unable to recover it. 00:30:43.938 [2024-06-07 16:39:10.473831] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.938 [2024-06-07 16:39:10.473842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.938 qpair failed and we were unable to recover it. 00:30:43.938 [2024-06-07 16:39:10.474154] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.938 [2024-06-07 16:39:10.474163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.938 qpair failed and we were unable to recover it. 00:30:43.938 [2024-06-07 16:39:10.474405] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.938 [2024-06-07 16:39:10.474414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.938 qpair failed and we were unable to recover it. 
00:30:43.938 [2024-06-07 16:39:10.474793] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.938 [2024-06-07 16:39:10.474801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.938 qpair failed and we were unable to recover it. 00:30:43.938 [2024-06-07 16:39:10.475161] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.938 [2024-06-07 16:39:10.475169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.938 qpair failed and we were unable to recover it. 00:30:43.938 [2024-06-07 16:39:10.475538] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.938 [2024-06-07 16:39:10.475546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.938 qpair failed and we were unable to recover it. 00:30:43.938 [2024-06-07 16:39:10.475937] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.938 [2024-06-07 16:39:10.475945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.938 qpair failed and we were unable to recover it. 00:30:43.938 [2024-06-07 16:39:10.476327] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.938 [2024-06-07 16:39:10.476335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.938 qpair failed and we were unable to recover it. 
00:30:43.938 [2024-06-07 16:39:10.476703] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.938 [2024-06-07 16:39:10.476711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.938 qpair failed and we were unable to recover it. 00:30:43.938 [2024-06-07 16:39:10.477074] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.938 [2024-06-07 16:39:10.477082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.938 qpair failed and we were unable to recover it. 00:30:43.938 [2024-06-07 16:39:10.477441] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.938 [2024-06-07 16:39:10.477452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.938 qpair failed and we were unable to recover it. 00:30:43.938 [2024-06-07 16:39:10.477819] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.938 [2024-06-07 16:39:10.477828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.938 qpair failed and we were unable to recover it. 00:30:43.938 [2024-06-07 16:39:10.478134] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.938 [2024-06-07 16:39:10.478142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.938 qpair failed and we were unable to recover it. 
00:30:43.938 [2024-06-07 16:39:10.478523] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.938 [2024-06-07 16:39:10.478531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.938 qpair failed and we were unable to recover it. 00:30:43.938 [2024-06-07 16:39:10.478904] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.938 [2024-06-07 16:39:10.478912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.938 qpair failed and we were unable to recover it. 00:30:43.938 [2024-06-07 16:39:10.479295] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.938 [2024-06-07 16:39:10.479303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.938 qpair failed and we were unable to recover it. 00:30:43.938 [2024-06-07 16:39:10.479693] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.938 [2024-06-07 16:39:10.479702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.938 qpair failed and we were unable to recover it. 00:30:43.938 [2024-06-07 16:39:10.480069] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.938 [2024-06-07 16:39:10.480076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.938 qpair failed and we were unable to recover it. 
00:30:43.938 [2024-06-07 16:39:10.480382] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.938 [2024-06-07 16:39:10.480391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.938 qpair failed and we were unable to recover it. 00:30:43.938 [2024-06-07 16:39:10.480610] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.938 [2024-06-07 16:39:10.480618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.938 qpair failed and we were unable to recover it. 00:30:43.938 [2024-06-07 16:39:10.480938] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.938 [2024-06-07 16:39:10.480946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.939 qpair failed and we were unable to recover it. 00:30:43.939 [2024-06-07 16:39:10.481312] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.939 [2024-06-07 16:39:10.481321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.939 qpair failed and we were unable to recover it. 00:30:43.939 [2024-06-07 16:39:10.481672] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.939 [2024-06-07 16:39:10.481680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.939 qpair failed and we were unable to recover it. 
00:30:43.939 [2024-06-07 16:39:10.481926] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.939 [2024-06-07 16:39:10.481933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.939 qpair failed and we were unable to recover it. 00:30:43.939 [2024-06-07 16:39:10.482291] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.939 [2024-06-07 16:39:10.482299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.939 qpair failed and we were unable to recover it. 00:30:43.939 [2024-06-07 16:39:10.482676] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.939 [2024-06-07 16:39:10.482686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.939 qpair failed and we were unable to recover it. 00:30:43.939 [2024-06-07 16:39:10.483056] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.939 [2024-06-07 16:39:10.483065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.939 qpair failed and we were unable to recover it. 00:30:43.939 [2024-06-07 16:39:10.483463] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.939 [2024-06-07 16:39:10.483470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.939 qpair failed and we were unable to recover it. 
00:30:43.939 [2024-06-07 16:39:10.483849] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.939 [2024-06-07 16:39:10.483857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.939 qpair failed and we were unable to recover it. 00:30:43.939 [2024-06-07 16:39:10.484195] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.939 [2024-06-07 16:39:10.484204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.939 qpair failed and we were unable to recover it. 00:30:43.939 [2024-06-07 16:39:10.484584] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.939 [2024-06-07 16:39:10.484592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.939 qpair failed and we were unable to recover it. 00:30:43.939 [2024-06-07 16:39:10.484963] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.939 [2024-06-07 16:39:10.484971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.939 qpair failed and we were unable to recover it. 00:30:43.939 [2024-06-07 16:39:10.485337] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.939 [2024-06-07 16:39:10.485345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.939 qpair failed and we were unable to recover it. 
00:30:43.939 [2024-06-07 16:39:10.485721] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.939 [2024-06-07 16:39:10.485729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.939 qpair failed and we were unable to recover it. 00:30:43.939 [2024-06-07 16:39:10.486096] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.939 [2024-06-07 16:39:10.486105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.939 qpair failed and we were unable to recover it. 00:30:43.939 [2024-06-07 16:39:10.486489] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.939 [2024-06-07 16:39:10.486497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.939 qpair failed and we were unable to recover it. 00:30:43.939 [2024-06-07 16:39:10.486863] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.939 [2024-06-07 16:39:10.486871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.939 qpair failed and we were unable to recover it. 00:30:43.939 [2024-06-07 16:39:10.487233] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.939 [2024-06-07 16:39:10.487241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.939 qpair failed and we were unable to recover it. 
00:30:43.939 [2024-06-07 16:39:10.487515] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.939 [2024-06-07 16:39:10.487523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.939 qpair failed and we were unable to recover it. 00:30:43.939 [2024-06-07 16:39:10.487923] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.939 [2024-06-07 16:39:10.487931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.939 qpair failed and we were unable to recover it. 00:30:43.939 [2024-06-07 16:39:10.488124] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.939 [2024-06-07 16:39:10.488134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.939 qpair failed and we were unable to recover it. 00:30:43.939 [2024-06-07 16:39:10.488518] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.939 [2024-06-07 16:39:10.488526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.939 qpair failed and we were unable to recover it. 00:30:43.939 [2024-06-07 16:39:10.488894] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.939 [2024-06-07 16:39:10.488902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.939 qpair failed and we were unable to recover it. 
00:30:43.939 [2024-06-07 16:39:10.489251] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.939 [2024-06-07 16:39:10.489259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.939 qpair failed and we were unable to recover it. 00:30:43.939 [2024-06-07 16:39:10.489712] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.939 [2024-06-07 16:39:10.489720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.939 qpair failed and we were unable to recover it. 00:30:43.939 [2024-06-07 16:39:10.490089] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.939 [2024-06-07 16:39:10.490096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.939 qpair failed and we were unable to recover it. 00:30:43.939 [2024-06-07 16:39:10.490465] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.939 [2024-06-07 16:39:10.490473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.939 qpair failed and we were unable to recover it. 00:30:43.939 [2024-06-07 16:39:10.490838] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.939 [2024-06-07 16:39:10.490846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.939 qpair failed and we were unable to recover it. 
00:30:43.939 [2024-06-07 16:39:10.491210] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.939 [2024-06-07 16:39:10.491218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.939 qpair failed and we were unable to recover it. 00:30:43.939 [2024-06-07 16:39:10.491416] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.939 [2024-06-07 16:39:10.491424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.939 qpair failed and we were unable to recover it. 00:30:43.939 [2024-06-07 16:39:10.491775] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.939 [2024-06-07 16:39:10.491785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.939 qpair failed and we were unable to recover it. 00:30:43.939 [2024-06-07 16:39:10.492178] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.940 [2024-06-07 16:39:10.492186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.940 qpair failed and we were unable to recover it. 00:30:43.940 [2024-06-07 16:39:10.492676] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.940 [2024-06-07 16:39:10.492705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.940 qpair failed and we were unable to recover it. 
00:30:43.940 [2024-06-07 16:39:10.493123] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.940 [2024-06-07 16:39:10.493132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.940 qpair failed and we were unable to recover it. 00:30:43.940 [2024-06-07 16:39:10.493497] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.940 [2024-06-07 16:39:10.493507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.940 qpair failed and we were unable to recover it. 00:30:43.940 [2024-06-07 16:39:10.493851] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.940 [2024-06-07 16:39:10.493860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.940 qpair failed and we were unable to recover it. 00:30:43.940 [2024-06-07 16:39:10.494233] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.940 [2024-06-07 16:39:10.494241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.940 qpair failed and we were unable to recover it. 00:30:43.940 [2024-06-07 16:39:10.494608] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.940 [2024-06-07 16:39:10.494615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.940 qpair failed and we were unable to recover it. 
00:30:43.940 [2024-06-07 16:39:10.494983] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.940 [2024-06-07 16:39:10.494991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.940 qpair failed and we were unable to recover it. 00:30:43.940 [2024-06-07 16:39:10.495379] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.940 [2024-06-07 16:39:10.495387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.940 qpair failed and we were unable to recover it. 00:30:43.940 [2024-06-07 16:39:10.495760] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.940 [2024-06-07 16:39:10.495768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.940 qpair failed and we were unable to recover it. 00:30:43.940 [2024-06-07 16:39:10.496040] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.940 [2024-06-07 16:39:10.496047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.940 qpair failed and we were unable to recover it. 00:30:43.940 [2024-06-07 16:39:10.496315] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.940 [2024-06-07 16:39:10.496324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.940 qpair failed and we were unable to recover it. 
00:30:43.942 [... identical connect() failed, errno = 111 / nvme_tcp_qpair_connect_sock errors for tqpair=0x7f61d8000b90 (addr=10.0.0.2, port=4420) repeat through 16:39:10.535715; every qpair failed and could not be recovered ...]
00:30:43.943 [2024-06-07 16:39:10.536073] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.943 [2024-06-07 16:39:10.536082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.943 qpair failed and we were unable to recover it. 00:30:43.943 [2024-06-07 16:39:10.536468] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.943 [2024-06-07 16:39:10.536477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.943 qpair failed and we were unable to recover it. 00:30:43.943 [2024-06-07 16:39:10.536644] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.943 [2024-06-07 16:39:10.536653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.943 qpair failed and we were unable to recover it. 00:30:43.943 [2024-06-07 16:39:10.537028] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.943 [2024-06-07 16:39:10.537037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.943 qpair failed and we were unable to recover it. 00:30:43.943 [2024-06-07 16:39:10.537378] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.943 [2024-06-07 16:39:10.537387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.943 qpair failed and we were unable to recover it. 
00:30:43.943 [2024-06-07 16:39:10.537664] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.943 [2024-06-07 16:39:10.537674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.943 qpair failed and we were unable to recover it. 00:30:43.943 [2024-06-07 16:39:10.537908] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.943 [2024-06-07 16:39:10.537917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.943 qpair failed and we were unable to recover it. 00:30:43.943 [2024-06-07 16:39:10.538150] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.943 [2024-06-07 16:39:10.538159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.943 qpair failed and we were unable to recover it. 00:30:43.943 [2024-06-07 16:39:10.538537] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.943 [2024-06-07 16:39:10.538546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.943 qpair failed and we were unable to recover it. 00:30:43.943 [2024-06-07 16:39:10.538932] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.943 [2024-06-07 16:39:10.538940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.943 qpair failed and we were unable to recover it. 
00:30:43.943 [2024-06-07 16:39:10.539209] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.943 [2024-06-07 16:39:10.539218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.943 qpair failed and we were unable to recover it. 00:30:43.943 [2024-06-07 16:39:10.539381] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.943 [2024-06-07 16:39:10.539390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.943 qpair failed and we were unable to recover it. 00:30:43.943 [2024-06-07 16:39:10.539600] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.943 [2024-06-07 16:39:10.539609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.943 qpair failed and we were unable to recover it. 00:30:43.943 [2024-06-07 16:39:10.539839] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.944 [2024-06-07 16:39:10.539847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.944 qpair failed and we were unable to recover it. 00:30:43.944 [2024-06-07 16:39:10.540169] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.944 [2024-06-07 16:39:10.540178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.944 qpair failed and we were unable to recover it. 
00:30:43.944 [2024-06-07 16:39:10.540586] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.944 [2024-06-07 16:39:10.540595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.944 qpair failed and we were unable to recover it. 00:30:43.944 [2024-06-07 16:39:10.540942] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.944 [2024-06-07 16:39:10.540950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.944 qpair failed and we were unable to recover it. 00:30:43.944 [2024-06-07 16:39:10.541283] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.944 [2024-06-07 16:39:10.541292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.944 qpair failed and we were unable to recover it. 00:30:43.944 [2024-06-07 16:39:10.541718] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.944 [2024-06-07 16:39:10.541727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.944 qpair failed and we were unable to recover it. 00:30:43.944 [2024-06-07 16:39:10.542079] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.944 [2024-06-07 16:39:10.542088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.944 qpair failed and we were unable to recover it. 
00:30:43.944 [2024-06-07 16:39:10.542280] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.944 [2024-06-07 16:39:10.542290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.944 qpair failed and we were unable to recover it. 00:30:43.944 [2024-06-07 16:39:10.542637] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.944 [2024-06-07 16:39:10.542646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.944 qpair failed and we were unable to recover it. 00:30:43.944 [2024-06-07 16:39:10.543037] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.944 [2024-06-07 16:39:10.543045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.944 qpair failed and we were unable to recover it. 00:30:43.944 [2024-06-07 16:39:10.543412] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.944 [2024-06-07 16:39:10.543421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.944 qpair failed and we were unable to recover it. 00:30:43.944 [2024-06-07 16:39:10.543783] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.944 [2024-06-07 16:39:10.543796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.944 qpair failed and we were unable to recover it. 
00:30:43.944 [2024-06-07 16:39:10.544211] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.944 [2024-06-07 16:39:10.544218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.944 qpair failed and we were unable to recover it. 00:30:43.944 [2024-06-07 16:39:10.544544] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.944 [2024-06-07 16:39:10.544551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.944 qpair failed and we were unable to recover it. 00:30:43.944 [2024-06-07 16:39:10.544895] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.944 [2024-06-07 16:39:10.544903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.944 qpair failed and we were unable to recover it. 00:30:43.944 [2024-06-07 16:39:10.545226] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.944 [2024-06-07 16:39:10.545235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.944 qpair failed and we were unable to recover it. 00:30:43.944 [2024-06-07 16:39:10.545544] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.944 [2024-06-07 16:39:10.545552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.944 qpair failed and we were unable to recover it. 
00:30:43.944 [2024-06-07 16:39:10.545931] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.944 [2024-06-07 16:39:10.545940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.944 qpair failed and we were unable to recover it. 00:30:43.944 [2024-06-07 16:39:10.546309] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.944 [2024-06-07 16:39:10.546317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.944 qpair failed and we were unable to recover it. 00:30:43.944 [2024-06-07 16:39:10.546623] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.944 [2024-06-07 16:39:10.546630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.944 qpair failed and we were unable to recover it. 00:30:43.944 [2024-06-07 16:39:10.546881] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.944 [2024-06-07 16:39:10.546889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.944 qpair failed and we were unable to recover it. 00:30:43.944 [2024-06-07 16:39:10.547233] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.944 [2024-06-07 16:39:10.547241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.944 qpair failed and we were unable to recover it. 
00:30:43.944 [2024-06-07 16:39:10.547509] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.944 [2024-06-07 16:39:10.547516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.944 qpair failed and we were unable to recover it. 00:30:43.944 [2024-06-07 16:39:10.547784] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.944 [2024-06-07 16:39:10.547791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.944 qpair failed and we were unable to recover it. 00:30:43.944 [2024-06-07 16:39:10.548070] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.944 [2024-06-07 16:39:10.548078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.944 qpair failed and we were unable to recover it. 00:30:43.944 [2024-06-07 16:39:10.548446] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.944 [2024-06-07 16:39:10.548454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.944 qpair failed and we were unable to recover it. 00:30:43.944 [2024-06-07 16:39:10.548859] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.944 [2024-06-07 16:39:10.548868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.944 qpair failed and we were unable to recover it. 
00:30:43.944 [2024-06-07 16:39:10.549228] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.944 [2024-06-07 16:39:10.549237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.944 qpair failed and we were unable to recover it. 00:30:43.944 [2024-06-07 16:39:10.549626] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.944 [2024-06-07 16:39:10.549634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.944 qpair failed and we were unable to recover it. 00:30:43.944 [2024-06-07 16:39:10.549994] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.944 [2024-06-07 16:39:10.550001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.944 qpair failed and we were unable to recover it. 00:30:43.944 [2024-06-07 16:39:10.550184] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.944 [2024-06-07 16:39:10.550192] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.944 qpair failed and we were unable to recover it. 00:30:43.944 [2024-06-07 16:39:10.550647] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.944 [2024-06-07 16:39:10.550655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.944 qpair failed and we were unable to recover it. 
00:30:43.944 [2024-06-07 16:39:10.550846] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.944 [2024-06-07 16:39:10.550853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.944 qpair failed and we were unable to recover it. 00:30:43.944 [2024-06-07 16:39:10.550972] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.944 [2024-06-07 16:39:10.550980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.944 qpair failed and we were unable to recover it. 00:30:43.944 [2024-06-07 16:39:10.551324] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.944 [2024-06-07 16:39:10.551332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.944 qpair failed and we were unable to recover it. 00:30:43.944 [2024-06-07 16:39:10.551499] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.944 [2024-06-07 16:39:10.551507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.944 qpair failed and we were unable to recover it. 00:30:43.944 [2024-06-07 16:39:10.551737] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.945 [2024-06-07 16:39:10.551746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.945 qpair failed and we were unable to recover it. 
00:30:43.945 [2024-06-07 16:39:10.552091] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.945 [2024-06-07 16:39:10.552098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.945 qpair failed and we were unable to recover it. 00:30:43.945 [2024-06-07 16:39:10.552469] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.945 [2024-06-07 16:39:10.552478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.945 qpair failed and we were unable to recover it. 00:30:43.945 [2024-06-07 16:39:10.552857] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.945 [2024-06-07 16:39:10.552865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.945 qpair failed and we were unable to recover it. 00:30:43.945 [2024-06-07 16:39:10.553156] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.945 [2024-06-07 16:39:10.553164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.945 qpair failed and we were unable to recover it. 00:30:43.945 [2024-06-07 16:39:10.553513] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.945 [2024-06-07 16:39:10.553521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.945 qpair failed and we were unable to recover it. 
00:30:43.945 [2024-06-07 16:39:10.553810] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.945 [2024-06-07 16:39:10.553818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.945 qpair failed and we were unable to recover it. 00:30:43.945 [2024-06-07 16:39:10.554179] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.945 [2024-06-07 16:39:10.554187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.945 qpair failed and we were unable to recover it. 00:30:43.945 [2024-06-07 16:39:10.554509] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.945 [2024-06-07 16:39:10.554518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.945 qpair failed and we were unable to recover it. 00:30:43.945 [2024-06-07 16:39:10.554913] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.945 [2024-06-07 16:39:10.554920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.945 qpair failed and we were unable to recover it. 00:30:43.945 [2024-06-07 16:39:10.555185] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.945 [2024-06-07 16:39:10.555192] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.945 qpair failed and we were unable to recover it. 
00:30:43.945 [2024-06-07 16:39:10.555631] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.945 [2024-06-07 16:39:10.555639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.945 qpair failed and we were unable to recover it. 00:30:43.945 [2024-06-07 16:39:10.556008] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.945 [2024-06-07 16:39:10.556017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.945 qpair failed and we were unable to recover it. 00:30:43.945 [2024-06-07 16:39:10.556382] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.945 [2024-06-07 16:39:10.556389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.945 qpair failed and we were unable to recover it. 00:30:43.945 [2024-06-07 16:39:10.556775] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.945 [2024-06-07 16:39:10.556783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.945 qpair failed and we were unable to recover it. 00:30:43.945 [2024-06-07 16:39:10.557029] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.945 [2024-06-07 16:39:10.557038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.945 qpair failed and we were unable to recover it. 
00:30:43.945 [2024-06-07 16:39:10.557361] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.945 [2024-06-07 16:39:10.557369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.945 qpair failed and we were unable to recover it. 00:30:43.945 [2024-06-07 16:39:10.557634] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.945 [2024-06-07 16:39:10.557642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.945 qpair failed and we were unable to recover it. 00:30:43.945 [2024-06-07 16:39:10.558024] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.945 [2024-06-07 16:39:10.558032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.945 qpair failed and we were unable to recover it. 00:30:43.945 [2024-06-07 16:39:10.558433] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.945 [2024-06-07 16:39:10.558442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.945 qpair failed and we were unable to recover it. 00:30:43.945 [2024-06-07 16:39:10.558827] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.945 [2024-06-07 16:39:10.558835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.945 qpair failed and we were unable to recover it. 
00:30:43.945 [2024-06-07 16:39:10.559221] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.945 [2024-06-07 16:39:10.559230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.945 qpair failed and we were unable to recover it. 00:30:43.945 [2024-06-07 16:39:10.559611] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.945 [2024-06-07 16:39:10.559619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.945 qpair failed and we were unable to recover it. 00:30:43.945 [2024-06-07 16:39:10.559977] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.945 [2024-06-07 16:39:10.559986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.945 qpair failed and we were unable to recover it. 00:30:43.945 [2024-06-07 16:39:10.560363] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.945 [2024-06-07 16:39:10.560371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.945 qpair failed and we were unable to recover it. 00:30:43.945 [2024-06-07 16:39:10.560744] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.945 [2024-06-07 16:39:10.560753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.945 qpair failed and we were unable to recover it. 
00:30:43.945 [2024-06-07 16:39:10.561120] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.945 [2024-06-07 16:39:10.561128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.945 qpair failed and we were unable to recover it. 00:30:43.945 [2024-06-07 16:39:10.561495] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.945 [2024-06-07 16:39:10.561503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.945 qpair failed and we were unable to recover it. 00:30:43.945 [2024-06-07 16:39:10.561742] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.945 [2024-06-07 16:39:10.561750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.945 qpair failed and we were unable to recover it. 00:30:43.945 [2024-06-07 16:39:10.562099] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.945 [2024-06-07 16:39:10.562107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.945 qpair failed and we were unable to recover it. 00:30:43.945 [2024-06-07 16:39:10.562470] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.945 [2024-06-07 16:39:10.562479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.945 qpair failed and we were unable to recover it. 
00:30:43.945 [2024-06-07 16:39:10.562748] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.945 [2024-06-07 16:39:10.562756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.945 qpair failed and we were unable to recover it. 00:30:43.945 [2024-06-07 16:39:10.563124] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.945 [2024-06-07 16:39:10.563131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.945 qpair failed and we were unable to recover it. 00:30:43.945 [2024-06-07 16:39:10.563510] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.945 [2024-06-07 16:39:10.563518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.945 qpair failed and we were unable to recover it. 00:30:43.945 [2024-06-07 16:39:10.563890] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.945 [2024-06-07 16:39:10.563898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.945 qpair failed and we were unable to recover it. 00:30:43.945 [2024-06-07 16:39:10.564271] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.945 [2024-06-07 16:39:10.564279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.945 qpair failed and we were unable to recover it. 
00:30:43.945 [2024-06-07 16:39:10.564714] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.946 [2024-06-07 16:39:10.564721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.946 qpair failed and we were unable to recover it.
00:30:43.946 [2024-06-07 16:39:10.564959] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.946 [2024-06-07 16:39:10.564966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.946 qpair failed and we were unable to recover it.
00:30:43.946 [2024-06-07 16:39:10.565325] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.946 [2024-06-07 16:39:10.565332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.946 qpair failed and we were unable to recover it.
00:30:43.946 [2024-06-07 16:39:10.565718] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.946 [2024-06-07 16:39:10.565726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.946 qpair failed and we were unable to recover it.
00:30:43.946 [2024-06-07 16:39:10.566083] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.946 [2024-06-07 16:39:10.566090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.946 qpair failed and we were unable to recover it.
00:30:43.946 [2024-06-07 16:39:10.566460] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.946 [2024-06-07 16:39:10.566468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.946 qpair failed and we were unable to recover it.
00:30:43.946 [2024-06-07 16:39:10.566715] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.946 [2024-06-07 16:39:10.566723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.946 qpair failed and we were unable to recover it.
00:30:43.946 [2024-06-07 16:39:10.567088] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.946 [2024-06-07 16:39:10.567096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.946 qpair failed and we were unable to recover it.
00:30:43.946 [2024-06-07 16:39:10.567462] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.946 [2024-06-07 16:39:10.567470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.946 qpair failed and we were unable to recover it.
00:30:43.946 [2024-06-07 16:39:10.567851] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.946 [2024-06-07 16:39:10.567858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.946 qpair failed and we were unable to recover it.
00:30:43.946 [2024-06-07 16:39:10.568220] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.946 [2024-06-07 16:39:10.568229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.946 qpair failed and we were unable to recover it.
00:30:43.946 [2024-06-07 16:39:10.568615] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.946 [2024-06-07 16:39:10.568623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.946 qpair failed and we were unable to recover it.
00:30:43.946 [2024-06-07 16:39:10.568987] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.946 [2024-06-07 16:39:10.568995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.946 qpair failed and we were unable to recover it.
00:30:43.946 [2024-06-07 16:39:10.569384] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.946 [2024-06-07 16:39:10.569392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.946 qpair failed and we were unable to recover it.
00:30:43.946 [2024-06-07 16:39:10.569763] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.946 [2024-06-07 16:39:10.569771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.946 qpair failed and we were unable to recover it.
00:30:43.946 [2024-06-07 16:39:10.570145] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.946 [2024-06-07 16:39:10.570152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.946 qpair failed and we were unable to recover it.
00:30:43.946 [2024-06-07 16:39:10.570517] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.946 [2024-06-07 16:39:10.570524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.946 qpair failed and we were unable to recover it.
00:30:43.946 [2024-06-07 16:39:10.570748] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.946 [2024-06-07 16:39:10.570757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.946 qpair failed and we were unable to recover it.
00:30:43.946 [2024-06-07 16:39:10.571151] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.946 [2024-06-07 16:39:10.571159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.946 qpair failed and we were unable to recover it.
00:30:43.946 [2024-06-07 16:39:10.571515] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.946 [2024-06-07 16:39:10.571526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.946 qpair failed and we were unable to recover it.
00:30:43.946 [2024-06-07 16:39:10.571916] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.946 [2024-06-07 16:39:10.571924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.946 qpair failed and we were unable to recover it.
00:30:43.946 [2024-06-07 16:39:10.572277] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.946 [2024-06-07 16:39:10.572285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.946 qpair failed and we were unable to recover it.
00:30:43.946 [2024-06-07 16:39:10.572651] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.946 [2024-06-07 16:39:10.572660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.946 qpair failed and we were unable to recover it.
00:30:43.946 [2024-06-07 16:39:10.573028] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.946 [2024-06-07 16:39:10.573036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.946 qpair failed and we were unable to recover it.
00:30:43.946 [2024-06-07 16:39:10.573400] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.946 [2024-06-07 16:39:10.573410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.946 qpair failed and we were unable to recover it.
00:30:43.946 [2024-06-07 16:39:10.573787] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.946 [2024-06-07 16:39:10.573794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.946 qpair failed and we were unable to recover it.
00:30:43.946 [2024-06-07 16:39:10.574174] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.946 [2024-06-07 16:39:10.574181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.946 qpair failed and we were unable to recover it.
00:30:43.946 [2024-06-07 16:39:10.574636] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.946 [2024-06-07 16:39:10.574665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.946 qpair failed and we were unable to recover it.
00:30:43.946 [2024-06-07 16:39:10.575048] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.946 [2024-06-07 16:39:10.575058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.946 qpair failed and we were unable to recover it.
00:30:43.946 [2024-06-07 16:39:10.575409] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.946 [2024-06-07 16:39:10.575417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.946 qpair failed and we were unable to recover it.
00:30:43.946 [2024-06-07 16:39:10.575791] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.946 [2024-06-07 16:39:10.575799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.946 qpair failed and we were unable to recover it.
00:30:43.946 [2024-06-07 16:39:10.576166] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.946 [2024-06-07 16:39:10.576175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.946 qpair failed and we were unable to recover it.
00:30:43.946 [2024-06-07 16:39:10.576645] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.946 [2024-06-07 16:39:10.576674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.946 qpair failed and we were unable to recover it.
00:30:43.946 [2024-06-07 16:39:10.577029] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.946 [2024-06-07 16:39:10.577039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.946 qpair failed and we were unable to recover it.
00:30:43.946 [2024-06-07 16:39:10.577433] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.946 [2024-06-07 16:39:10.577442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.946 qpair failed and we were unable to recover it.
00:30:43.946 [2024-06-07 16:39:10.577859] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.946 [2024-06-07 16:39:10.577867] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.946 qpair failed and we were unable to recover it.
00:30:43.946 [2024-06-07 16:39:10.578234] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.946 [2024-06-07 16:39:10.578242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.946 qpair failed and we were unable to recover it.
00:30:43.946 [2024-06-07 16:39:10.578521] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.946 [2024-06-07 16:39:10.578529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.947 qpair failed and we were unable to recover it.
00:30:43.947 [2024-06-07 16:39:10.578764] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.947 [2024-06-07 16:39:10.578771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.947 qpair failed and we were unable to recover it.
00:30:43.947 [2024-06-07 16:39:10.579044] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.947 [2024-06-07 16:39:10.579051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.947 qpair failed and we were unable to recover it.
00:30:43.947 [2024-06-07 16:39:10.579417] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.947 [2024-06-07 16:39:10.579425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.947 qpair failed and we were unable to recover it.
00:30:43.947 [2024-06-07 16:39:10.579802] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.947 [2024-06-07 16:39:10.579810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.947 qpair failed and we were unable to recover it.
00:30:43.947 [2024-06-07 16:39:10.580181] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.947 [2024-06-07 16:39:10.580189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.947 qpair failed and we were unable to recover it.
00:30:43.947 [2024-06-07 16:39:10.580526] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.947 [2024-06-07 16:39:10.580536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.947 qpair failed and we were unable to recover it.
00:30:43.947 [2024-06-07 16:39:10.580825] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.947 [2024-06-07 16:39:10.580833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.947 qpair failed and we were unable to recover it.
00:30:43.947 [2024-06-07 16:39:10.581236] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.947 [2024-06-07 16:39:10.581245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.947 qpair failed and we were unable to recover it.
00:30:43.947 [2024-06-07 16:39:10.581624] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.947 [2024-06-07 16:39:10.581632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.947 qpair failed and we were unable to recover it.
00:30:43.947 [2024-06-07 16:39:10.582000] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.947 [2024-06-07 16:39:10.582009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.947 qpair failed and we were unable to recover it.
00:30:43.947 [2024-06-07 16:39:10.582375] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.947 [2024-06-07 16:39:10.582382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.947 qpair failed and we were unable to recover it.
00:30:43.947 [2024-06-07 16:39:10.582758] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.947 [2024-06-07 16:39:10.582765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.947 qpair failed and we were unable to recover it.
00:30:43.947 [2024-06-07 16:39:10.583126] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.947 [2024-06-07 16:39:10.583134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.947 qpair failed and we were unable to recover it.
00:30:43.947 [2024-06-07 16:39:10.583363] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.947 [2024-06-07 16:39:10.583371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.947 qpair failed and we were unable to recover it.
00:30:43.947 [2024-06-07 16:39:10.583703] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.947 [2024-06-07 16:39:10.583711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.947 qpair failed and we were unable to recover it.
00:30:43.947 [2024-06-07 16:39:10.584099] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.947 [2024-06-07 16:39:10.584108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.947 qpair failed and we were unable to recover it.
00:30:43.947 [2024-06-07 16:39:10.584481] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.947 [2024-06-07 16:39:10.584496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.947 qpair failed and we were unable to recover it.
00:30:43.947 [2024-06-07 16:39:10.584879] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.947 [2024-06-07 16:39:10.584887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.947 qpair failed and we were unable to recover it.
00:30:43.947 [2024-06-07 16:39:10.585253] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.947 [2024-06-07 16:39:10.585261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.947 qpair failed and we were unable to recover it.
00:30:43.947 [2024-06-07 16:39:10.585536] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.947 [2024-06-07 16:39:10.585544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.947 qpair failed and we were unable to recover it.
00:30:43.947 [2024-06-07 16:39:10.585910] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.947 [2024-06-07 16:39:10.585918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.947 qpair failed and we were unable to recover it.
00:30:43.947 [2024-06-07 16:39:10.586284] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.947 [2024-06-07 16:39:10.586293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.947 qpair failed and we were unable to recover it.
00:30:43.947 [2024-06-07 16:39:10.586706] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.947 [2024-06-07 16:39:10.586714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.947 qpair failed and we were unable to recover it.
00:30:43.947 [2024-06-07 16:39:10.587108] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.947 [2024-06-07 16:39:10.587115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.947 qpair failed and we were unable to recover it.
00:30:43.947 [2024-06-07 16:39:10.587482] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.947 [2024-06-07 16:39:10.587490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.947 qpair failed and we were unable to recover it.
00:30:43.947 [2024-06-07 16:39:10.587877] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.947 [2024-06-07 16:39:10.587884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.947 qpair failed and we were unable to recover it.
00:30:43.947 [2024-06-07 16:39:10.588114] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.947 [2024-06-07 16:39:10.588122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.947 qpair failed and we were unable to recover it.
00:30:43.947 [2024-06-07 16:39:10.588385] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.947 [2024-06-07 16:39:10.588393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.947 qpair failed and we were unable to recover it.
00:30:43.947 [2024-06-07 16:39:10.588750] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.947 [2024-06-07 16:39:10.588759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.947 qpair failed and we were unable to recover it.
00:30:43.947 [2024-06-07 16:39:10.588987] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.947 [2024-06-07 16:39:10.588995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.947 qpair failed and we were unable to recover it.
00:30:43.947 [2024-06-07 16:39:10.589362] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.947 [2024-06-07 16:39:10.589370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.947 qpair failed and we were unable to recover it.
00:30:43.947 [2024-06-07 16:39:10.589820] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.947 [2024-06-07 16:39:10.589830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.947 qpair failed and we were unable to recover it.
00:30:43.947 [2024-06-07 16:39:10.590029] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.947 [2024-06-07 16:39:10.590039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.947 qpair failed and we were unable to recover it.
00:30:43.947 [2024-06-07 16:39:10.590327] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.947 [2024-06-07 16:39:10.590336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.947 qpair failed and we were unable to recover it.
00:30:43.947 [2024-06-07 16:39:10.590710] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.947 [2024-06-07 16:39:10.590718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.947 qpair failed and we were unable to recover it.
00:30:43.947 [2024-06-07 16:39:10.591109] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.947 [2024-06-07 16:39:10.591117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.947 qpair failed and we were unable to recover it.
00:30:43.947 [2024-06-07 16:39:10.591506] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.947 [2024-06-07 16:39:10.591514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.947 qpair failed and we were unable to recover it.
00:30:43.947 [2024-06-07 16:39:10.591900] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.948 [2024-06-07 16:39:10.591908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.948 qpair failed and we were unable to recover it.
00:30:43.948 [2024-06-07 16:39:10.592275] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.948 [2024-06-07 16:39:10.592282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.948 qpair failed and we were unable to recover it.
00:30:43.948 [2024-06-07 16:39:10.592555] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.948 [2024-06-07 16:39:10.592563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.948 qpair failed and we were unable to recover it.
00:30:43.948 [2024-06-07 16:39:10.592904] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.948 [2024-06-07 16:39:10.592912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.948 qpair failed and we were unable to recover it.
00:30:43.948 [2024-06-07 16:39:10.593276] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.948 [2024-06-07 16:39:10.593283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.948 qpair failed and we were unable to recover it.
00:30:43.948 [2024-06-07 16:39:10.593738] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.948 [2024-06-07 16:39:10.593746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.948 qpair failed and we were unable to recover it.
00:30:43.948 [2024-06-07 16:39:10.594128] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.948 [2024-06-07 16:39:10.594137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.948 qpair failed and we were unable to recover it.
00:30:43.948 [2024-06-07 16:39:10.594623] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.948 [2024-06-07 16:39:10.594653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.948 qpair failed and we were unable to recover it.
00:30:43.948 [2024-06-07 16:39:10.595006] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.948 [2024-06-07 16:39:10.595016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.948 qpair failed and we were unable to recover it.
00:30:43.948 [2024-06-07 16:39:10.595422] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.948 [2024-06-07 16:39:10.595430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.948 qpair failed and we were unable to recover it.
00:30:43.948 [2024-06-07 16:39:10.595679] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.948 [2024-06-07 16:39:10.595688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.948 qpair failed and we were unable to recover it.
00:30:43.948 [2024-06-07 16:39:10.596064] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.948 [2024-06-07 16:39:10.596071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.948 qpair failed and we were unable to recover it.
00:30:43.948 [2024-06-07 16:39:10.596441] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.948 [2024-06-07 16:39:10.596449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.948 qpair failed and we were unable to recover it.
00:30:43.948 [2024-06-07 16:39:10.596797] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.948 [2024-06-07 16:39:10.596805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.948 qpair failed and we were unable to recover it.
00:30:43.948 [2024-06-07 16:39:10.597199] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.948 [2024-06-07 16:39:10.597206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.948 qpair failed and we were unable to recover it.
00:30:43.948 [2024-06-07 16:39:10.597594] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.948 [2024-06-07 16:39:10.597603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.948 qpair failed and we were unable to recover it.
00:30:43.948 [2024-06-07 16:39:10.597880] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.948 [2024-06-07 16:39:10.597888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.948 qpair failed and we were unable to recover it.
00:30:43.948 [2024-06-07 16:39:10.598120] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.948 [2024-06-07 16:39:10.598129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.948 qpair failed and we were unable to recover it.
00:30:43.948 [2024-06-07 16:39:10.598387] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.948 [2024-06-07 16:39:10.598394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.948 qpair failed and we were unable to recover it.
00:30:43.948 [2024-06-07 16:39:10.598759] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.948 [2024-06-07 16:39:10.598767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.948 qpair failed and we were unable to recover it.
00:30:43.948 [2024-06-07 16:39:10.598962] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.948 [2024-06-07 16:39:10.598972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.948 qpair failed and we were unable to recover it. 00:30:43.948 [2024-06-07 16:39:10.599337] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.948 [2024-06-07 16:39:10.599345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.948 qpair failed and we were unable to recover it. 00:30:43.948 [2024-06-07 16:39:10.599714] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.948 [2024-06-07 16:39:10.599722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.948 qpair failed and we were unable to recover it. 00:30:43.948 [2024-06-07 16:39:10.600154] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.948 [2024-06-07 16:39:10.600162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.948 qpair failed and we were unable to recover it. 00:30:43.948 [2024-06-07 16:39:10.600527] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.948 [2024-06-07 16:39:10.600539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.948 qpair failed and we were unable to recover it. 
00:30:43.948 [2024-06-07 16:39:10.600905] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.948 [2024-06-07 16:39:10.600913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.948 qpair failed and we were unable to recover it. 00:30:43.948 [2024-06-07 16:39:10.601302] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.948 [2024-06-07 16:39:10.601310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.948 qpair failed and we were unable to recover it. 00:30:43.948 [2024-06-07 16:39:10.601679] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.949 [2024-06-07 16:39:10.601687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.949 qpair failed and we were unable to recover it. 00:30:43.949 [2024-06-07 16:39:10.602075] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.949 [2024-06-07 16:39:10.602084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.949 qpair failed and we were unable to recover it. 00:30:43.949 [2024-06-07 16:39:10.602447] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.949 [2024-06-07 16:39:10.602456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.949 qpair failed and we were unable to recover it. 
00:30:43.949 [2024-06-07 16:39:10.602838] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.949 [2024-06-07 16:39:10.602846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.949 qpair failed and we were unable to recover it. 00:30:43.949 [2024-06-07 16:39:10.603206] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.949 [2024-06-07 16:39:10.603213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.949 qpair failed and we were unable to recover it. 00:30:43.949 [2024-06-07 16:39:10.603476] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.949 [2024-06-07 16:39:10.603484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.949 qpair failed and we were unable to recover it. 00:30:43.949 [2024-06-07 16:39:10.603868] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.949 [2024-06-07 16:39:10.603875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.949 qpair failed and we were unable to recover it. 00:30:43.949 [2024-06-07 16:39:10.604230] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.949 [2024-06-07 16:39:10.604238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.949 qpair failed and we were unable to recover it. 
00:30:43.949 [2024-06-07 16:39:10.604616] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.949 [2024-06-07 16:39:10.604624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.949 qpair failed and we were unable to recover it. 00:30:43.949 [2024-06-07 16:39:10.604990] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.949 [2024-06-07 16:39:10.604998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.949 qpair failed and we were unable to recover it. 00:30:43.949 [2024-06-07 16:39:10.605187] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.949 [2024-06-07 16:39:10.605195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.949 qpair failed and we were unable to recover it. 00:30:43.949 [2024-06-07 16:39:10.605533] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.949 [2024-06-07 16:39:10.605542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.949 qpair failed and we were unable to recover it. 00:30:43.949 [2024-06-07 16:39:10.605913] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.949 [2024-06-07 16:39:10.605922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.949 qpair failed and we were unable to recover it. 
00:30:43.949 [2024-06-07 16:39:10.606272] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.949 [2024-06-07 16:39:10.606281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.949 qpair failed and we were unable to recover it. 00:30:43.949 [2024-06-07 16:39:10.606645] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.949 [2024-06-07 16:39:10.606654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.949 qpair failed and we were unable to recover it. 00:30:43.949 [2024-06-07 16:39:10.606941] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.949 [2024-06-07 16:39:10.606949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.949 qpair failed and we were unable to recover it. 00:30:43.949 [2024-06-07 16:39:10.607309] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.949 [2024-06-07 16:39:10.607317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.949 qpair failed and we were unable to recover it. 00:30:43.949 [2024-06-07 16:39:10.607671] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.949 [2024-06-07 16:39:10.607680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.949 qpair failed and we were unable to recover it. 
00:30:43.949 [2024-06-07 16:39:10.608039] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.949 [2024-06-07 16:39:10.608046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.949 qpair failed and we were unable to recover it. 00:30:43.949 [2024-06-07 16:39:10.608274] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.949 [2024-06-07 16:39:10.608281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.949 qpair failed and we were unable to recover it. 00:30:43.949 [2024-06-07 16:39:10.608718] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.949 [2024-06-07 16:39:10.608726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.949 qpair failed and we were unable to recover it. 00:30:43.949 [2024-06-07 16:39:10.609086] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.949 [2024-06-07 16:39:10.609094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.949 qpair failed and we were unable to recover it. 00:30:43.949 [2024-06-07 16:39:10.609452] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.949 [2024-06-07 16:39:10.609459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.949 qpair failed and we were unable to recover it. 
00:30:43.949 [2024-06-07 16:39:10.609855] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.949 [2024-06-07 16:39:10.609863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.949 qpair failed and we were unable to recover it. 00:30:43.949 [2024-06-07 16:39:10.610255] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.949 [2024-06-07 16:39:10.610262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.949 qpair failed and we were unable to recover it. 00:30:43.949 [2024-06-07 16:39:10.610626] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.949 [2024-06-07 16:39:10.610634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.949 qpair failed and we were unable to recover it. 00:30:43.949 [2024-06-07 16:39:10.611000] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.949 [2024-06-07 16:39:10.611007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.949 qpair failed and we were unable to recover it. 00:30:43.949 [2024-06-07 16:39:10.611360] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.949 [2024-06-07 16:39:10.611368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.949 qpair failed and we were unable to recover it. 
00:30:43.949 [2024-06-07 16:39:10.611764] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.949 [2024-06-07 16:39:10.611772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.949 qpair failed and we were unable to recover it. 00:30:43.949 [2024-06-07 16:39:10.612143] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.949 [2024-06-07 16:39:10.612151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.949 qpair failed and we were unable to recover it. 00:30:43.949 [2024-06-07 16:39:10.612647] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.949 [2024-06-07 16:39:10.612676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.949 qpair failed and we were unable to recover it. 00:30:43.949 [2024-06-07 16:39:10.613063] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.949 [2024-06-07 16:39:10.613073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.949 qpair failed and we were unable to recover it. 00:30:43.949 [2024-06-07 16:39:10.613439] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.949 [2024-06-07 16:39:10.613447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.949 qpair failed and we were unable to recover it. 
00:30:43.949 [2024-06-07 16:39:10.613816] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.949 [2024-06-07 16:39:10.613824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.949 qpair failed and we were unable to recover it. 00:30:43.949 [2024-06-07 16:39:10.614192] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.949 [2024-06-07 16:39:10.614200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.949 qpair failed and we were unable to recover it. 00:30:43.949 [2024-06-07 16:39:10.614574] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.949 [2024-06-07 16:39:10.614584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.950 qpair failed and we were unable to recover it. 00:30:43.950 [2024-06-07 16:39:10.614934] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.950 [2024-06-07 16:39:10.614943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.950 qpair failed and we were unable to recover it. 00:30:43.950 [2024-06-07 16:39:10.615313] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.950 [2024-06-07 16:39:10.615324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.950 qpair failed and we were unable to recover it. 
00:30:43.950 [2024-06-07 16:39:10.615692] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.950 [2024-06-07 16:39:10.615700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.950 qpair failed and we were unable to recover it. 00:30:43.950 [2024-06-07 16:39:10.616083] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.950 [2024-06-07 16:39:10.616091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.950 qpair failed and we were unable to recover it. 00:30:43.950 [2024-06-07 16:39:10.616458] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.950 [2024-06-07 16:39:10.616466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.950 qpair failed and we were unable to recover it. 00:30:43.950 [2024-06-07 16:39:10.616845] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.950 [2024-06-07 16:39:10.616853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.950 qpair failed and we were unable to recover it. 00:30:43.950 [2024-06-07 16:39:10.617080] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.950 [2024-06-07 16:39:10.617088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.950 qpair failed and we were unable to recover it. 
00:30:43.950 [2024-06-07 16:39:10.617486] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.950 [2024-06-07 16:39:10.617494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.950 qpair failed and we were unable to recover it. 00:30:43.950 [2024-06-07 16:39:10.617746] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.950 [2024-06-07 16:39:10.617754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.950 qpair failed and we were unable to recover it. 00:30:43.950 [2024-06-07 16:39:10.618119] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.950 [2024-06-07 16:39:10.618127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.950 qpair failed and we were unable to recover it. 00:30:43.950 [2024-06-07 16:39:10.618491] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.950 [2024-06-07 16:39:10.618499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.950 qpair failed and we were unable to recover it. 00:30:43.950 [2024-06-07 16:39:10.618874] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.950 [2024-06-07 16:39:10.618882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.950 qpair failed and we were unable to recover it. 
00:30:43.950 [2024-06-07 16:39:10.619242] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.950 [2024-06-07 16:39:10.619251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.950 qpair failed and we were unable to recover it. 00:30:43.950 [2024-06-07 16:39:10.619623] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.950 [2024-06-07 16:39:10.619631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.950 qpair failed and we were unable to recover it. 00:30:43.950 [2024-06-07 16:39:10.619999] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.950 [2024-06-07 16:39:10.620006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.950 qpair failed and we were unable to recover it. 00:30:43.950 [2024-06-07 16:39:10.620314] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.950 [2024-06-07 16:39:10.620322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.950 qpair failed and we were unable to recover it. 00:30:43.950 [2024-06-07 16:39:10.620489] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.950 [2024-06-07 16:39:10.620499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.950 qpair failed and we were unable to recover it. 
00:30:43.950 [2024-06-07 16:39:10.620773] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.950 [2024-06-07 16:39:10.620781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.950 qpair failed and we were unable to recover it. 00:30:43.950 [2024-06-07 16:39:10.621142] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.950 [2024-06-07 16:39:10.621150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.950 qpair failed and we were unable to recover it. 00:30:43.950 [2024-06-07 16:39:10.621572] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.950 [2024-06-07 16:39:10.621580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.950 qpair failed and we were unable to recover it. 00:30:43.950 [2024-06-07 16:39:10.621905] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.950 [2024-06-07 16:39:10.621913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.950 qpair failed and we were unable to recover it. 00:30:43.950 [2024-06-07 16:39:10.622363] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.950 [2024-06-07 16:39:10.622371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.950 qpair failed and we were unable to recover it. 
00:30:43.950 [2024-06-07 16:39:10.622742] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.950 [2024-06-07 16:39:10.622750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.950 qpair failed and we were unable to recover it. 00:30:43.950 [2024-06-07 16:39:10.623143] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.950 [2024-06-07 16:39:10.623151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.950 qpair failed and we were unable to recover it. 00:30:43.950 [2024-06-07 16:39:10.623504] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.950 [2024-06-07 16:39:10.623512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.950 qpair failed and we were unable to recover it. 00:30:43.950 [2024-06-07 16:39:10.623877] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.950 [2024-06-07 16:39:10.623886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.950 qpair failed and we were unable to recover it. 00:30:43.950 [2024-06-07 16:39:10.624079] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.950 [2024-06-07 16:39:10.624088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.950 qpair failed and we were unable to recover it. 
00:30:43.950 [2024-06-07 16:39:10.624420] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.950 [2024-06-07 16:39:10.624430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.950 qpair failed and we were unable to recover it. 00:30:43.950 [2024-06-07 16:39:10.624860] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.950 [2024-06-07 16:39:10.624868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.950 qpair failed and we were unable to recover it. 00:30:43.950 [2024-06-07 16:39:10.625234] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.950 [2024-06-07 16:39:10.625242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.950 qpair failed and we were unable to recover it. 00:30:43.950 [2024-06-07 16:39:10.625687] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.950 [2024-06-07 16:39:10.625695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.950 qpair failed and we were unable to recover it. 00:30:43.950 [2024-06-07 16:39:10.625971] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.950 [2024-06-07 16:39:10.625978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.950 qpair failed and we were unable to recover it. 
00:30:43.950 [2024-06-07 16:39:10.626339] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.950 [2024-06-07 16:39:10.626347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.950 qpair failed and we were unable to recover it. 00:30:43.950 [2024-06-07 16:39:10.626719] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.950 [2024-06-07 16:39:10.626727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.951 qpair failed and we were unable to recover it. 00:30:43.951 [2024-06-07 16:39:10.627114] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.951 [2024-06-07 16:39:10.627122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.951 qpair failed and we were unable to recover it. 00:30:43.951 [2024-06-07 16:39:10.627355] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.951 [2024-06-07 16:39:10.627362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.951 qpair failed and we were unable to recover it. 00:30:43.951 [2024-06-07 16:39:10.627721] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.951 [2024-06-07 16:39:10.627729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.951 qpair failed and we were unable to recover it. 
00:30:43.951 [2024-06-07 16:39:10.628088] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.951 [2024-06-07 16:39:10.628096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.951 qpair failed and we were unable to recover it. 00:30:43.951 [2024-06-07 16:39:10.628459] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.951 [2024-06-07 16:39:10.628468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.951 qpair failed and we were unable to recover it. 00:30:43.951 [2024-06-07 16:39:10.628735] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.951 [2024-06-07 16:39:10.628742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.951 qpair failed and we were unable to recover it. 00:30:43.951 [2024-06-07 16:39:10.629098] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.951 [2024-06-07 16:39:10.629106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.951 qpair failed and we were unable to recover it. 00:30:43.951 [2024-06-07 16:39:10.629346] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.951 [2024-06-07 16:39:10.629355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.951 qpair failed and we were unable to recover it. 
00:30:43.951 [2024-06-07 16:39:10.629728] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.951 [2024-06-07 16:39:10.629737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.951 qpair failed and we were unable to recover it.
00:30:43.951 [2024-06-07 16:39:10.630090] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.951 [2024-06-07 16:39:10.630097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.951 qpair failed and we were unable to recover it.
00:30:43.951 [2024-06-07 16:39:10.630452] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.951 [2024-06-07 16:39:10.630460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.951 qpair failed and we were unable to recover it.
00:30:43.951 [2024-06-07 16:39:10.630770] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.951 [2024-06-07 16:39:10.630779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.951 qpair failed and we were unable to recover it.
00:30:43.951 [2024-06-07 16:39:10.631159] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.951 [2024-06-07 16:39:10.631167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.951 qpair failed and we were unable to recover it.
00:30:43.951 [2024-06-07 16:39:10.631516] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.951 [2024-06-07 16:39:10.631524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.951 qpair failed and we were unable to recover it.
00:30:43.951 [2024-06-07 16:39:10.631899] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.951 [2024-06-07 16:39:10.631907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.951 qpair failed and we were unable to recover it.
00:30:43.951 [2024-06-07 16:39:10.632275] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.951 [2024-06-07 16:39:10.632283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.951 qpair failed and we were unable to recover it.
00:30:43.951 [2024-06-07 16:39:10.632731] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.951 [2024-06-07 16:39:10.632739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.951 qpair failed and we were unable to recover it.
00:30:43.951 [2024-06-07 16:39:10.633122] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.951 [2024-06-07 16:39:10.633130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.951 qpair failed and we were unable to recover it.
00:30:43.951 [2024-06-07 16:39:10.633491] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.951 [2024-06-07 16:39:10.633498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.951 qpair failed and we were unable to recover it.
00:30:43.951 [2024-06-07 16:39:10.633890] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.951 [2024-06-07 16:39:10.633897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.951 qpair failed and we were unable to recover it.
00:30:43.951 [2024-06-07 16:39:10.634262] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.951 [2024-06-07 16:39:10.634270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.951 qpair failed and we were unable to recover it.
00:30:43.951 [2024-06-07 16:39:10.634625] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.951 [2024-06-07 16:39:10.634634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.951 qpair failed and we were unable to recover it.
00:30:43.951 [2024-06-07 16:39:10.635071] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.951 [2024-06-07 16:39:10.635079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.951 qpair failed and we were unable to recover it.
00:30:43.951 [2024-06-07 16:39:10.635557] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.951 [2024-06-07 16:39:10.635586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.951 qpair failed and we were unable to recover it.
00:30:43.951 [2024-06-07 16:39:10.635959] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.951 [2024-06-07 16:39:10.635969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.951 qpair failed and we were unable to recover it.
00:30:43.951 [2024-06-07 16:39:10.636244] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.951 [2024-06-07 16:39:10.636251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.951 qpair failed and we were unable to recover it.
00:30:43.951 [2024-06-07 16:39:10.636620] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.951 [2024-06-07 16:39:10.636628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.951 qpair failed and we were unable to recover it.
00:30:43.951 [2024-06-07 16:39:10.636989] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.951 [2024-06-07 16:39:10.636997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.951 qpair failed and we were unable to recover it.
00:30:43.951 [2024-06-07 16:39:10.637367] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.951 [2024-06-07 16:39:10.637375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.951 qpair failed and we were unable to recover it.
00:30:43.951 [2024-06-07 16:39:10.637730] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.951 [2024-06-07 16:39:10.637739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.951 qpair failed and we were unable to recover it.
00:30:43.951 [2024-06-07 16:39:10.637944] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.951 [2024-06-07 16:39:10.637954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.951 qpair failed and we were unable to recover it.
00:30:43.951 [2024-06-07 16:39:10.638283] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.951 [2024-06-07 16:39:10.638291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.951 qpair failed and we were unable to recover it.
00:30:43.951 [2024-06-07 16:39:10.638650] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.951 [2024-06-07 16:39:10.638658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.951 qpair failed and we were unable to recover it.
00:30:43.951 [2024-06-07 16:39:10.639021] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.951 [2024-06-07 16:39:10.639029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.951 qpair failed and we were unable to recover it.
00:30:43.951 [2024-06-07 16:39:10.639391] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.951 [2024-06-07 16:39:10.639399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.951 qpair failed and we were unable to recover it.
00:30:43.951 [2024-06-07 16:39:10.639655] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.951 [2024-06-07 16:39:10.639663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.951 qpair failed and we were unable to recover it.
00:30:43.951 [2024-06-07 16:39:10.640031] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.951 [2024-06-07 16:39:10.640038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.951 qpair failed and we were unable to recover it.
00:30:43.952 [2024-06-07 16:39:10.640425] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.952 [2024-06-07 16:39:10.640433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.952 qpair failed and we were unable to recover it.
00:30:43.952 [2024-06-07 16:39:10.640827] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.952 [2024-06-07 16:39:10.640835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.952 qpair failed and we were unable to recover it.
00:30:43.952 [2024-06-07 16:39:10.641198] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.952 [2024-06-07 16:39:10.641206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.952 qpair failed and we were unable to recover it.
00:30:43.952 [2024-06-07 16:39:10.641578] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.952 [2024-06-07 16:39:10.641586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.952 qpair failed and we were unable to recover it.
00:30:43.952 [2024-06-07 16:39:10.641961] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.952 [2024-06-07 16:39:10.641969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.952 qpair failed and we were unable to recover it.
00:30:43.952 [2024-06-07 16:39:10.642333] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.952 [2024-06-07 16:39:10.642341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.952 qpair failed and we were unable to recover it.
00:30:43.952 [2024-06-07 16:39:10.642715] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.952 [2024-06-07 16:39:10.642724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.952 qpair failed and we were unable to recover it.
00:30:43.952 [2024-06-07 16:39:10.642977] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.952 [2024-06-07 16:39:10.642985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.952 qpair failed and we were unable to recover it.
00:30:43.952 [2024-06-07 16:39:10.643145] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.952 [2024-06-07 16:39:10.643153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.952 qpair failed and we were unable to recover it.
00:30:43.952 [2024-06-07 16:39:10.643527] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.952 [2024-06-07 16:39:10.643535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.952 qpair failed and we were unable to recover it.
00:30:43.952 [2024-06-07 16:39:10.643916] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.952 [2024-06-07 16:39:10.643926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.952 qpair failed and we were unable to recover it.
00:30:43.952 [2024-06-07 16:39:10.644291] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.952 [2024-06-07 16:39:10.644300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.952 qpair failed and we were unable to recover it.
00:30:43.952 [2024-06-07 16:39:10.644676] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.952 [2024-06-07 16:39:10.644685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.952 qpair failed and we were unable to recover it.
00:30:43.952 [2024-06-07 16:39:10.645053] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.952 [2024-06-07 16:39:10.645061] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.952 qpair failed and we were unable to recover it.
00:30:43.952 [2024-06-07 16:39:10.645411] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.952 [2024-06-07 16:39:10.645419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.952 qpair failed and we were unable to recover it.
00:30:43.952 [2024-06-07 16:39:10.645666] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.952 [2024-06-07 16:39:10.645675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.952 qpair failed and we were unable to recover it.
00:30:43.952 [2024-06-07 16:39:10.645948] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.952 [2024-06-07 16:39:10.645955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.952 qpair failed and we were unable to recover it.
00:30:43.952 [2024-06-07 16:39:10.646318] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.952 [2024-06-07 16:39:10.646326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.952 qpair failed and we were unable to recover it.
00:30:43.952 [2024-06-07 16:39:10.646681] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.952 [2024-06-07 16:39:10.646689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.952 qpair failed and we were unable to recover it.
00:30:43.952 [2024-06-07 16:39:10.647004] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.952 [2024-06-07 16:39:10.647011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.952 qpair failed and we were unable to recover it.
00:30:43.952 [2024-06-07 16:39:10.647372] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.952 [2024-06-07 16:39:10.647380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.952 qpair failed and we were unable to recover it.
00:30:43.952 [2024-06-07 16:39:10.647757] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.952 [2024-06-07 16:39:10.647765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.952 qpair failed and we were unable to recover it.
00:30:43.952 [2024-06-07 16:39:10.648130] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.952 [2024-06-07 16:39:10.648139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.952 qpair failed and we were unable to recover it.
00:30:43.952 [2024-06-07 16:39:10.648370] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.952 [2024-06-07 16:39:10.648378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.952 qpair failed and we were unable to recover it.
00:30:43.952 [2024-06-07 16:39:10.648777] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.952 [2024-06-07 16:39:10.648786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.952 qpair failed and we were unable to recover it.
00:30:43.952 [2024-06-07 16:39:10.649139] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.952 [2024-06-07 16:39:10.649147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.952 qpair failed and we were unable to recover it.
00:30:43.952 [2024-06-07 16:39:10.649540] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.952 [2024-06-07 16:39:10.649548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.952 qpair failed and we were unable to recover it.
00:30:43.952 [2024-06-07 16:39:10.649916] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.952 [2024-06-07 16:39:10.649924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.952 qpair failed and we were unable to recover it.
00:30:43.952 [2024-06-07 16:39:10.650308] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.952 [2024-06-07 16:39:10.650316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.952 qpair failed and we were unable to recover it.
00:30:43.952 [2024-06-07 16:39:10.650592] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.952 [2024-06-07 16:39:10.650600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.952 qpair failed and we were unable to recover it.
00:30:43.952 [2024-06-07 16:39:10.650964] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.952 [2024-06-07 16:39:10.650971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.952 qpair failed and we were unable to recover it.
00:30:43.952 [2024-06-07 16:39:10.651364] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.952 [2024-06-07 16:39:10.651371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.952 qpair failed and we were unable to recover it.
00:30:43.952 [2024-06-07 16:39:10.651768] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.952 [2024-06-07 16:39:10.651777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.952 qpair failed and we were unable to recover it.
00:30:43.952 [2024-06-07 16:39:10.652167] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.952 [2024-06-07 16:39:10.652175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.952 qpair failed and we were unable to recover it.
00:30:43.952 [2024-06-07 16:39:10.652573] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.952 [2024-06-07 16:39:10.652581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.952 qpair failed and we were unable to recover it.
00:30:43.952 [2024-06-07 16:39:10.652963] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.952 [2024-06-07 16:39:10.652971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.952 qpair failed and we were unable to recover it.
00:30:43.952 [2024-06-07 16:39:10.653362] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.952 [2024-06-07 16:39:10.653371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.953 qpair failed and we were unable to recover it.
00:30:43.953 [2024-06-07 16:39:10.653723] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.953 [2024-06-07 16:39:10.653732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.953 qpair failed and we were unable to recover it.
00:30:43.953 [2024-06-07 16:39:10.654098] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.953 [2024-06-07 16:39:10.654106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.953 qpair failed and we were unable to recover it.
00:30:43.953 [2024-06-07 16:39:10.654472] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.953 [2024-06-07 16:39:10.654480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.953 qpair failed and we were unable to recover it.
00:30:43.953 [2024-06-07 16:39:10.654850] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.953 [2024-06-07 16:39:10.654858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.953 qpair failed and we were unable to recover it.
00:30:43.953 [2024-06-07 16:39:10.655203] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.953 [2024-06-07 16:39:10.655212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.953 qpair failed and we were unable to recover it.
00:30:43.953 [2024-06-07 16:39:10.655564] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.953 [2024-06-07 16:39:10.655572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.953 qpair failed and we were unable to recover it.
00:30:43.953 [2024-06-07 16:39:10.655819] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.953 [2024-06-07 16:39:10.655827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.953 qpair failed and we were unable to recover it.
00:30:43.953 [2024-06-07 16:39:10.656168] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.953 [2024-06-07 16:39:10.656175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.953 qpair failed and we were unable to recover it.
00:30:43.953 [2024-06-07 16:39:10.656541] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.953 [2024-06-07 16:39:10.656550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.953 qpair failed and we were unable to recover it.
00:30:43.953 [2024-06-07 16:39:10.657047] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.953 [2024-06-07 16:39:10.657055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.953 qpair failed and we were unable to recover it.
00:30:43.953 [2024-06-07 16:39:10.657423] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.953 [2024-06-07 16:39:10.657432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.953 qpair failed and we were unable to recover it.
00:30:43.953 [2024-06-07 16:39:10.657795] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.953 [2024-06-07 16:39:10.657803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.953 qpair failed and we were unable to recover it.
00:30:43.953 [2024-06-07 16:39:10.658081] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.953 [2024-06-07 16:39:10.658088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.953 qpair failed and we were unable to recover it.
00:30:43.953 [2024-06-07 16:39:10.658513] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.953 [2024-06-07 16:39:10.658522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.953 qpair failed and we were unable to recover it.
00:30:43.953 [2024-06-07 16:39:10.658892] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.953 [2024-06-07 16:39:10.658899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.953 qpair failed and we were unable to recover it.
00:30:43.953 [2024-06-07 16:39:10.659299] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.953 [2024-06-07 16:39:10.659307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.953 qpair failed and we were unable to recover it.
00:30:43.953 [2024-06-07 16:39:10.659681] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.953 [2024-06-07 16:39:10.659689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.953 qpair failed and we were unable to recover it.
00:30:43.953 [2024-06-07 16:39:10.660083] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.953 [2024-06-07 16:39:10.660091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.953 qpair failed and we were unable to recover it.
00:30:43.953 [2024-06-07 16:39:10.660351] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.953 [2024-06-07 16:39:10.660359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.953 qpair failed and we were unable to recover it.
00:30:43.953 [2024-06-07 16:39:10.660735] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.953 [2024-06-07 16:39:10.660743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.953 qpair failed and we were unable to recover it.
00:30:43.953 [2024-06-07 16:39:10.661113] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.953 [2024-06-07 16:39:10.661121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.953 qpair failed and we were unable to recover it.
00:30:43.953 [2024-06-07 16:39:10.661492] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.953 [2024-06-07 16:39:10.661500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.953 qpair failed and we were unable to recover it.
00:30:43.953 [2024-06-07 16:39:10.661884] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.953 [2024-06-07 16:39:10.661892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.953 qpair failed and we were unable to recover it.
00:30:43.953 [2024-06-07 16:39:10.662296] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.953 [2024-06-07 16:39:10.662305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.953 qpair failed and we were unable to recover it.
00:30:43.953 [2024-06-07 16:39:10.662562] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.953 [2024-06-07 16:39:10.662571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.953 qpair failed and we were unable to recover it.
00:30:43.953 [2024-06-07 16:39:10.662948] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.953 [2024-06-07 16:39:10.662956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.953 qpair failed and we were unable to recover it.
00:30:43.953 [2024-06-07 16:39:10.663303] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.953 [2024-06-07 16:39:10.663311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.953 qpair failed and we were unable to recover it.
00:30:43.953 [2024-06-07 16:39:10.663692] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.953 [2024-06-07 16:39:10.663700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.953 qpair failed and we were unable to recover it.
00:30:43.954 [2024-06-07 16:39:10.663974] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.954 [2024-06-07 16:39:10.663982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.954 qpair failed and we were unable to recover it.
00:30:43.954 [2024-06-07 16:39:10.664359] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.954 [2024-06-07 16:39:10.664367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.954 qpair failed and we were unable to recover it.
00:30:43.954 [2024-06-07 16:39:10.664733] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.954 [2024-06-07 16:39:10.664741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.954 qpair failed and we were unable to recover it.
00:30:43.954 [2024-06-07 16:39:10.665015] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.954 [2024-06-07 16:39:10.665023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.954 qpair failed and we were unable to recover it.
00:30:43.954 [2024-06-07 16:39:10.665390] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.954 [2024-06-07 16:39:10.665399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.954 qpair failed and we were unable to recover it.
00:30:43.954 [2024-06-07 16:39:10.665532] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.954 [2024-06-07 16:39:10.665540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.954 qpair failed and we were unable to recover it.
00:30:43.954 [2024-06-07 16:39:10.665903] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.954 [2024-06-07 16:39:10.665912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.954 qpair failed and we were unable to recover it.
00:30:43.954 [2024-06-07 16:39:10.666278] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.954 [2024-06-07 16:39:10.666287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.954 qpair failed and we were unable to recover it.
00:30:43.954 [2024-06-07 16:39:10.666582] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.954 [2024-06-07 16:39:10.666591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.954 qpair failed and we were unable to recover it.
00:30:43.954 [2024-06-07 16:39:10.666975] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.954 [2024-06-07 16:39:10.666984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.954 qpair failed and we were unable to recover it.
00:30:43.954 [2024-06-07 16:39:10.667353] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.954 [2024-06-07 16:39:10.667361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.954 qpair failed and we were unable to recover it.
00:30:43.954 [2024-06-07 16:39:10.667716] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.954 [2024-06-07 16:39:10.667725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.954 qpair failed and we were unable to recover it.
00:30:43.954 [2024-06-07 16:39:10.668083] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.954 [2024-06-07 16:39:10.668092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.954 qpair failed and we were unable to recover it.
00:30:43.954 [2024-06-07 16:39:10.668452] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.954 [2024-06-07 16:39:10.668462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.954 qpair failed and we were unable to recover it.
00:30:43.954 [2024-06-07 16:39:10.668842] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.954 [2024-06-07 16:39:10.668850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.954 qpair failed and we were unable to recover it.
00:30:43.954 [2024-06-07 16:39:10.669213] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.954 [2024-06-07 16:39:10.669221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.954 qpair failed and we were unable to recover it.
00:30:43.954 [2024-06-07 16:39:10.669730] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.954 [2024-06-07 16:39:10.669738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.954 qpair failed and we were unable to recover it.
00:30:43.954 [2024-06-07 16:39:10.670093] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.954 [2024-06-07 16:39:10.670101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.954 qpair failed and we were unable to recover it.
00:30:43.954 [2024-06-07 16:39:10.670351] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.954 [2024-06-07 16:39:10.670359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.954 qpair failed and we were unable to recover it.
00:30:43.954 [2024-06-07 16:39:10.670584] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.954 [2024-06-07 16:39:10.670592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.954 qpair failed and we were unable to recover it.
00:30:43.954 [2024-06-07 16:39:10.670966] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.954 [2024-06-07 16:39:10.670973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:43.954 qpair failed and we were unable to recover it.
00:30:43.954 [2024-06-07 16:39:10.671350] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.954 [2024-06-07 16:39:10.671357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.954 qpair failed and we were unable to recover it. 00:30:43.954 [2024-06-07 16:39:10.671656] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.954 [2024-06-07 16:39:10.671665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.954 qpair failed and we were unable to recover it. 00:30:43.954 [2024-06-07 16:39:10.672074] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.954 [2024-06-07 16:39:10.672082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.954 qpair failed and we were unable to recover it. 00:30:43.954 [2024-06-07 16:39:10.672346] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.954 [2024-06-07 16:39:10.672354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.954 qpair failed and we were unable to recover it. 00:30:43.954 [2024-06-07 16:39:10.672767] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.954 [2024-06-07 16:39:10.672779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.954 qpair failed and we were unable to recover it. 
00:30:43.954 [2024-06-07 16:39:10.673153] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.954 [2024-06-07 16:39:10.673161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.954 qpair failed and we were unable to recover it. 00:30:43.954 [2024-06-07 16:39:10.673523] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.954 [2024-06-07 16:39:10.673532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.954 qpair failed and we were unable to recover it. 00:30:43.954 [2024-06-07 16:39:10.673912] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.954 [2024-06-07 16:39:10.673920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.954 qpair failed and we were unable to recover it. 00:30:43.954 [2024-06-07 16:39:10.674288] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.954 [2024-06-07 16:39:10.674296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.954 qpair failed and we were unable to recover it. 00:30:43.954 [2024-06-07 16:39:10.674730] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.954 [2024-06-07 16:39:10.674738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.954 qpair failed and we were unable to recover it. 
00:30:43.954 [2024-06-07 16:39:10.674975] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.954 [2024-06-07 16:39:10.674983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.954 qpair failed and we were unable to recover it. 00:30:43.954 [2024-06-07 16:39:10.675249] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.954 [2024-06-07 16:39:10.675258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.954 qpair failed and we were unable to recover it. 00:30:43.954 [2024-06-07 16:39:10.675628] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.954 [2024-06-07 16:39:10.675636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.954 qpair failed and we were unable to recover it. 00:30:43.954 [2024-06-07 16:39:10.675946] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.954 [2024-06-07 16:39:10.675954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.954 qpair failed and we were unable to recover it. 00:30:43.954 [2024-06-07 16:39:10.676225] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.955 [2024-06-07 16:39:10.676234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.955 qpair failed and we were unable to recover it. 
00:30:43.955 [2024-06-07 16:39:10.676503] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.955 [2024-06-07 16:39:10.676512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.955 qpair failed and we were unable to recover it. 00:30:43.955 [2024-06-07 16:39:10.676816] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.955 [2024-06-07 16:39:10.676824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.955 qpair failed and we were unable to recover it. 00:30:43.955 [2024-06-07 16:39:10.677200] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.955 [2024-06-07 16:39:10.677208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.955 qpair failed and we were unable to recover it. 00:30:43.955 [2024-06-07 16:39:10.677487] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.955 [2024-06-07 16:39:10.677495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.955 qpair failed and we were unable to recover it. 00:30:43.955 [2024-06-07 16:39:10.677878] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.955 [2024-06-07 16:39:10.677886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.955 qpair failed and we were unable to recover it. 
00:30:43.955 [2024-06-07 16:39:10.678255] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.955 [2024-06-07 16:39:10.678263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.955 qpair failed and we were unable to recover it. 00:30:43.955 [2024-06-07 16:39:10.678581] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.955 [2024-06-07 16:39:10.678590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.955 qpair failed and we were unable to recover it. 00:30:43.955 [2024-06-07 16:39:10.678797] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.955 [2024-06-07 16:39:10.678807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.955 qpair failed and we were unable to recover it. 00:30:43.955 [2024-06-07 16:39:10.679096] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.955 [2024-06-07 16:39:10.679104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.955 qpair failed and we were unable to recover it. 00:30:43.955 [2024-06-07 16:39:10.679490] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.955 [2024-06-07 16:39:10.679498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.955 qpair failed and we were unable to recover it. 
00:30:43.955 [2024-06-07 16:39:10.679717] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.955 [2024-06-07 16:39:10.679726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.955 qpair failed and we were unable to recover it. 00:30:43.955 [2024-06-07 16:39:10.680156] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.955 [2024-06-07 16:39:10.680164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.955 qpair failed and we were unable to recover it. 00:30:43.955 [2024-06-07 16:39:10.680534] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.955 [2024-06-07 16:39:10.680543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.955 qpair failed and we were unable to recover it. 00:30:43.955 [2024-06-07 16:39:10.680921] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.955 [2024-06-07 16:39:10.680928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.955 qpair failed and we were unable to recover it. 00:30:43.955 [2024-06-07 16:39:10.681296] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.955 [2024-06-07 16:39:10.681304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.955 qpair failed and we were unable to recover it. 
00:30:43.955 [2024-06-07 16:39:10.681686] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.955 [2024-06-07 16:39:10.681696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.955 qpair failed and we were unable to recover it. 00:30:43.955 [2024-06-07 16:39:10.682060] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.955 [2024-06-07 16:39:10.682068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.955 qpair failed and we were unable to recover it. 00:30:43.955 [2024-06-07 16:39:10.682448] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.955 [2024-06-07 16:39:10.682456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.955 qpair failed and we were unable to recover it. 00:30:43.955 [2024-06-07 16:39:10.682856] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.955 [2024-06-07 16:39:10.682863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.955 qpair failed and we were unable to recover it. 00:30:43.955 [2024-06-07 16:39:10.683328] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.955 [2024-06-07 16:39:10.683336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.955 qpair failed and we were unable to recover it. 
00:30:43.955 [2024-06-07 16:39:10.683696] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.955 [2024-06-07 16:39:10.683704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.955 qpair failed and we were unable to recover it. 00:30:43.955 [2024-06-07 16:39:10.684077] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.955 [2024-06-07 16:39:10.684086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.955 qpair failed and we were unable to recover it. 00:30:43.955 [2024-06-07 16:39:10.684458] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.955 [2024-06-07 16:39:10.684466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.955 qpair failed and we were unable to recover it. 00:30:43.955 [2024-06-07 16:39:10.684824] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.955 [2024-06-07 16:39:10.684832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.955 qpair failed and we were unable to recover it. 00:30:43.955 [2024-06-07 16:39:10.685024] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.955 [2024-06-07 16:39:10.685032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.955 qpair failed and we were unable to recover it. 
00:30:43.955 [2024-06-07 16:39:10.685405] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.955 [2024-06-07 16:39:10.685413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.955 qpair failed and we were unable to recover it. 00:30:43.955 [2024-06-07 16:39:10.685797] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.955 [2024-06-07 16:39:10.685805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.955 qpair failed and we were unable to recover it. 00:30:43.955 [2024-06-07 16:39:10.686197] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.955 [2024-06-07 16:39:10.686205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.955 qpair failed and we were unable to recover it. 00:30:43.955 [2024-06-07 16:39:10.686617] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.955 [2024-06-07 16:39:10.686625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.955 qpair failed and we were unable to recover it. 00:30:43.955 [2024-06-07 16:39:10.686984] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.955 [2024-06-07 16:39:10.686994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.955 qpair failed and we were unable to recover it. 
00:30:43.955 [2024-06-07 16:39:10.687366] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.955 [2024-06-07 16:39:10.687374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.955 qpair failed and we were unable to recover it. 00:30:43.955 [2024-06-07 16:39:10.687742] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.955 [2024-06-07 16:39:10.687751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.955 qpair failed and we were unable to recover it. 00:30:43.955 [2024-06-07 16:39:10.688018] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.955 [2024-06-07 16:39:10.688026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.955 qpair failed and we were unable to recover it. 00:30:43.955 [2024-06-07 16:39:10.688393] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.956 [2024-06-07 16:39:10.688404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.956 qpair failed and we were unable to recover it. 00:30:43.956 [2024-06-07 16:39:10.688722] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.956 [2024-06-07 16:39:10.688731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.956 qpair failed and we were unable to recover it. 
00:30:43.956 [2024-06-07 16:39:10.689108] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.956 [2024-06-07 16:39:10.689116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.956 qpair failed and we were unable to recover it. 00:30:43.956 [2024-06-07 16:39:10.689488] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.956 [2024-06-07 16:39:10.689496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.956 qpair failed and we were unable to recover it. 00:30:43.956 [2024-06-07 16:39:10.689877] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.956 [2024-06-07 16:39:10.689884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.956 qpair failed and we were unable to recover it. 00:30:43.956 [2024-06-07 16:39:10.690251] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.956 [2024-06-07 16:39:10.690259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.956 qpair failed and we were unable to recover it. 00:30:43.956 [2024-06-07 16:39:10.690644] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.956 [2024-06-07 16:39:10.690652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.956 qpair failed and we were unable to recover it. 
00:30:43.956 [2024-06-07 16:39:10.691033] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.956 [2024-06-07 16:39:10.691041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.956 qpair failed and we were unable to recover it. 00:30:43.956 [2024-06-07 16:39:10.691415] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.956 [2024-06-07 16:39:10.691423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.956 qpair failed and we were unable to recover it. 00:30:43.956 [2024-06-07 16:39:10.691820] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.956 [2024-06-07 16:39:10.691828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.956 qpair failed and we were unable to recover it. 00:30:43.956 [2024-06-07 16:39:10.692217] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.956 [2024-06-07 16:39:10.692225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.956 qpair failed and we were unable to recover it. 00:30:43.956 [2024-06-07 16:39:10.692459] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.956 [2024-06-07 16:39:10.692467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.956 qpair failed and we were unable to recover it. 
00:30:43.956 [2024-06-07 16:39:10.692859] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.956 [2024-06-07 16:39:10.692868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.956 qpair failed and we were unable to recover it. 00:30:43.956 [2024-06-07 16:39:10.693234] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.956 [2024-06-07 16:39:10.693242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.956 qpair failed and we were unable to recover it. 00:30:43.956 [2024-06-07 16:39:10.693607] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.956 [2024-06-07 16:39:10.693615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.956 qpair failed and we were unable to recover it. 00:30:43.956 [2024-06-07 16:39:10.693974] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.956 [2024-06-07 16:39:10.693982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.956 qpair failed and we were unable to recover it. 00:30:43.956 [2024-06-07 16:39:10.694349] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.956 [2024-06-07 16:39:10.694357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.956 qpair failed and we were unable to recover it. 
00:30:43.956 [2024-06-07 16:39:10.694744] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.956 [2024-06-07 16:39:10.694752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.956 qpair failed and we were unable to recover it. 00:30:43.956 [2024-06-07 16:39:10.695137] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.956 [2024-06-07 16:39:10.695145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.956 qpair failed and we were unable to recover it. 00:30:43.956 [2024-06-07 16:39:10.695454] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.956 [2024-06-07 16:39:10.695463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.956 qpair failed and we were unable to recover it. 00:30:43.956 [2024-06-07 16:39:10.695836] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.956 [2024-06-07 16:39:10.695843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.956 qpair failed and we were unable to recover it. 00:30:43.956 [2024-06-07 16:39:10.696206] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.956 [2024-06-07 16:39:10.696214] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.956 qpair failed and we were unable to recover it. 
00:30:43.956 [2024-06-07 16:39:10.696571] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.956 [2024-06-07 16:39:10.696579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.956 qpair failed and we were unable to recover it. 00:30:43.956 [2024-06-07 16:39:10.696931] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.956 [2024-06-07 16:39:10.696940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.956 qpair failed and we were unable to recover it. 00:30:43.956 [2024-06-07 16:39:10.697306] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.956 [2024-06-07 16:39:10.697314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.956 qpair failed and we were unable to recover it. 00:30:43.956 [2024-06-07 16:39:10.697673] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.956 [2024-06-07 16:39:10.697681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.956 qpair failed and we were unable to recover it. 00:30:43.956 [2024-06-07 16:39:10.698062] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.956 [2024-06-07 16:39:10.698070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.956 qpair failed and we were unable to recover it. 
00:30:43.956 [2024-06-07 16:39:10.698426] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.956 [2024-06-07 16:39:10.698434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.956 qpair failed and we were unable to recover it. 00:30:43.956 [2024-06-07 16:39:10.698792] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.956 [2024-06-07 16:39:10.698801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.956 qpair failed and we were unable to recover it. 00:30:43.956 [2024-06-07 16:39:10.699144] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.956 [2024-06-07 16:39:10.699152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.956 qpair failed and we were unable to recover it. 00:30:43.956 [2024-06-07 16:39:10.699543] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.956 [2024-06-07 16:39:10.699551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.956 qpair failed and we were unable to recover it. 00:30:43.956 [2024-06-07 16:39:10.699987] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.956 [2024-06-07 16:39:10.699995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.956 qpair failed and we were unable to recover it. 
00:30:43.960 [2024-06-07 16:39:10.740157] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.960 [2024-06-07 16:39:10.740165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.960 qpair failed and we were unable to recover it. 00:30:43.960 [2024-06-07 16:39:10.740516] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.960 [2024-06-07 16:39:10.740526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.960 qpair failed and we were unable to recover it. 00:30:43.960 [2024-06-07 16:39:10.740886] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.960 [2024-06-07 16:39:10.740894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.960 qpair failed and we were unable to recover it. 00:30:43.960 [2024-06-07 16:39:10.741261] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.960 [2024-06-07 16:39:10.741268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.960 qpair failed and we were unable to recover it. 00:30:43.960 [2024-06-07 16:39:10.741536] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.960 [2024-06-07 16:39:10.741543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.960 qpair failed and we were unable to recover it. 
00:30:43.960 [2024-06-07 16:39:10.741925] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.960 [2024-06-07 16:39:10.741933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.960 qpair failed and we were unable to recover it. 00:30:43.960 [2024-06-07 16:39:10.742293] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.960 [2024-06-07 16:39:10.742301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.960 qpair failed and we were unable to recover it. 00:30:43.960 [2024-06-07 16:39:10.742567] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.960 [2024-06-07 16:39:10.742575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.960 qpair failed and we were unable to recover it. 00:30:43.960 [2024-06-07 16:39:10.742835] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.960 [2024-06-07 16:39:10.742843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.960 qpair failed and we were unable to recover it. 00:30:43.960 [2024-06-07 16:39:10.743225] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.960 [2024-06-07 16:39:10.743233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.960 qpair failed and we were unable to recover it. 
00:30:43.960 [2024-06-07 16:39:10.743598] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.960 [2024-06-07 16:39:10.743606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.960 qpair failed and we were unable to recover it. 00:30:43.960 [2024-06-07 16:39:10.743971] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.960 [2024-06-07 16:39:10.743979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.960 qpair failed and we were unable to recover it. 00:30:43.960 [2024-06-07 16:39:10.744342] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.960 [2024-06-07 16:39:10.744351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.960 qpair failed and we were unable to recover it. 00:30:43.960 [2024-06-07 16:39:10.744699] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.960 [2024-06-07 16:39:10.744707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.960 qpair failed and we were unable to recover it. 00:30:43.960 [2024-06-07 16:39:10.745069] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.960 [2024-06-07 16:39:10.745076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.960 qpair failed and we were unable to recover it. 
00:30:43.960 [2024-06-07 16:39:10.745339] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.960 [2024-06-07 16:39:10.745347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.960 qpair failed and we were unable to recover it. 00:30:43.960 [2024-06-07 16:39:10.745719] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.960 [2024-06-07 16:39:10.745727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.960 qpair failed and we were unable to recover it. 00:30:43.960 [2024-06-07 16:39:10.746105] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.960 [2024-06-07 16:39:10.746113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.960 qpair failed and we were unable to recover it. 00:30:43.960 [2024-06-07 16:39:10.746480] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.960 [2024-06-07 16:39:10.746488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.960 qpair failed and we were unable to recover it. 00:30:43.960 [2024-06-07 16:39:10.746856] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.960 [2024-06-07 16:39:10.746864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.960 qpair failed and we were unable to recover it. 
00:30:43.960 [2024-06-07 16:39:10.747287] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.960 [2024-06-07 16:39:10.747295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.960 qpair failed and we were unable to recover it. 00:30:43.960 [2024-06-07 16:39:10.747686] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.960 [2024-06-07 16:39:10.747694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.960 qpair failed and we were unable to recover it. 00:30:43.960 [2024-06-07 16:39:10.748055] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.960 [2024-06-07 16:39:10.748063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.960 qpair failed and we were unable to recover it. 00:30:43.960 [2024-06-07 16:39:10.748430] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.960 [2024-06-07 16:39:10.748439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.960 qpair failed and we were unable to recover it. 00:30:43.960 [2024-06-07 16:39:10.748738] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.960 [2024-06-07 16:39:10.748746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.960 qpair failed and we were unable to recover it. 
00:30:43.960 [2024-06-07 16:39:10.749121] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.960 [2024-06-07 16:39:10.749129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.960 qpair failed and we were unable to recover it. 00:30:43.960 [2024-06-07 16:39:10.749574] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.960 [2024-06-07 16:39:10.749582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.960 qpair failed and we were unable to recover it. 00:30:43.960 [2024-06-07 16:39:10.749943] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.960 [2024-06-07 16:39:10.749950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.960 qpair failed and we were unable to recover it. 00:30:43.960 [2024-06-07 16:39:10.750302] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.960 [2024-06-07 16:39:10.750310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.960 qpair failed and we were unable to recover it. 00:30:43.960 [2024-06-07 16:39:10.750676] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.960 [2024-06-07 16:39:10.750685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.960 qpair failed and we were unable to recover it. 
00:30:43.960 [2024-06-07 16:39:10.751003] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.960 [2024-06-07 16:39:10.751011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.960 qpair failed and we were unable to recover it. 00:30:43.960 [2024-06-07 16:39:10.751420] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.960 [2024-06-07 16:39:10.751427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.961 qpair failed and we were unable to recover it. 00:30:43.961 [2024-06-07 16:39:10.751756] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.961 [2024-06-07 16:39:10.751764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.961 qpair failed and we were unable to recover it. 00:30:43.961 [2024-06-07 16:39:10.751956] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.961 [2024-06-07 16:39:10.751964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.961 qpair failed and we were unable to recover it. 00:30:43.961 [2024-06-07 16:39:10.752306] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.961 [2024-06-07 16:39:10.752314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.961 qpair failed and we were unable to recover it. 
00:30:43.961 [2024-06-07 16:39:10.752654] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.961 [2024-06-07 16:39:10.752663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.961 qpair failed and we were unable to recover it. 00:30:43.961 [2024-06-07 16:39:10.753028] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.961 [2024-06-07 16:39:10.753036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.961 qpair failed and we were unable to recover it. 00:30:43.961 [2024-06-07 16:39:10.753426] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.961 [2024-06-07 16:39:10.753434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.961 qpair failed and we were unable to recover it. 00:30:43.961 [2024-06-07 16:39:10.753793] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.961 [2024-06-07 16:39:10.753801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.961 qpair failed and we were unable to recover it. 00:30:43.961 [2024-06-07 16:39:10.754169] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.961 [2024-06-07 16:39:10.754177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.961 qpair failed and we were unable to recover it. 
00:30:43.961 [2024-06-07 16:39:10.754529] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.961 [2024-06-07 16:39:10.754537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.961 qpair failed and we were unable to recover it. 00:30:43.961 [2024-06-07 16:39:10.754821] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.961 [2024-06-07 16:39:10.754830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.961 qpair failed and we were unable to recover it. 00:30:43.961 [2024-06-07 16:39:10.755192] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.961 [2024-06-07 16:39:10.755199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.961 qpair failed and we were unable to recover it. 00:30:43.961 [2024-06-07 16:39:10.755574] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.961 [2024-06-07 16:39:10.755582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.961 qpair failed and we were unable to recover it. 00:30:43.961 [2024-06-07 16:39:10.755817] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.961 [2024-06-07 16:39:10.755825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.961 qpair failed and we were unable to recover it. 
00:30:43.961 [2024-06-07 16:39:10.756083] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.961 [2024-06-07 16:39:10.756090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.961 qpair failed and we were unable to recover it. 00:30:43.961 [2024-06-07 16:39:10.756459] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.961 [2024-06-07 16:39:10.756467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.961 qpair failed and we were unable to recover it. 00:30:43.961 [2024-06-07 16:39:10.756802] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.961 [2024-06-07 16:39:10.756811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.961 qpair failed and we were unable to recover it. 00:30:43.961 [2024-06-07 16:39:10.757179] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.961 [2024-06-07 16:39:10.757186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.961 qpair failed and we were unable to recover it. 00:30:43.961 [2024-06-07 16:39:10.757527] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.961 [2024-06-07 16:39:10.757535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.961 qpair failed and we were unable to recover it. 
00:30:43.961 [2024-06-07 16:39:10.757906] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.961 [2024-06-07 16:39:10.757914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.961 qpair failed and we were unable to recover it. 00:30:43.961 [2024-06-07 16:39:10.758278] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.961 [2024-06-07 16:39:10.758286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.961 qpair failed and we were unable to recover it. 00:30:43.961 [2024-06-07 16:39:10.758642] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.961 [2024-06-07 16:39:10.758650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.961 qpair failed and we were unable to recover it. 00:30:43.961 [2024-06-07 16:39:10.758996] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.961 [2024-06-07 16:39:10.759004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.961 qpair failed and we were unable to recover it. 00:30:43.961 [2024-06-07 16:39:10.759368] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.961 [2024-06-07 16:39:10.759376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.961 qpair failed and we were unable to recover it. 
00:30:43.961 [2024-06-07 16:39:10.759748] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.961 [2024-06-07 16:39:10.759757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.961 qpair failed and we were unable to recover it. 00:30:43.961 [2024-06-07 16:39:10.760124] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.961 [2024-06-07 16:39:10.760132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.961 qpair failed and we were unable to recover it. 00:30:43.961 [2024-06-07 16:39:10.760493] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.961 [2024-06-07 16:39:10.760501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.961 qpair failed and we were unable to recover it. 00:30:43.961 [2024-06-07 16:39:10.760877] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.961 [2024-06-07 16:39:10.760886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.961 qpair failed and we were unable to recover it. 00:30:43.961 [2024-06-07 16:39:10.761252] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.961 [2024-06-07 16:39:10.761260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.961 qpair failed and we were unable to recover it. 
00:30:43.961 [2024-06-07 16:39:10.761628] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.961 [2024-06-07 16:39:10.761635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.961 qpair failed and we were unable to recover it. 00:30:43.961 [2024-06-07 16:39:10.761993] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.961 [2024-06-07 16:39:10.762001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.961 qpair failed and we were unable to recover it. 00:30:43.961 [2024-06-07 16:39:10.762229] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.961 [2024-06-07 16:39:10.762237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.961 qpair failed and we were unable to recover it. 00:30:43.961 [2024-06-07 16:39:10.762658] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.961 [2024-06-07 16:39:10.762666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.961 qpair failed and we were unable to recover it. 00:30:43.961 [2024-06-07 16:39:10.763028] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.961 [2024-06-07 16:39:10.763036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.961 qpair failed and we were unable to recover it. 
00:30:43.961 [2024-06-07 16:39:10.763423] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.961 [2024-06-07 16:39:10.763432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.961 qpair failed and we were unable to recover it. 00:30:43.961 [2024-06-07 16:39:10.763600] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.961 [2024-06-07 16:39:10.763608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.961 qpair failed and we were unable to recover it. 00:30:43.961 [2024-06-07 16:39:10.763873] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.962 [2024-06-07 16:39:10.763880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.962 qpair failed and we were unable to recover it. 00:30:43.962 [2024-06-07 16:39:10.764267] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.962 [2024-06-07 16:39:10.764275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.962 qpair failed and we were unable to recover it. 00:30:43.962 [2024-06-07 16:39:10.764639] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.962 [2024-06-07 16:39:10.764647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.962 qpair failed and we were unable to recover it. 
00:30:43.962 [2024-06-07 16:39:10.765011] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.962 [2024-06-07 16:39:10.765019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.962 qpair failed and we were unable to recover it. 00:30:43.962 [2024-06-07 16:39:10.765419] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.962 [2024-06-07 16:39:10.765427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.962 qpair failed and we were unable to recover it. 00:30:43.962 [2024-06-07 16:39:10.765609] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.962 [2024-06-07 16:39:10.765617] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.962 qpair failed and we were unable to recover it. 00:30:43.962 [2024-06-07 16:39:10.765780] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.962 [2024-06-07 16:39:10.765787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.962 qpair failed and we were unable to recover it. 00:30:43.962 [2024-06-07 16:39:10.766139] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.962 [2024-06-07 16:39:10.766148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.962 qpair failed and we were unable to recover it. 
00:30:43.962 [2024-06-07 16:39:10.766554] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.962 [2024-06-07 16:39:10.766562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.962 qpair failed and we were unable to recover it. 00:30:43.962 [2024-06-07 16:39:10.766931] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.962 [2024-06-07 16:39:10.766939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.962 qpair failed and we were unable to recover it. 00:30:43.962 [2024-06-07 16:39:10.767329] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.962 [2024-06-07 16:39:10.767337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.962 qpair failed and we were unable to recover it. 00:30:43.962 [2024-06-07 16:39:10.767717] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.962 [2024-06-07 16:39:10.767725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.962 qpair failed and we were unable to recover it. 00:30:43.962 [2024-06-07 16:39:10.768093] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.962 [2024-06-07 16:39:10.768101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.962 qpair failed and we were unable to recover it. 
00:30:43.962 [2024-06-07 16:39:10.768472] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.962 [2024-06-07 16:39:10.768481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.962 qpair failed and we were unable to recover it. 00:30:43.962 [2024-06-07 16:39:10.768841] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.962 [2024-06-07 16:39:10.768852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.962 qpair failed and we were unable to recover it. 00:30:43.962 [2024-06-07 16:39:10.769209] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.962 [2024-06-07 16:39:10.769217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.962 qpair failed and we were unable to recover it. 00:30:43.962 [2024-06-07 16:39:10.769585] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.962 [2024-06-07 16:39:10.769593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.962 qpair failed and we were unable to recover it. 00:30:43.962 [2024-06-07 16:39:10.769958] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.962 [2024-06-07 16:39:10.769965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.962 qpair failed and we were unable to recover it. 
00:30:43.962 [2024-06-07 16:39:10.770360] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.962 [2024-06-07 16:39:10.770368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.962 qpair failed and we were unable to recover it. 00:30:43.962 [2024-06-07 16:39:10.770711] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.962 [2024-06-07 16:39:10.770718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.962 qpair failed and we were unable to recover it. 00:30:43.962 [2024-06-07 16:39:10.771087] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.962 [2024-06-07 16:39:10.771095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.962 qpair failed and we were unable to recover it. 00:30:43.962 [2024-06-07 16:39:10.771411] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.962 [2024-06-07 16:39:10.771419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.962 qpair failed and we were unable to recover it. 00:30:43.962 [2024-06-07 16:39:10.771697] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.962 [2024-06-07 16:39:10.771705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.962 qpair failed and we were unable to recover it. 
00:30:43.962 [2024-06-07 16:39:10.772060] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.962 [2024-06-07 16:39:10.772068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.962 qpair failed and we were unable to recover it. 00:30:43.962 [2024-06-07 16:39:10.772445] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.962 [2024-06-07 16:39:10.772454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.962 qpair failed and we were unable to recover it. 00:30:43.962 [2024-06-07 16:39:10.772788] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.962 [2024-06-07 16:39:10.772797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.962 qpair failed and we were unable to recover it. 00:30:43.962 [2024-06-07 16:39:10.773173] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.962 [2024-06-07 16:39:10.773181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.962 qpair failed and we were unable to recover it. 00:30:43.962 [2024-06-07 16:39:10.773578] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.962 [2024-06-07 16:39:10.773585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.962 qpair failed and we were unable to recover it. 
00:30:43.962 [2024-06-07 16:39:10.773965] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.962 [2024-06-07 16:39:10.773973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:43.962 qpair failed and we were unable to recover it. 00:30:44.234 [2024-06-07 16:39:10.774347] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.234 [2024-06-07 16:39:10.774356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.234 qpair failed and we were unable to recover it. 00:30:44.234 [2024-06-07 16:39:10.774807] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.234 [2024-06-07 16:39:10.774815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.234 qpair failed and we were unable to recover it. 00:30:44.234 [2024-06-07 16:39:10.775190] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.234 [2024-06-07 16:39:10.775199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.234 qpair failed and we were unable to recover it. 00:30:44.234 [2024-06-07 16:39:10.775583] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.235 [2024-06-07 16:39:10.775592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.235 qpair failed and we were unable to recover it. 
00:30:44.235 [2024-06-07 16:39:10.775958] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.235 [2024-06-07 16:39:10.775966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.235 qpair failed and we were unable to recover it. 00:30:44.235 [2024-06-07 16:39:10.776319] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.235 [2024-06-07 16:39:10.776326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.235 qpair failed and we were unable to recover it. 00:30:44.235 [2024-06-07 16:39:10.776683] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.235 [2024-06-07 16:39:10.776692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.235 qpair failed and we were unable to recover it. 00:30:44.235 [2024-06-07 16:39:10.776965] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.235 [2024-06-07 16:39:10.776973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.235 qpair failed and we were unable to recover it. 00:30:44.235 [2024-06-07 16:39:10.777341] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.235 [2024-06-07 16:39:10.777350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.235 qpair failed and we were unable to recover it. 
00:30:44.235 [2024-06-07 16:39:10.777683] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.235 [2024-06-07 16:39:10.777692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.235 qpair failed and we were unable to recover it. 00:30:44.235 [2024-06-07 16:39:10.778068] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.235 [2024-06-07 16:39:10.778076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.235 qpair failed and we were unable to recover it. 00:30:44.235 [2024-06-07 16:39:10.778457] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.235 [2024-06-07 16:39:10.778465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.235 qpair failed and we were unable to recover it. 00:30:44.235 [2024-06-07 16:39:10.778863] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.235 [2024-06-07 16:39:10.778871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.235 qpair failed and we were unable to recover it. 00:30:44.235 [2024-06-07 16:39:10.779267] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.235 [2024-06-07 16:39:10.779275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.235 qpair failed and we were unable to recover it. 
00:30:44.235 [2024-06-07 16:39:10.779642] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.235 [2024-06-07 16:39:10.779650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.235 qpair failed and we were unable to recover it. 00:30:44.235 [2024-06-07 16:39:10.779996] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.235 [2024-06-07 16:39:10.780004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.235 qpair failed and we were unable to recover it. 00:30:44.235 [2024-06-07 16:39:10.780369] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.235 [2024-06-07 16:39:10.780378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.235 qpair failed and we were unable to recover it. 00:30:44.235 [2024-06-07 16:39:10.780846] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.235 [2024-06-07 16:39:10.780854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.235 qpair failed and we were unable to recover it. 00:30:44.235 [2024-06-07 16:39:10.781216] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.235 [2024-06-07 16:39:10.781223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.235 qpair failed and we were unable to recover it. 
00:30:44.235 [2024-06-07 16:39:10.781393] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.235 [2024-06-07 16:39:10.781411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.235 qpair failed and we were unable to recover it. 00:30:44.235 [2024-06-07 16:39:10.781734] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.235 [2024-06-07 16:39:10.781742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.235 qpair failed and we were unable to recover it. 00:30:44.235 [2024-06-07 16:39:10.782124] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.235 [2024-06-07 16:39:10.782133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.235 qpair failed and we were unable to recover it. 00:30:44.235 [2024-06-07 16:39:10.782625] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.235 [2024-06-07 16:39:10.782654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.235 qpair failed and we were unable to recover it. 00:30:44.235 [2024-06-07 16:39:10.782995] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.235 [2024-06-07 16:39:10.783004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.235 qpair failed and we were unable to recover it. 
00:30:44.235 [2024-06-07 16:39:10.783372] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.235 [2024-06-07 16:39:10.783380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.235 qpair failed and we were unable to recover it. 00:30:44.235 [2024-06-07 16:39:10.783717] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.235 [2024-06-07 16:39:10.783729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.235 qpair failed and we were unable to recover it. 00:30:44.235 [2024-06-07 16:39:10.784091] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.235 [2024-06-07 16:39:10.784099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.235 qpair failed and we were unable to recover it. 00:30:44.235 [2024-06-07 16:39:10.784464] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.235 [2024-06-07 16:39:10.784472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.235 qpair failed and we were unable to recover it. 00:30:44.235 [2024-06-07 16:39:10.784841] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.235 [2024-06-07 16:39:10.784848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.235 qpair failed and we were unable to recover it. 
00:30:44.235 [2024-06-07 16:39:10.785264] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.235 [2024-06-07 16:39:10.785272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.235 qpair failed and we were unable to recover it. 00:30:44.235 [2024-06-07 16:39:10.785651] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.235 [2024-06-07 16:39:10.785659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.235 qpair failed and we were unable to recover it. 00:30:44.235 [2024-06-07 16:39:10.786023] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.235 [2024-06-07 16:39:10.786031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.235 qpair failed and we were unable to recover it. 00:30:44.235 [2024-06-07 16:39:10.786400] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.235 [2024-06-07 16:39:10.786415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.235 qpair failed and we were unable to recover it. 00:30:44.235 [2024-06-07 16:39:10.786782] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.235 [2024-06-07 16:39:10.786790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.235 qpair failed and we were unable to recover it. 
00:30:44.235 [2024-06-07 16:39:10.787152] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.235 [2024-06-07 16:39:10.787160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.235 qpair failed and we were unable to recover it. 00:30:44.235 [2024-06-07 16:39:10.787647] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.235 [2024-06-07 16:39:10.787676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.235 qpair failed and we were unable to recover it. 00:30:44.235 [2024-06-07 16:39:10.787908] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.235 [2024-06-07 16:39:10.787919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.235 qpair failed and we were unable to recover it. 00:30:44.235 [2024-06-07 16:39:10.788159] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.235 [2024-06-07 16:39:10.788167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.235 qpair failed and we were unable to recover it. 00:30:44.235 [2024-06-07 16:39:10.788540] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.235 [2024-06-07 16:39:10.788548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.235 qpair failed and we were unable to recover it. 
00:30:44.235 [2024-06-07 16:39:10.788631] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.235 [2024-06-07 16:39:10.788639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.235 qpair failed and we were unable to recover it. 00:30:44.235 [2024-06-07 16:39:10.788981] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.236 [2024-06-07 16:39:10.788989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.236 qpair failed and we were unable to recover it. 00:30:44.236 [2024-06-07 16:39:10.789356] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.236 [2024-06-07 16:39:10.789364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.236 qpair failed and we were unable to recover it. 00:30:44.236 [2024-06-07 16:39:10.789754] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.236 [2024-06-07 16:39:10.789762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.236 qpair failed and we were unable to recover it. 00:30:44.236 [2024-06-07 16:39:10.790120] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.236 [2024-06-07 16:39:10.790128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.236 qpair failed and we were unable to recover it. 
00:30:44.236 [2024-06-07 16:39:10.790513] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.236 [2024-06-07 16:39:10.790520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.236 qpair failed and we were unable to recover it. 00:30:44.236 [2024-06-07 16:39:10.790886] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.236 [2024-06-07 16:39:10.790893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.236 qpair failed and we were unable to recover it. 00:30:44.236 [2024-06-07 16:39:10.791201] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.236 [2024-06-07 16:39:10.791218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.236 qpair failed and we were unable to recover it. 00:30:44.236 [2024-06-07 16:39:10.791586] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.236 [2024-06-07 16:39:10.791594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.236 qpair failed and we were unable to recover it. 00:30:44.236 [2024-06-07 16:39:10.791952] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.236 [2024-06-07 16:39:10.791960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.236 qpair failed and we were unable to recover it. 
00:30:44.236 [2024-06-07 16:39:10.792316] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.236 [2024-06-07 16:39:10.792324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.236 qpair failed and we were unable to recover it. 00:30:44.236 [2024-06-07 16:39:10.792677] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.236 [2024-06-07 16:39:10.792685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.236 qpair failed and we were unable to recover it. 00:30:44.236 [2024-06-07 16:39:10.792940] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.236 [2024-06-07 16:39:10.792948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.236 qpair failed and we were unable to recover it. 00:30:44.236 [2024-06-07 16:39:10.793313] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.236 [2024-06-07 16:39:10.793321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.236 qpair failed and we were unable to recover it. 00:30:44.236 [2024-06-07 16:39:10.793689] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.236 [2024-06-07 16:39:10.793698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.236 qpair failed and we were unable to recover it. 
00:30:44.236 [2024-06-07 16:39:10.794097] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.236 [2024-06-07 16:39:10.794106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.236 qpair failed and we were unable to recover it. 00:30:44.236 [2024-06-07 16:39:10.794482] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.236 [2024-06-07 16:39:10.794491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.236 qpair failed and we were unable to recover it. 00:30:44.236 [2024-06-07 16:39:10.794869] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.236 [2024-06-07 16:39:10.794878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.236 qpair failed and we were unable to recover it. 00:30:44.236 [2024-06-07 16:39:10.795244] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.236 [2024-06-07 16:39:10.795253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.236 qpair failed and we were unable to recover it. 00:30:44.236 [2024-06-07 16:39:10.795615] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.236 [2024-06-07 16:39:10.795623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.236 qpair failed and we were unable to recover it. 
00:30:44.236 [2024-06-07 16:39:10.795988] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.236 [2024-06-07 16:39:10.795997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.236 qpair failed and we were unable to recover it. 00:30:44.236 [2024-06-07 16:39:10.796355] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.236 [2024-06-07 16:39:10.796363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.236 qpair failed and we were unable to recover it. 00:30:44.236 [2024-06-07 16:39:10.796716] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.236 [2024-06-07 16:39:10.796725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.236 qpair failed and we were unable to recover it. 00:30:44.236 [2024-06-07 16:39:10.797124] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.236 [2024-06-07 16:39:10.797132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.236 qpair failed and we were unable to recover it. 00:30:44.236 [2024-06-07 16:39:10.797502] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.236 [2024-06-07 16:39:10.797510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.236 qpair failed and we were unable to recover it. 
00:30:44.236 [2024-06-07 16:39:10.797683] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.236 [2024-06-07 16:39:10.797691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.236 qpair failed and we were unable to recover it. 00:30:44.236 [2024-06-07 16:39:10.798071] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.236 [2024-06-07 16:39:10.798081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.236 qpair failed and we were unable to recover it. 00:30:44.236 [2024-06-07 16:39:10.798329] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.236 [2024-06-07 16:39:10.798336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.236 qpair failed and we were unable to recover it. 00:30:44.236 [2024-06-07 16:39:10.798690] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.236 [2024-06-07 16:39:10.798698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.236 qpair failed and we were unable to recover it. 00:30:44.236 [2024-06-07 16:39:10.799126] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.236 [2024-06-07 16:39:10.799134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.236 qpair failed and we were unable to recover it. 
00:30:44.236 [2024-06-07 16:39:10.799500] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.236 [2024-06-07 16:39:10.799508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.236 qpair failed and we were unable to recover it. 00:30:44.236 [2024-06-07 16:39:10.799899] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.236 [2024-06-07 16:39:10.799907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.236 qpair failed and we were unable to recover it. 00:30:44.236 [2024-06-07 16:39:10.800272] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.236 [2024-06-07 16:39:10.800280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.236 qpair failed and we were unable to recover it. 00:30:44.236 [2024-06-07 16:39:10.800642] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.236 [2024-06-07 16:39:10.800650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.236 qpair failed and we were unable to recover it. 00:30:44.236 [2024-06-07 16:39:10.801014] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.236 [2024-06-07 16:39:10.801022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.236 qpair failed and we were unable to recover it. 
00:30:44.236 [2024-06-07 16:39:10.801414] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.236 [2024-06-07 16:39:10.801422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.236 qpair failed and we were unable to recover it.
[... the same three messages repeat with successive timestamps (2024-06-07 16:39:10.801759 through 16:39:10.842625; log time 00:30:44.236-00:30:44.240): every connect() attempt to 10.0.0.2 port 4420 fails with errno = 111 and the qpair cannot be recovered ...]
00:30:44.240 [2024-06-07 16:39:10.843010] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.240 [2024-06-07 16:39:10.843019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.240 qpair failed and we were unable to recover it. 00:30:44.240 [2024-06-07 16:39:10.843291] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.240 [2024-06-07 16:39:10.843300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.240 qpair failed and we were unable to recover it. 00:30:44.240 [2024-06-07 16:39:10.843676] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.240 [2024-06-07 16:39:10.843685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.240 qpair failed and we were unable to recover it. 00:30:44.240 [2024-06-07 16:39:10.844050] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.240 [2024-06-07 16:39:10.844058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.240 qpair failed and we were unable to recover it. 00:30:44.240 [2024-06-07 16:39:10.844443] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.240 [2024-06-07 16:39:10.844451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.240 qpair failed and we were unable to recover it. 
00:30:44.240 [2024-06-07 16:39:10.844819] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.240 [2024-06-07 16:39:10.844828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.240 qpair failed and we were unable to recover it. 00:30:44.240 [2024-06-07 16:39:10.845265] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.240 [2024-06-07 16:39:10.845273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.240 qpair failed and we were unable to recover it. 00:30:44.240 [2024-06-07 16:39:10.845553] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.240 [2024-06-07 16:39:10.845561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.240 qpair failed and we were unable to recover it. 00:30:44.240 [2024-06-07 16:39:10.845941] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.240 [2024-06-07 16:39:10.845949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.240 qpair failed and we were unable to recover it. 00:30:44.240 [2024-06-07 16:39:10.846145] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.240 [2024-06-07 16:39:10.846153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.240 qpair failed and we were unable to recover it. 
00:30:44.240 [2024-06-07 16:39:10.846487] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.240 [2024-06-07 16:39:10.846495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.240 qpair failed and we were unable to recover it. 00:30:44.240 [2024-06-07 16:39:10.846881] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.240 [2024-06-07 16:39:10.846889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.240 qpair failed and we were unable to recover it. 00:30:44.240 [2024-06-07 16:39:10.847239] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.240 [2024-06-07 16:39:10.847247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.240 qpair failed and we were unable to recover it. 00:30:44.240 [2024-06-07 16:39:10.847626] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.240 [2024-06-07 16:39:10.847634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.240 qpair failed and we were unable to recover it. 00:30:44.240 [2024-06-07 16:39:10.847912] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.240 [2024-06-07 16:39:10.847920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.240 qpair failed and we were unable to recover it. 
00:30:44.240 [2024-06-07 16:39:10.848284] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.240 [2024-06-07 16:39:10.848291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.240 qpair failed and we were unable to recover it. 00:30:44.240 [2024-06-07 16:39:10.848661] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.240 [2024-06-07 16:39:10.848670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.240 qpair failed and we were unable to recover it. 00:30:44.240 [2024-06-07 16:39:10.848865] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.240 [2024-06-07 16:39:10.848874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.240 qpair failed and we were unable to recover it. 00:30:44.240 [2024-06-07 16:39:10.849205] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.240 [2024-06-07 16:39:10.849214] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.240 qpair failed and we were unable to recover it. 00:30:44.240 [2024-06-07 16:39:10.849581] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.240 [2024-06-07 16:39:10.849589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.240 qpair failed and we were unable to recover it. 
00:30:44.240 [2024-06-07 16:39:10.849972] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.240 [2024-06-07 16:39:10.849980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.240 qpair failed and we were unable to recover it. 00:30:44.240 [2024-06-07 16:39:10.850332] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.240 [2024-06-07 16:39:10.850339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.240 qpair failed and we were unable to recover it. 00:30:44.240 [2024-06-07 16:39:10.850780] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.240 [2024-06-07 16:39:10.850788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.240 qpair failed and we were unable to recover it. 00:30:44.240 [2024-06-07 16:39:10.851142] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.240 [2024-06-07 16:39:10.851151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.240 qpair failed and we were unable to recover it. 00:30:44.240 [2024-06-07 16:39:10.851534] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.240 [2024-06-07 16:39:10.851542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.240 qpair failed and we were unable to recover it. 
00:30:44.240 [2024-06-07 16:39:10.851914] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.240 [2024-06-07 16:39:10.851923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.240 qpair failed and we were unable to recover it. 00:30:44.240 [2024-06-07 16:39:10.852287] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.240 [2024-06-07 16:39:10.852295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.240 qpair failed and we were unable to recover it. 00:30:44.240 [2024-06-07 16:39:10.852681] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.240 [2024-06-07 16:39:10.852689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.240 qpair failed and we were unable to recover it. 00:30:44.240 [2024-06-07 16:39:10.853081] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.240 [2024-06-07 16:39:10.853089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.240 qpair failed and we were unable to recover it. 00:30:44.240 [2024-06-07 16:39:10.853450] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.240 [2024-06-07 16:39:10.853458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.240 qpair failed and we were unable to recover it. 
00:30:44.240 [2024-06-07 16:39:10.853797] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.240 [2024-06-07 16:39:10.853804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.240 qpair failed and we were unable to recover it. 00:30:44.240 [2024-06-07 16:39:10.854168] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.241 [2024-06-07 16:39:10.854175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.241 qpair failed and we were unable to recover it. 00:30:44.241 [2024-06-07 16:39:10.854560] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.241 [2024-06-07 16:39:10.854569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.241 qpair failed and we were unable to recover it. 00:30:44.241 [2024-06-07 16:39:10.854939] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.241 [2024-06-07 16:39:10.854947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.241 qpair failed and we were unable to recover it. 00:30:44.241 [2024-06-07 16:39:10.855312] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.241 [2024-06-07 16:39:10.855320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.241 qpair failed and we were unable to recover it. 
00:30:44.241 [2024-06-07 16:39:10.855463] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.241 [2024-06-07 16:39:10.855470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.241 qpair failed and we were unable to recover it. 00:30:44.241 [2024-06-07 16:39:10.855858] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.241 [2024-06-07 16:39:10.855867] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.241 qpair failed and we were unable to recover it. 00:30:44.241 [2024-06-07 16:39:10.856268] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.241 [2024-06-07 16:39:10.856275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.241 qpair failed and we were unable to recover it. 00:30:44.241 [2024-06-07 16:39:10.856637] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.241 [2024-06-07 16:39:10.856646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.241 qpair failed and we were unable to recover it. 00:30:44.241 [2024-06-07 16:39:10.857029] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.241 [2024-06-07 16:39:10.857037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.241 qpair failed and we were unable to recover it. 
00:30:44.241 [2024-06-07 16:39:10.857383] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.241 [2024-06-07 16:39:10.857391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.241 qpair failed and we were unable to recover it. 00:30:44.241 [2024-06-07 16:39:10.857743] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.241 [2024-06-07 16:39:10.857751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.241 qpair failed and we were unable to recover it. 00:30:44.241 [2024-06-07 16:39:10.858141] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.241 [2024-06-07 16:39:10.858150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.241 qpair failed and we were unable to recover it. 00:30:44.241 [2024-06-07 16:39:10.858564] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.241 [2024-06-07 16:39:10.858573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.241 qpair failed and we were unable to recover it. 00:30:44.241 [2024-06-07 16:39:10.858959] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.241 [2024-06-07 16:39:10.858967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.241 qpair failed and we were unable to recover it. 
00:30:44.241 [2024-06-07 16:39:10.859328] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.241 [2024-06-07 16:39:10.859337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.241 qpair failed and we were unable to recover it. 00:30:44.241 [2024-06-07 16:39:10.859578] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.241 [2024-06-07 16:39:10.859587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.241 qpair failed and we were unable to recover it. 00:30:44.241 [2024-06-07 16:39:10.859968] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.241 [2024-06-07 16:39:10.859977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.241 qpair failed and we were unable to recover it. 00:30:44.241 [2024-06-07 16:39:10.860348] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.241 [2024-06-07 16:39:10.860357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.241 qpair failed and we were unable to recover it. 00:30:44.241 [2024-06-07 16:39:10.860713] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.241 [2024-06-07 16:39:10.860722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.241 qpair failed and we were unable to recover it. 
00:30:44.241 [2024-06-07 16:39:10.861087] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.241 [2024-06-07 16:39:10.861097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.241 qpair failed and we were unable to recover it. 00:30:44.241 [2024-06-07 16:39:10.861499] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.241 [2024-06-07 16:39:10.861507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.241 qpair failed and we were unable to recover it. 00:30:44.241 [2024-06-07 16:39:10.861910] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.241 [2024-06-07 16:39:10.861918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.241 qpair failed and we were unable to recover it. 00:30:44.241 [2024-06-07 16:39:10.862280] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.241 [2024-06-07 16:39:10.862288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.241 qpair failed and we were unable to recover it. 00:30:44.241 [2024-06-07 16:39:10.862558] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.241 [2024-06-07 16:39:10.862566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.241 qpair failed and we were unable to recover it. 
00:30:44.241 [2024-06-07 16:39:10.862915] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.241 [2024-06-07 16:39:10.862923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.241 qpair failed and we were unable to recover it. 00:30:44.241 [2024-06-07 16:39:10.863241] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.241 [2024-06-07 16:39:10.863249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.241 qpair failed and we were unable to recover it. 00:30:44.241 [2024-06-07 16:39:10.863557] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.241 [2024-06-07 16:39:10.863565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.241 qpair failed and we were unable to recover it. 00:30:44.241 [2024-06-07 16:39:10.863784] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.241 [2024-06-07 16:39:10.863791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.241 qpair failed and we were unable to recover it. 00:30:44.241 [2024-06-07 16:39:10.864158] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.241 [2024-06-07 16:39:10.864166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.241 qpair failed and we were unable to recover it. 
00:30:44.241 [2024-06-07 16:39:10.864553] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.241 [2024-06-07 16:39:10.864561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.241 qpair failed and we were unable to recover it. 00:30:44.241 [2024-06-07 16:39:10.864891] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.241 [2024-06-07 16:39:10.864900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.241 qpair failed and we were unable to recover it. 00:30:44.241 [2024-06-07 16:39:10.865310] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.241 [2024-06-07 16:39:10.865322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.241 qpair failed and we were unable to recover it. 00:30:44.241 [2024-06-07 16:39:10.865582] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.241 [2024-06-07 16:39:10.865591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.241 qpair failed and we were unable to recover it. 00:30:44.241 [2024-06-07 16:39:10.865997] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.241 [2024-06-07 16:39:10.866005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.241 qpair failed and we were unable to recover it. 
00:30:44.241 [2024-06-07 16:39:10.866356] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.241 [2024-06-07 16:39:10.866364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.241 qpair failed and we were unable to recover it. 00:30:44.241 [2024-06-07 16:39:10.866714] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.241 [2024-06-07 16:39:10.866724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.241 qpair failed and we were unable to recover it. 00:30:44.241 [2024-06-07 16:39:10.867089] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.241 [2024-06-07 16:39:10.867096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.241 qpair failed and we were unable to recover it. 00:30:44.241 [2024-06-07 16:39:10.867373] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.241 [2024-06-07 16:39:10.867381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.241 qpair failed and we were unable to recover it. 00:30:44.241 [2024-06-07 16:39:10.867736] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.242 [2024-06-07 16:39:10.867745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.242 qpair failed and we were unable to recover it. 
00:30:44.242 [2024-06-07 16:39:10.868109] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.242 [2024-06-07 16:39:10.868117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.242 qpair failed and we were unable to recover it. 00:30:44.242 [2024-06-07 16:39:10.868483] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.242 [2024-06-07 16:39:10.868491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.242 qpair failed and we were unable to recover it. 00:30:44.242 [2024-06-07 16:39:10.868901] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.242 [2024-06-07 16:39:10.868910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.242 qpair failed and we were unable to recover it. 00:30:44.242 [2024-06-07 16:39:10.869282] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.242 [2024-06-07 16:39:10.869290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.242 qpair failed and we were unable to recover it. 00:30:44.242 [2024-06-07 16:39:10.869530] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.242 [2024-06-07 16:39:10.869538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.242 qpair failed and we were unable to recover it. 
00:30:44.242 [2024-06-07 16:39:10.869893] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.242 [2024-06-07 16:39:10.869901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.242 qpair failed and we were unable to recover it. 00:30:44.242 [2024-06-07 16:39:10.870157] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.242 [2024-06-07 16:39:10.870168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.242 qpair failed and we were unable to recover it. 00:30:44.242 [2024-06-07 16:39:10.870520] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.242 [2024-06-07 16:39:10.870529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.242 qpair failed and we were unable to recover it. 00:30:44.242 [2024-06-07 16:39:10.870844] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.242 [2024-06-07 16:39:10.870853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.242 qpair failed and we were unable to recover it. 00:30:44.242 [2024-06-07 16:39:10.871209] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.242 [2024-06-07 16:39:10.871218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.242 qpair failed and we were unable to recover it. 
00:30:44.242 [2024-06-07 16:39:10.871602] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.242 [2024-06-07 16:39:10.871610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.242 qpair failed and we were unable to recover it. 00:30:44.242 [2024-06-07 16:39:10.871877] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.242 [2024-06-07 16:39:10.871885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.242 qpair failed and we were unable to recover it. 00:30:44.242 [2024-06-07 16:39:10.872297] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.242 [2024-06-07 16:39:10.872305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.242 qpair failed and we were unable to recover it. 00:30:44.242 [2024-06-07 16:39:10.872708] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.242 [2024-06-07 16:39:10.872716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.242 qpair failed and we were unable to recover it. 00:30:44.242 [2024-06-07 16:39:10.872888] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.242 [2024-06-07 16:39:10.872896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.242 qpair failed and we were unable to recover it. 
00:30:44.242 [2024-06-07 16:39:10.873243] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.242 [2024-06-07 16:39:10.873251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.242 qpair failed and we were unable to recover it.
00:30:44.242 [2024-06-07 16:39:10.873674] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.242 [2024-06-07 16:39:10.873682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.242 qpair failed and we were unable to recover it.
00:30:44.242 [2024-06-07 16:39:10.874094] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.242 [2024-06-07 16:39:10.874101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.242 qpair failed and we were unable to recover it.
00:30:44.242 [2024-06-07 16:39:10.874548] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.242 [2024-06-07 16:39:10.874557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.242 qpair failed and we were unable to recover it.
00:30:44.242 [2024-06-07 16:39:10.874939] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.242 [2024-06-07 16:39:10.874948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.242 qpair failed and we were unable to recover it.
00:30:44.242 [2024-06-07 16:39:10.875349] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.242 [2024-06-07 16:39:10.875358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.242 qpair failed and we were unable to recover it.
00:30:44.242 [2024-06-07 16:39:10.875706] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.242 [2024-06-07 16:39:10.875714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.242 qpair failed and we were unable to recover it.
00:30:44.242 [2024-06-07 16:39:10.876092] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.242 [2024-06-07 16:39:10.876101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.242 qpair failed and we were unable to recover it.
00:30:44.242 [2024-06-07 16:39:10.876464] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.242 [2024-06-07 16:39:10.876472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.242 qpair failed and we were unable to recover it.
00:30:44.242 [2024-06-07 16:39:10.876862] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.242 [2024-06-07 16:39:10.876870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.242 qpair failed and we were unable to recover it.
00:30:44.242 [2024-06-07 16:39:10.877233] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.242 [2024-06-07 16:39:10.877241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.242 qpair failed and we were unable to recover it.
00:30:44.242 [2024-06-07 16:39:10.877603] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.242 [2024-06-07 16:39:10.877611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.242 qpair failed and we were unable to recover it.
00:30:44.242 [2024-06-07 16:39:10.877881] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.242 [2024-06-07 16:39:10.877888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.242 qpair failed and we were unable to recover it.
00:30:44.242 [2024-06-07 16:39:10.878256] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.242 [2024-06-07 16:39:10.878263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.242 qpair failed and we were unable to recover it.
00:30:44.242 [2024-06-07 16:39:10.878750] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.242 [2024-06-07 16:39:10.878758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.242 qpair failed and we were unable to recover it.
00:30:44.242 [2024-06-07 16:39:10.879058] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.242 [2024-06-07 16:39:10.879066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.242 qpair failed and we were unable to recover it.
00:30:44.242 [2024-06-07 16:39:10.879431] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.242 [2024-06-07 16:39:10.879439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.242 qpair failed and we were unable to recover it.
00:30:44.242 [2024-06-07 16:39:10.879865] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.242 [2024-06-07 16:39:10.879873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.242 qpair failed and we were unable to recover it.
00:30:44.242 [2024-06-07 16:39:10.880137] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.242 [2024-06-07 16:39:10.880145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.242 qpair failed and we were unable to recover it.
00:30:44.242 [2024-06-07 16:39:10.880539] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.242 [2024-06-07 16:39:10.880547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.242 qpair failed and we were unable to recover it.
00:30:44.242 [2024-06-07 16:39:10.880923] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.242 [2024-06-07 16:39:10.880931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.242 qpair failed and we were unable to recover it.
00:30:44.242 [2024-06-07 16:39:10.881255] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.242 [2024-06-07 16:39:10.881264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.242 qpair failed and we were unable to recover it.
00:30:44.243 [2024-06-07 16:39:10.881643] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.243 [2024-06-07 16:39:10.881651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.243 qpair failed and we were unable to recover it.
00:30:44.243 [2024-06-07 16:39:10.882006] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.243 [2024-06-07 16:39:10.882014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.243 qpair failed and we were unable to recover it.
00:30:44.243 [2024-06-07 16:39:10.882399] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.243 [2024-06-07 16:39:10.882409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.243 qpair failed and we were unable to recover it.
00:30:44.243 [2024-06-07 16:39:10.882780] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.243 [2024-06-07 16:39:10.882788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.243 qpair failed and we were unable to recover it.
00:30:44.243 [2024-06-07 16:39:10.883156] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.243 [2024-06-07 16:39:10.883163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.243 qpair failed and we were unable to recover it.
00:30:44.243 [2024-06-07 16:39:10.883678] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.243 [2024-06-07 16:39:10.883706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.243 qpair failed and we were unable to recover it.
00:30:44.243 [2024-06-07 16:39:10.884072] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.243 [2024-06-07 16:39:10.884082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.243 qpair failed and we were unable to recover it.
00:30:44.243 [2024-06-07 16:39:10.884460] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.243 [2024-06-07 16:39:10.884469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.243 qpair failed and we were unable to recover it.
00:30:44.243 [2024-06-07 16:39:10.884861] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.243 [2024-06-07 16:39:10.884870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.243 qpair failed and we were unable to recover it.
00:30:44.243 [2024-06-07 16:39:10.885347] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.243 [2024-06-07 16:39:10.885358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.243 qpair failed and we were unable to recover it.
00:30:44.243 [2024-06-07 16:39:10.885659] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.243 [2024-06-07 16:39:10.885668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.243 qpair failed and we were unable to recover it.
00:30:44.243 [2024-06-07 16:39:10.886053] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.243 [2024-06-07 16:39:10.886061] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.243 qpair failed and we were unable to recover it.
00:30:44.243 [2024-06-07 16:39:10.886423] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.243 [2024-06-07 16:39:10.886432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.243 qpair failed and we were unable to recover it.
00:30:44.243 [2024-06-07 16:39:10.886845] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.243 [2024-06-07 16:39:10.886853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.243 qpair failed and we were unable to recover it.
00:30:44.243 [2024-06-07 16:39:10.887092] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.243 [2024-06-07 16:39:10.887100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.243 qpair failed and we were unable to recover it.
00:30:44.243 [2024-06-07 16:39:10.887483] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.243 [2024-06-07 16:39:10.887491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.243 qpair failed and we were unable to recover it.
00:30:44.243 [2024-06-07 16:39:10.887878] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.243 [2024-06-07 16:39:10.887886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.243 qpair failed and we were unable to recover it.
00:30:44.243 [2024-06-07 16:39:10.888286] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.243 [2024-06-07 16:39:10.888294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.243 qpair failed and we were unable to recover it.
00:30:44.243 [2024-06-07 16:39:10.888779] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.243 [2024-06-07 16:39:10.888787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.243 qpair failed and we were unable to recover it.
00:30:44.243 [2024-06-07 16:39:10.889178] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.243 [2024-06-07 16:39:10.889185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.243 qpair failed and we were unable to recover it.
00:30:44.243 [2024-06-07 16:39:10.889545] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.243 [2024-06-07 16:39:10.889553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.243 qpair failed and we were unable to recover it.
00:30:44.243 [2024-06-07 16:39:10.889960] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.243 [2024-06-07 16:39:10.889968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.243 qpair failed and we were unable to recover it.
00:30:44.243 [2024-06-07 16:39:10.890366] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.243 [2024-06-07 16:39:10.890374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.243 qpair failed and we were unable to recover it.
00:30:44.243 [2024-06-07 16:39:10.890537] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.243 [2024-06-07 16:39:10.890548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.243 qpair failed and we were unable to recover it.
00:30:44.243 [2024-06-07 16:39:10.890927] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.243 [2024-06-07 16:39:10.890935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.243 qpair failed and we were unable to recover it.
00:30:44.243 [2024-06-07 16:39:10.891360] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.243 [2024-06-07 16:39:10.891368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.243 qpair failed and we were unable to recover it.
00:30:44.243 [2024-06-07 16:39:10.891546] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.243 [2024-06-07 16:39:10.891555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.243 qpair failed and we were unable to recover it.
00:30:44.243 [2024-06-07 16:39:10.891918] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.243 [2024-06-07 16:39:10.891926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.243 qpair failed and we were unable to recover it.
00:30:44.243 [2024-06-07 16:39:10.892291] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.243 [2024-06-07 16:39:10.892299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.243 qpair failed and we were unable to recover it.
00:30:44.243 [2024-06-07 16:39:10.892493] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.243 [2024-06-07 16:39:10.892502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.243 qpair failed and we were unable to recover it.
00:30:44.243 [2024-06-07 16:39:10.892859] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.243 [2024-06-07 16:39:10.892866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.243 qpair failed and we were unable to recover it.
00:30:44.243 [2024-06-07 16:39:10.893234] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.243 [2024-06-07 16:39:10.893242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.243 qpair failed and we were unable to recover it.
00:30:44.243 [2024-06-07 16:39:10.893639] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.243 [2024-06-07 16:39:10.893647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.243 qpair failed and we were unable to recover it.
00:30:44.243 [2024-06-07 16:39:10.894013] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.243 [2024-06-07 16:39:10.894021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.243 qpair failed and we were unable to recover it.
00:30:44.243 [2024-06-07 16:39:10.894379] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.243 [2024-06-07 16:39:10.894386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.243 qpair failed and we were unable to recover it.
00:30:44.243 [2024-06-07 16:39:10.894681] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.244 [2024-06-07 16:39:10.894689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.244 qpair failed and we were unable to recover it.
00:30:44.244 [2024-06-07 16:39:10.895054] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.244 [2024-06-07 16:39:10.895064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.244 qpair failed and we were unable to recover it.
00:30:44.244 [2024-06-07 16:39:10.895411] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.244 [2024-06-07 16:39:10.895419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.244 qpair failed and we were unable to recover it.
00:30:44.244 [2024-06-07 16:39:10.895705] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.244 [2024-06-07 16:39:10.895713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.244 qpair failed and we were unable to recover it.
00:30:44.244 [2024-06-07 16:39:10.895993] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.244 [2024-06-07 16:39:10.896000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.244 qpair failed and we were unable to recover it.
00:30:44.244 [2024-06-07 16:39:10.896366] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.244 [2024-06-07 16:39:10.896374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.244 qpair failed and we were unable to recover it.
00:30:44.244 [2024-06-07 16:39:10.896739] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.244 [2024-06-07 16:39:10.896748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.244 qpair failed and we were unable to recover it.
00:30:44.244 [2024-06-07 16:39:10.897116] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.244 [2024-06-07 16:39:10.897125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.244 qpair failed and we were unable to recover it.
00:30:44.244 [2024-06-07 16:39:10.897511] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.244 [2024-06-07 16:39:10.897520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.244 qpair failed and we were unable to recover it.
00:30:44.244 [2024-06-07 16:39:10.897906] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.244 [2024-06-07 16:39:10.897915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.244 qpair failed and we were unable to recover it.
00:30:44.244 [2024-06-07 16:39:10.898151] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.244 [2024-06-07 16:39:10.898160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.244 qpair failed and we were unable to recover it.
00:30:44.244 [2024-06-07 16:39:10.898534] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.244 [2024-06-07 16:39:10.898542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.244 qpair failed and we were unable to recover it.
00:30:44.244 [2024-06-07 16:39:10.898924] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.244 [2024-06-07 16:39:10.898932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.244 qpair failed and we were unable to recover it.
00:30:44.244 [2024-06-07 16:39:10.899298] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.244 [2024-06-07 16:39:10.899306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.244 qpair failed and we were unable to recover it.
00:30:44.244 [2024-06-07 16:39:10.899689] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.244 [2024-06-07 16:39:10.899697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.244 qpair failed and we were unable to recover it.
00:30:44.244 [2024-06-07 16:39:10.900064] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.244 [2024-06-07 16:39:10.900072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.244 qpair failed and we were unable to recover it.
00:30:44.244 [2024-06-07 16:39:10.900440] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.244 [2024-06-07 16:39:10.900448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.244 qpair failed and we were unable to recover it.
00:30:44.244 [2024-06-07 16:39:10.900808] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.244 [2024-06-07 16:39:10.900817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.244 qpair failed and we were unable to recover it.
00:30:44.244 [2024-06-07 16:39:10.901212] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.244 [2024-06-07 16:39:10.901220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.244 qpair failed and we were unable to recover it.
00:30:44.244 [2024-06-07 16:39:10.901506] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.244 [2024-06-07 16:39:10.901513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.244 qpair failed and we were unable to recover it.
00:30:44.244 [2024-06-07 16:39:10.901733] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.244 [2024-06-07 16:39:10.901741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.244 qpair failed and we were unable to recover it.
00:30:44.244 [2024-06-07 16:39:10.902114] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.244 [2024-06-07 16:39:10.902121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.244 qpair failed and we were unable to recover it.
00:30:44.244 [2024-06-07 16:39:10.902515] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.244 [2024-06-07 16:39:10.902523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.244 qpair failed and we were unable to recover it.
00:30:44.244 [2024-06-07 16:39:10.902854] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.244 [2024-06-07 16:39:10.902861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.244 qpair failed and we were unable to recover it.
00:30:44.244 [2024-06-07 16:39:10.903230] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.244 [2024-06-07 16:39:10.903238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.244 qpair failed and we were unable to recover it.
00:30:44.244 [2024-06-07 16:39:10.903609] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.244 [2024-06-07 16:39:10.903617] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.244 qpair failed and we were unable to recover it.
00:30:44.244 [2024-06-07 16:39:10.904003] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.244 [2024-06-07 16:39:10.904012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.244 qpair failed and we were unable to recover it.
00:30:44.244 [2024-06-07 16:39:10.904421] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.244 [2024-06-07 16:39:10.904430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.244 qpair failed and we were unable to recover it.
00:30:44.244 [2024-06-07 16:39:10.904687] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.244 [2024-06-07 16:39:10.904694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.244 qpair failed and we were unable to recover it.
00:30:44.244 [2024-06-07 16:39:10.905118] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.244 [2024-06-07 16:39:10.905126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.244 qpair failed and we were unable to recover it.
00:30:44.244 [2024-06-07 16:39:10.905520] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.244 [2024-06-07 16:39:10.905529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.244 qpair failed and we were unable to recover it.
00:30:44.244 [2024-06-07 16:39:10.905919] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.244 [2024-06-07 16:39:10.905927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.245 qpair failed and we were unable to recover it.
00:30:44.245 [2024-06-07 16:39:10.906194] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.245 [2024-06-07 16:39:10.906201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.245 qpair failed and we were unable to recover it.
00:30:44.245 [2024-06-07 16:39:10.906454] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.245 [2024-06-07 16:39:10.906463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.245 qpair failed and we were unable to recover it.
00:30:44.245 [2024-06-07 16:39:10.906866] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.245 [2024-06-07 16:39:10.906873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.245 qpair failed and we were unable to recover it.
00:30:44.245 [2024-06-07 16:39:10.907239] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.245 [2024-06-07 16:39:10.907247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.245 qpair failed and we were unable to recover it.
00:30:44.245 [2024-06-07 16:39:10.907540] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.245 [2024-06-07 16:39:10.907548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.245 qpair failed and we were unable to recover it.
00:30:44.245 [2024-06-07 16:39:10.907939] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.245 [2024-06-07 16:39:10.907946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.245 qpair failed and we were unable to recover it.
00:30:44.245 [2024-06-07 16:39:10.908326] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.245 [2024-06-07 16:39:10.908334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.245 qpair failed and we were unable to recover it. 00:30:44.245 [2024-06-07 16:39:10.908721] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.245 [2024-06-07 16:39:10.908730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.245 qpair failed and we were unable to recover it. 00:30:44.245 [2024-06-07 16:39:10.909099] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.245 [2024-06-07 16:39:10.909108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.245 qpair failed and we were unable to recover it. 00:30:44.245 [2024-06-07 16:39:10.909478] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.245 [2024-06-07 16:39:10.909492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.245 qpair failed and we were unable to recover it. 00:30:44.245 [2024-06-07 16:39:10.909875] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.245 [2024-06-07 16:39:10.909882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.245 qpair failed and we were unable to recover it. 
00:30:44.245 [2024-06-07 16:39:10.910249] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.245 [2024-06-07 16:39:10.910257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.245 qpair failed and we were unable to recover it. 00:30:44.245 [2024-06-07 16:39:10.910490] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.245 [2024-06-07 16:39:10.910499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.245 qpair failed and we were unable to recover it. 00:30:44.245 [2024-06-07 16:39:10.910759] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.245 [2024-06-07 16:39:10.910767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.245 qpair failed and we were unable to recover it. 00:30:44.245 [2024-06-07 16:39:10.911120] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.245 [2024-06-07 16:39:10.911129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.245 qpair failed and we were unable to recover it. 00:30:44.245 [2024-06-07 16:39:10.911493] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.245 [2024-06-07 16:39:10.911501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.245 qpair failed and we were unable to recover it. 
00:30:44.245 [2024-06-07 16:39:10.911891] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.245 [2024-06-07 16:39:10.911899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.245 qpair failed and we were unable to recover it. 00:30:44.245 [2024-06-07 16:39:10.912247] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.245 [2024-06-07 16:39:10.912256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.245 qpair failed and we were unable to recover it. 00:30:44.245 [2024-06-07 16:39:10.912627] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.245 [2024-06-07 16:39:10.912635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.245 qpair failed and we were unable to recover it. 00:30:44.245 [2024-06-07 16:39:10.913003] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.245 [2024-06-07 16:39:10.913010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.245 qpair failed and we were unable to recover it. 00:30:44.245 [2024-06-07 16:39:10.913382] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.245 [2024-06-07 16:39:10.913390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.245 qpair failed and we were unable to recover it. 
00:30:44.245 [2024-06-07 16:39:10.913765] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.245 [2024-06-07 16:39:10.913773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.245 qpair failed and we were unable to recover it. 00:30:44.245 [2024-06-07 16:39:10.914161] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.245 [2024-06-07 16:39:10.914168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.245 qpair failed and we were unable to recover it. 00:30:44.245 [2024-06-07 16:39:10.914416] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.245 [2024-06-07 16:39:10.914424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.245 qpair failed and we were unable to recover it. 00:30:44.245 [2024-06-07 16:39:10.914717] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.245 [2024-06-07 16:39:10.914724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.245 qpair failed and we were unable to recover it. 00:30:44.245 [2024-06-07 16:39:10.915095] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.245 [2024-06-07 16:39:10.915103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.245 qpair failed and we were unable to recover it. 
00:30:44.245 [2024-06-07 16:39:10.915481] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.245 [2024-06-07 16:39:10.915490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.245 qpair failed and we were unable to recover it. 00:30:44.245 [2024-06-07 16:39:10.915902] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.245 [2024-06-07 16:39:10.915909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.245 qpair failed and we were unable to recover it. 00:30:44.245 [2024-06-07 16:39:10.916133] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.245 [2024-06-07 16:39:10.916140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.245 qpair failed and we were unable to recover it. 00:30:44.245 [2024-06-07 16:39:10.916546] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.245 [2024-06-07 16:39:10.916555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.245 qpair failed and we were unable to recover it. 00:30:44.245 [2024-06-07 16:39:10.916943] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.245 [2024-06-07 16:39:10.916951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.245 qpair failed and we were unable to recover it. 
00:30:44.245 [2024-06-07 16:39:10.917232] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.245 [2024-06-07 16:39:10.917239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.245 qpair failed and we were unable to recover it. 00:30:44.245 [2024-06-07 16:39:10.917630] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.245 [2024-06-07 16:39:10.917638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.245 qpair failed and we were unable to recover it. 00:30:44.245 [2024-06-07 16:39:10.918007] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.245 [2024-06-07 16:39:10.918015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.245 qpair failed and we were unable to recover it. 00:30:44.245 [2024-06-07 16:39:10.918245] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.245 [2024-06-07 16:39:10.918254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.245 qpair failed and we were unable to recover it. 00:30:44.245 [2024-06-07 16:39:10.918543] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.245 [2024-06-07 16:39:10.918551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.245 qpair failed and we were unable to recover it. 
00:30:44.245 [2024-06-07 16:39:10.918881] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.245 [2024-06-07 16:39:10.918890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.245 qpair failed and we were unable to recover it. 00:30:44.245 [2024-06-07 16:39:10.919260] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.246 [2024-06-07 16:39:10.919268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.246 qpair failed and we were unable to recover it. 00:30:44.246 [2024-06-07 16:39:10.919549] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.246 [2024-06-07 16:39:10.919557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.246 qpair failed and we were unable to recover it. 00:30:44.246 [2024-06-07 16:39:10.919923] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.246 [2024-06-07 16:39:10.919930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.246 qpair failed and we were unable to recover it. 00:30:44.246 [2024-06-07 16:39:10.920303] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.246 [2024-06-07 16:39:10.920311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.246 qpair failed and we were unable to recover it. 
00:30:44.246 [2024-06-07 16:39:10.920494] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.246 [2024-06-07 16:39:10.920502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.246 qpair failed and we were unable to recover it. 00:30:44.246 [2024-06-07 16:39:10.920886] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.246 [2024-06-07 16:39:10.920894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.246 qpair failed and we were unable to recover it. 00:30:44.246 [2024-06-07 16:39:10.921263] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.246 [2024-06-07 16:39:10.921271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.246 qpair failed and we were unable to recover it. 00:30:44.246 [2024-06-07 16:39:10.921651] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.246 [2024-06-07 16:39:10.921660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.246 qpair failed and we were unable to recover it. 00:30:44.246 [2024-06-07 16:39:10.921859] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.246 [2024-06-07 16:39:10.921868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.246 qpair failed and we were unable to recover it. 
00:30:44.246 [2024-06-07 16:39:10.922212] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.246 [2024-06-07 16:39:10.922220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.246 qpair failed and we were unable to recover it. 00:30:44.246 [2024-06-07 16:39:10.922587] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.246 [2024-06-07 16:39:10.922595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.246 qpair failed and we were unable to recover it. 00:30:44.246 [2024-06-07 16:39:10.922888] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.246 [2024-06-07 16:39:10.922897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.246 qpair failed and we were unable to recover it. 00:30:44.246 [2024-06-07 16:39:10.923275] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.246 [2024-06-07 16:39:10.923284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.246 qpair failed and we were unable to recover it. 00:30:44.246 [2024-06-07 16:39:10.923644] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.246 [2024-06-07 16:39:10.923652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.246 qpair failed and we were unable to recover it. 
00:30:44.246 [2024-06-07 16:39:10.924017] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.246 [2024-06-07 16:39:10.924024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.246 qpair failed and we were unable to recover it. 00:30:44.246 [2024-06-07 16:39:10.924254] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.246 [2024-06-07 16:39:10.924261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.246 qpair failed and we were unable to recover it. 00:30:44.246 [2024-06-07 16:39:10.924718] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.246 [2024-06-07 16:39:10.924726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.246 qpair failed and we were unable to recover it. 00:30:44.246 [2024-06-07 16:39:10.925106] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.246 [2024-06-07 16:39:10.925114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.246 qpair failed and we were unable to recover it. 00:30:44.246 [2024-06-07 16:39:10.925646] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.246 [2024-06-07 16:39:10.925674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.246 qpair failed and we were unable to recover it. 
00:30:44.246 [2024-06-07 16:39:10.925969] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.246 [2024-06-07 16:39:10.925979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.246 qpair failed and we were unable to recover it. 00:30:44.246 [2024-06-07 16:39:10.926080] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.246 [2024-06-07 16:39:10.926087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.246 qpair failed and we were unable to recover it. 00:30:44.246 [2024-06-07 16:39:10.926472] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.246 [2024-06-07 16:39:10.926480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.246 qpair failed and we were unable to recover it. 00:30:44.246 [2024-06-07 16:39:10.926832] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.246 [2024-06-07 16:39:10.926840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.246 qpair failed and we were unable to recover it. 00:30:44.246 [2024-06-07 16:39:10.927210] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.246 [2024-06-07 16:39:10.927218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.246 qpair failed and we were unable to recover it. 
00:30:44.246 [2024-06-07 16:39:10.927588] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.246 [2024-06-07 16:39:10.927597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.246 qpair failed and we were unable to recover it. 00:30:44.246 [2024-06-07 16:39:10.927984] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.246 [2024-06-07 16:39:10.927992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.246 qpair failed and we were unable to recover it. 00:30:44.246 [2024-06-07 16:39:10.928363] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.246 [2024-06-07 16:39:10.928371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.246 qpair failed and we were unable to recover it. 00:30:44.246 [2024-06-07 16:39:10.928608] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.246 [2024-06-07 16:39:10.928616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.246 qpair failed and we were unable to recover it. 00:30:44.246 [2024-06-07 16:39:10.928852] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.246 [2024-06-07 16:39:10.928860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.246 qpair failed and we were unable to recover it. 
00:30:44.246 [2024-06-07 16:39:10.929252] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.246 [2024-06-07 16:39:10.929260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.246 qpair failed and we were unable to recover it. 00:30:44.246 [2024-06-07 16:39:10.929628] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.246 [2024-06-07 16:39:10.929635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.246 qpair failed and we were unable to recover it. 00:30:44.246 [2024-06-07 16:39:10.930012] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.246 [2024-06-07 16:39:10.930020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.246 qpair failed and we were unable to recover it. 00:30:44.246 [2024-06-07 16:39:10.930379] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.246 [2024-06-07 16:39:10.930387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.246 qpair failed and we were unable to recover it. 00:30:44.246 [2024-06-07 16:39:10.930758] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.246 [2024-06-07 16:39:10.930766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.246 qpair failed and we were unable to recover it. 
00:30:44.246 [2024-06-07 16:39:10.931152] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.246 [2024-06-07 16:39:10.931160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.246 qpair failed and we were unable to recover it. 00:30:44.246 [2024-06-07 16:39:10.931520] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.246 [2024-06-07 16:39:10.931528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.246 qpair failed and we were unable to recover it. 00:30:44.246 [2024-06-07 16:39:10.931789] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.246 [2024-06-07 16:39:10.931796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.246 qpair failed and we were unable to recover it. 00:30:44.246 [2024-06-07 16:39:10.932195] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.246 [2024-06-07 16:39:10.932203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.246 qpair failed and we were unable to recover it. 00:30:44.246 [2024-06-07 16:39:10.932468] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.246 [2024-06-07 16:39:10.932476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.247 qpair failed and we were unable to recover it. 
00:30:44.247 [2024-06-07 16:39:10.932860] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.247 [2024-06-07 16:39:10.932868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.247 qpair failed and we were unable to recover it. 00:30:44.247 [2024-06-07 16:39:10.933096] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.247 [2024-06-07 16:39:10.933103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.247 qpair failed and we were unable to recover it. 00:30:44.247 [2024-06-07 16:39:10.933506] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.247 [2024-06-07 16:39:10.933515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.247 qpair failed and we were unable to recover it. 00:30:44.247 [2024-06-07 16:39:10.933887] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.247 [2024-06-07 16:39:10.933896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.247 qpair failed and we were unable to recover it. 00:30:44.247 [2024-06-07 16:39:10.934284] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.247 [2024-06-07 16:39:10.934291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.247 qpair failed and we were unable to recover it. 
00:30:44.247 [2024-06-07 16:39:10.934686] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.247 [2024-06-07 16:39:10.934694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.247 qpair failed and we were unable to recover it. 00:30:44.247 [2024-06-07 16:39:10.935083] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.247 [2024-06-07 16:39:10.935091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.247 qpair failed and we were unable to recover it. 00:30:44.247 [2024-06-07 16:39:10.935332] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.247 [2024-06-07 16:39:10.935339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.247 qpair failed and we were unable to recover it. 00:30:44.247 [2024-06-07 16:39:10.935716] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.247 [2024-06-07 16:39:10.935724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.247 qpair failed and we were unable to recover it. 00:30:44.247 [2024-06-07 16:39:10.935984] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.247 [2024-06-07 16:39:10.935991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.247 qpair failed and we were unable to recover it. 
00:30:44.247 [2024-06-07 16:39:10.936376] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.247 [2024-06-07 16:39:10.936384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.247 qpair failed and we were unable to recover it. 00:30:44.247 [2024-06-07 16:39:10.936609] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.247 [2024-06-07 16:39:10.936616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.247 qpair failed and we were unable to recover it. 00:30:44.247 [2024-06-07 16:39:10.936961] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.247 [2024-06-07 16:39:10.936969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.247 qpair failed and we were unable to recover it. 00:30:44.247 [2024-06-07 16:39:10.937335] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.247 [2024-06-07 16:39:10.937345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.247 qpair failed and we were unable to recover it. 00:30:44.247 [2024-06-07 16:39:10.937721] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.247 [2024-06-07 16:39:10.937729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.247 qpair failed and we were unable to recover it. 
00:30:44.247 [2024-06-07 16:39:10.937960] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.247 [2024-06-07 16:39:10.937968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.247 qpair failed and we were unable to recover it. 00:30:44.247 [2024-06-07 16:39:10.938241] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.247 [2024-06-07 16:39:10.938248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.247 qpair failed and we were unable to recover it. 00:30:44.247 [2024-06-07 16:39:10.938630] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.247 [2024-06-07 16:39:10.938638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.247 qpair failed and we were unable to recover it. 00:30:44.247 [2024-06-07 16:39:10.939066] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.247 [2024-06-07 16:39:10.939073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.247 qpair failed and we were unable to recover it. 00:30:44.247 [2024-06-07 16:39:10.939436] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.247 [2024-06-07 16:39:10.939445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.247 qpair failed and we were unable to recover it. 
00:30:44.250 [2024-06-07 16:39:10.978128] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.250 [2024-06-07 16:39:10.978135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.250 qpair failed and we were unable to recover it. 00:30:44.250 [2024-06-07 16:39:10.978451] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.250 [2024-06-07 16:39:10.978459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.250 qpair failed and we were unable to recover it. 00:30:44.250 [2024-06-07 16:39:10.978815] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.250 [2024-06-07 16:39:10.978824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.250 qpair failed and we were unable to recover it. 00:30:44.250 [2024-06-07 16:39:10.979232] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.250 [2024-06-07 16:39:10.979243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.250 qpair failed and we were unable to recover it. 00:30:44.250 [2024-06-07 16:39:10.979600] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.250 [2024-06-07 16:39:10.979609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.250 qpair failed and we were unable to recover it. 
00:30:44.250 [2024-06-07 16:39:10.980015] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.250 [2024-06-07 16:39:10.980022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.250 qpair failed and we were unable to recover it. 00:30:44.250 [2024-06-07 16:39:10.980409] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.250 [2024-06-07 16:39:10.980418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.250 qpair failed and we were unable to recover it. 00:30:44.250 [2024-06-07 16:39:10.980782] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.250 [2024-06-07 16:39:10.980790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.250 qpair failed and we were unable to recover it. 00:30:44.250 [2024-06-07 16:39:10.981166] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.250 [2024-06-07 16:39:10.981175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.250 qpair failed and we were unable to recover it. 00:30:44.250 [2024-06-07 16:39:10.981527] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.250 [2024-06-07 16:39:10.981535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.250 qpair failed and we were unable to recover it. 
00:30:44.250 [2024-06-07 16:39:10.981928] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.250 [2024-06-07 16:39:10.981935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.250 qpair failed and we were unable to recover it. 00:30:44.250 [2024-06-07 16:39:10.982293] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.250 [2024-06-07 16:39:10.982301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.250 qpair failed and we were unable to recover it. 00:30:44.250 [2024-06-07 16:39:10.982668] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.250 [2024-06-07 16:39:10.982676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.250 qpair failed and we were unable to recover it. 00:30:44.250 [2024-06-07 16:39:10.983044] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.250 [2024-06-07 16:39:10.983051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.250 qpair failed and we were unable to recover it. 00:30:44.250 [2024-06-07 16:39:10.983435] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.250 [2024-06-07 16:39:10.983443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.250 qpair failed and we were unable to recover it. 
00:30:44.250 [2024-06-07 16:39:10.983805] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.251 [2024-06-07 16:39:10.983813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.251 qpair failed and we were unable to recover it. 00:30:44.251 [2024-06-07 16:39:10.984174] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.251 [2024-06-07 16:39:10.984183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.251 qpair failed and we were unable to recover it. 00:30:44.251 [2024-06-07 16:39:10.984492] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.251 [2024-06-07 16:39:10.984500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.251 qpair failed and we were unable to recover it. 00:30:44.251 [2024-06-07 16:39:10.984730] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.251 [2024-06-07 16:39:10.984737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.251 qpair failed and we were unable to recover it. 00:30:44.251 [2024-06-07 16:39:10.985125] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.251 [2024-06-07 16:39:10.985133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.251 qpair failed and we were unable to recover it. 
00:30:44.251 [2024-06-07 16:39:10.985450] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.251 [2024-06-07 16:39:10.985457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.251 qpair failed and we were unable to recover it. 00:30:44.251 [2024-06-07 16:39:10.985841] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.251 [2024-06-07 16:39:10.985849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.251 qpair failed and we were unable to recover it. 00:30:44.251 [2024-06-07 16:39:10.986243] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.251 [2024-06-07 16:39:10.986251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.251 qpair failed and we were unable to recover it. 00:30:44.251 [2024-06-07 16:39:10.986692] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.251 [2024-06-07 16:39:10.986700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.251 qpair failed and we were unable to recover it. 00:30:44.251 [2024-06-07 16:39:10.987065] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.251 [2024-06-07 16:39:10.987074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.251 qpair failed and we were unable to recover it. 
00:30:44.251 [2024-06-07 16:39:10.987438] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.251 [2024-06-07 16:39:10.987446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.251 qpair failed and we were unable to recover it. 00:30:44.251 [2024-06-07 16:39:10.987630] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.251 [2024-06-07 16:39:10.987639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.251 qpair failed and we were unable to recover it. 00:30:44.251 [2024-06-07 16:39:10.988000] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.251 [2024-06-07 16:39:10.988008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.251 qpair failed and we were unable to recover it. 00:30:44.251 [2024-06-07 16:39:10.988236] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.251 [2024-06-07 16:39:10.988243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.251 qpair failed and we were unable to recover it. 00:30:44.251 [2024-06-07 16:39:10.988610] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.251 [2024-06-07 16:39:10.988618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.251 qpair failed and we were unable to recover it. 
00:30:44.251 [2024-06-07 16:39:10.988965] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.251 [2024-06-07 16:39:10.988974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.251 qpair failed and we were unable to recover it. 00:30:44.251 [2024-06-07 16:39:10.989202] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.251 [2024-06-07 16:39:10.989210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.251 qpair failed and we were unable to recover it. 00:30:44.251 [2024-06-07 16:39:10.989473] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.251 [2024-06-07 16:39:10.989481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.251 qpair failed and we were unable to recover it. 00:30:44.251 [2024-06-07 16:39:10.989844] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.251 [2024-06-07 16:39:10.989852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.251 qpair failed and we were unable to recover it. 00:30:44.251 [2024-06-07 16:39:10.990234] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.251 [2024-06-07 16:39:10.990241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.251 qpair failed and we were unable to recover it. 
00:30:44.251 [2024-06-07 16:39:10.990606] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.251 [2024-06-07 16:39:10.990614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.251 qpair failed and we were unable to recover it. 00:30:44.251 [2024-06-07 16:39:10.990980] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.251 [2024-06-07 16:39:10.990988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.251 qpair failed and we were unable to recover it. 00:30:44.251 [2024-06-07 16:39:10.991426] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.251 [2024-06-07 16:39:10.991434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.251 qpair failed and we were unable to recover it. 00:30:44.251 [2024-06-07 16:39:10.991661] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.251 [2024-06-07 16:39:10.991669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.251 qpair failed and we were unable to recover it. 00:30:44.251 [2024-06-07 16:39:10.992035] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.251 [2024-06-07 16:39:10.992043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.251 qpair failed and we were unable to recover it. 
00:30:44.251 [2024-06-07 16:39:10.992297] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.251 [2024-06-07 16:39:10.992305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.251 qpair failed and we were unable to recover it. 00:30:44.251 [2024-06-07 16:39:10.992683] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.251 [2024-06-07 16:39:10.992691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.251 qpair failed and we were unable to recover it. 00:30:44.251 [2024-06-07 16:39:10.993075] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.251 [2024-06-07 16:39:10.993082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.251 qpair failed and we were unable to recover it. 00:30:44.251 [2024-06-07 16:39:10.993446] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.251 [2024-06-07 16:39:10.993456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.251 qpair failed and we were unable to recover it. 00:30:44.251 [2024-06-07 16:39:10.993821] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.251 [2024-06-07 16:39:10.993828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.251 qpair failed and we were unable to recover it. 
00:30:44.251 [2024-06-07 16:39:10.994195] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.251 [2024-06-07 16:39:10.994204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.251 qpair failed and we were unable to recover it. 00:30:44.251 [2024-06-07 16:39:10.994394] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.251 [2024-06-07 16:39:10.994405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.251 qpair failed and we were unable to recover it. 00:30:44.251 [2024-06-07 16:39:10.994745] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.251 [2024-06-07 16:39:10.994752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.251 qpair failed and we were unable to recover it. 00:30:44.251 [2024-06-07 16:39:10.995008] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.251 [2024-06-07 16:39:10.995015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.251 qpair failed and we were unable to recover it. 00:30:44.251 [2024-06-07 16:39:10.995385] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.251 [2024-06-07 16:39:10.995393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.251 qpair failed and we were unable to recover it. 
00:30:44.252 [2024-06-07 16:39:10.995778] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.252 [2024-06-07 16:39:10.995786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.252 qpair failed and we were unable to recover it. 00:30:44.252 [2024-06-07 16:39:10.996096] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.252 [2024-06-07 16:39:10.996103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.252 qpair failed and we were unable to recover it. 00:30:44.252 [2024-06-07 16:39:10.996427] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.252 [2024-06-07 16:39:10.996435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.252 qpair failed and we were unable to recover it. 00:30:44.252 [2024-06-07 16:39:10.996615] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.252 [2024-06-07 16:39:10.996623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.252 qpair failed and we were unable to recover it. 00:30:44.252 [2024-06-07 16:39:10.996960] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.252 [2024-06-07 16:39:10.996967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.252 qpair failed and we were unable to recover it. 
00:30:44.252 [2024-06-07 16:39:10.997330] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.252 [2024-06-07 16:39:10.997337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.252 qpair failed and we were unable to recover it. 00:30:44.252 [2024-06-07 16:39:10.997703] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.252 [2024-06-07 16:39:10.997712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.252 qpair failed and we were unable to recover it. 00:30:44.252 [2024-06-07 16:39:10.998058] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.252 [2024-06-07 16:39:10.998066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.252 qpair failed and we were unable to recover it. 00:30:44.252 [2024-06-07 16:39:10.998324] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.252 [2024-06-07 16:39:10.998331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.252 qpair failed and we were unable to recover it. 00:30:44.252 [2024-06-07 16:39:10.998670] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.252 [2024-06-07 16:39:10.998679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.252 qpair failed and we were unable to recover it. 
00:30:44.252 [2024-06-07 16:39:10.999043] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.252 [2024-06-07 16:39:10.999051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.252 qpair failed and we were unable to recover it. 00:30:44.252 [2024-06-07 16:39:10.999446] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.252 [2024-06-07 16:39:10.999454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.252 qpair failed and we were unable to recover it. 00:30:44.252 [2024-06-07 16:39:10.999834] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.252 [2024-06-07 16:39:10.999842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.252 qpair failed and we were unable to recover it. 00:30:44.252 [2024-06-07 16:39:11.000278] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.252 [2024-06-07 16:39:11.000286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.252 qpair failed and we were unable to recover it. 00:30:44.252 [2024-06-07 16:39:11.000460] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.252 [2024-06-07 16:39:11.000468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.252 qpair failed and we were unable to recover it. 
00:30:44.252 [2024-06-07 16:39:11.000807] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.252 [2024-06-07 16:39:11.000815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.252 qpair failed and we were unable to recover it. 00:30:44.252 [2024-06-07 16:39:11.001007] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.252 [2024-06-07 16:39:11.001015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.252 qpair failed and we were unable to recover it. 00:30:44.252 [2024-06-07 16:39:11.001356] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.252 [2024-06-07 16:39:11.001365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.252 qpair failed and we were unable to recover it. 00:30:44.252 [2024-06-07 16:39:11.001596] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.252 [2024-06-07 16:39:11.001605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.252 qpair failed and we were unable to recover it. 00:30:44.252 [2024-06-07 16:39:11.001959] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.252 [2024-06-07 16:39:11.001967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.252 qpair failed and we were unable to recover it. 
00:30:44.252 [2024-06-07 16:39:11.002221] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.252 [2024-06-07 16:39:11.002230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.252 qpair failed and we were unable to recover it. 00:30:44.252 [2024-06-07 16:39:11.002503] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.252 [2024-06-07 16:39:11.002510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.252 qpair failed and we were unable to recover it. 00:30:44.252 [2024-06-07 16:39:11.002876] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.252 [2024-06-07 16:39:11.002884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.252 qpair failed and we were unable to recover it. 00:30:44.252 [2024-06-07 16:39:11.003257] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.252 [2024-06-07 16:39:11.003265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.252 qpair failed and we were unable to recover it. 00:30:44.252 [2024-06-07 16:39:11.003616] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.252 [2024-06-07 16:39:11.003625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.252 qpair failed and we were unable to recover it. 
00:30:44.252 [2024-06-07 16:39:11.003994] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.252 [2024-06-07 16:39:11.004002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.252 qpair failed and we were unable to recover it. 00:30:44.252 [2024-06-07 16:39:11.004366] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.252 [2024-06-07 16:39:11.004374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.252 qpair failed and we were unable to recover it. 00:30:44.252 [2024-06-07 16:39:11.004724] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.252 [2024-06-07 16:39:11.004732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.252 qpair failed and we were unable to recover it. 00:30:44.252 [2024-06-07 16:39:11.005084] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.252 [2024-06-07 16:39:11.005091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.252 qpair failed and we were unable to recover it. 00:30:44.252 [2024-06-07 16:39:11.005319] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.252 [2024-06-07 16:39:11.005326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.252 qpair failed and we were unable to recover it. 
00:30:44.252 [2024-06-07 16:39:11.005693] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.252 [2024-06-07 16:39:11.005701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.252 qpair failed and we were unable to recover it. 00:30:44.252 [2024-06-07 16:39:11.006112] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.252 [2024-06-07 16:39:11.006119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.252 qpair failed and we were unable to recover it. 00:30:44.252 [2024-06-07 16:39:11.006502] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.252 [2024-06-07 16:39:11.006510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.252 qpair failed and we were unable to recover it. 00:30:44.252 [2024-06-07 16:39:11.006773] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.252 [2024-06-07 16:39:11.006783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.252 qpair failed and we were unable to recover it. 00:30:44.252 [2024-06-07 16:39:11.007055] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.252 [2024-06-07 16:39:11.007063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.252 qpair failed and we were unable to recover it. 
00:30:44.252 [2024-06-07 16:39:11.007429] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.252 [2024-06-07 16:39:11.007438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.252 qpair failed and we were unable to recover it. 00:30:44.252 [2024-06-07 16:39:11.007793] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.252 [2024-06-07 16:39:11.007801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.252 qpair failed and we were unable to recover it. 00:30:44.252 [2024-06-07 16:39:11.008164] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.252 [2024-06-07 16:39:11.008173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.252 qpair failed and we were unable to recover it. 00:30:44.253 [2024-06-07 16:39:11.008534] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.253 [2024-06-07 16:39:11.008542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.253 qpair failed and we were unable to recover it. 00:30:44.253 [2024-06-07 16:39:11.008790] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.253 [2024-06-07 16:39:11.008797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.253 qpair failed and we were unable to recover it. 
00:30:44.253 [2024-06-07 16:39:11.009184] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.253 [2024-06-07 16:39:11.009192] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.253 qpair failed and we were unable to recover it. 00:30:44.253 [2024-06-07 16:39:11.009553] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.253 [2024-06-07 16:39:11.009562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.253 qpair failed and we were unable to recover it. 00:30:44.253 [2024-06-07 16:39:11.009931] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.253 [2024-06-07 16:39:11.009939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.253 qpair failed and we were unable to recover it. 00:30:44.253 [2024-06-07 16:39:11.010198] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.253 [2024-06-07 16:39:11.010205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.253 qpair failed and we were unable to recover it. 00:30:44.253 [2024-06-07 16:39:11.010520] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.253 [2024-06-07 16:39:11.010527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.253 qpair failed and we were unable to recover it. 
00:30:44.253 [2024-06-07 16:39:11.010914] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.253 [2024-06-07 16:39:11.010922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.253 qpair failed and we were unable to recover it. 00:30:44.253 [2024-06-07 16:39:11.011286] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.253 [2024-06-07 16:39:11.011294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.253 qpair failed and we were unable to recover it. 00:30:44.253 [2024-06-07 16:39:11.011688] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.253 [2024-06-07 16:39:11.011696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.253 qpair failed and we were unable to recover it. 00:30:44.253 [2024-06-07 16:39:11.011947] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.253 [2024-06-07 16:39:11.011954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.253 qpair failed and we were unable to recover it. 00:30:44.253 [2024-06-07 16:39:11.012317] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.253 [2024-06-07 16:39:11.012325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.253 qpair failed and we were unable to recover it. 
00:30:44.253 [2024-06-07 16:39:11.012677] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.253 [2024-06-07 16:39:11.012685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.253 qpair failed and we were unable to recover it. 00:30:44.253 [2024-06-07 16:39:11.012944] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.253 [2024-06-07 16:39:11.012951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.253 qpair failed and we were unable to recover it. 00:30:44.253 [2024-06-07 16:39:11.013304] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.253 [2024-06-07 16:39:11.013311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.253 qpair failed and we were unable to recover it. 00:30:44.253 [2024-06-07 16:39:11.013733] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.253 [2024-06-07 16:39:11.013741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.253 qpair failed and we were unable to recover it. 00:30:44.253 [2024-06-07 16:39:11.014096] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.253 [2024-06-07 16:39:11.014105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.253 qpair failed and we were unable to recover it. 
00:30:44.253 [2024-06-07 16:39:11.014472] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.253 [2024-06-07 16:39:11.014480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.253 qpair failed and we were unable to recover it. 00:30:44.253 [2024-06-07 16:39:11.014729] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.253 [2024-06-07 16:39:11.014737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.253 qpair failed and we were unable to recover it. 00:30:44.253 [2024-06-07 16:39:11.015100] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.253 [2024-06-07 16:39:11.015108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.253 qpair failed and we were unable to recover it. 00:30:44.253 [2024-06-07 16:39:11.015474] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.253 [2024-06-07 16:39:11.015482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.253 qpair failed and we were unable to recover it. 00:30:44.253 [2024-06-07 16:39:11.015849] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.253 [2024-06-07 16:39:11.015857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.253 qpair failed and we were unable to recover it. 
00:30:44.253 [2024-06-07 16:39:11.016241] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.253 [2024-06-07 16:39:11.016248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.253 qpair failed and we were unable to recover it. 00:30:44.253 [2024-06-07 16:39:11.016611] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.253 [2024-06-07 16:39:11.016619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.253 qpair failed and we were unable to recover it. 00:30:44.253 [2024-06-07 16:39:11.016882] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.253 [2024-06-07 16:39:11.016889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.253 qpair failed and we were unable to recover it. 00:30:44.253 [2024-06-07 16:39:11.017256] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.253 [2024-06-07 16:39:11.017263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.253 qpair failed and we were unable to recover it. 00:30:44.253 [2024-06-07 16:39:11.017518] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.253 [2024-06-07 16:39:11.017526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.253 qpair failed and we were unable to recover it. 
00:30:44.253 [2024-06-07 16:39:11.017901] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.253 [2024-06-07 16:39:11.017909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.253 qpair failed and we were unable to recover it. 00:30:44.253 [2024-06-07 16:39:11.018272] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.253 [2024-06-07 16:39:11.018280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.253 qpair failed and we were unable to recover it. 00:30:44.253 [2024-06-07 16:39:11.018637] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.253 [2024-06-07 16:39:11.018645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.253 qpair failed and we were unable to recover it. 00:30:44.253 [2024-06-07 16:39:11.019028] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.253 [2024-06-07 16:39:11.019036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.253 qpair failed and we were unable to recover it. 00:30:44.254 [2024-06-07 16:39:11.019483] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.254 [2024-06-07 16:39:11.019491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.254 qpair failed and we were unable to recover it. 
00:30:44.254 [2024-06-07 16:39:11.019847] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.254 [2024-06-07 16:39:11.019856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.254 qpair failed and we were unable to recover it. 00:30:44.254 [2024-06-07 16:39:11.020224] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.254 [2024-06-07 16:39:11.020232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.254 qpair failed and we were unable to recover it. 00:30:44.254 [2024-06-07 16:39:11.020602] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.254 [2024-06-07 16:39:11.020611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.254 qpair failed and we were unable to recover it. 00:30:44.254 [2024-06-07 16:39:11.020988] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.254 [2024-06-07 16:39:11.020997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.254 qpair failed and we were unable to recover it. 00:30:44.254 [2024-06-07 16:39:11.021362] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.254 [2024-06-07 16:39:11.021370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.254 qpair failed and we were unable to recover it. 
00:30:44.254 [2024-06-07 16:39:11.021728] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.254 [2024-06-07 16:39:11.021736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.254 qpair failed and we were unable to recover it. 00:30:44.254 [2024-06-07 16:39:11.022060] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.254 [2024-06-07 16:39:11.022068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.254 qpair failed and we were unable to recover it. 00:30:44.254 [2024-06-07 16:39:11.022440] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.254 [2024-06-07 16:39:11.022448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.254 qpair failed and we were unable to recover it. 00:30:44.254 [2024-06-07 16:39:11.022678] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.254 [2024-06-07 16:39:11.022686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.254 qpair failed and we were unable to recover it. 00:30:44.254 [2024-06-07 16:39:11.023054] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.254 [2024-06-07 16:39:11.023061] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.254 qpair failed and we were unable to recover it. 
00:30:44.254 [2024-06-07 16:39:11.023418] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.254 [2024-06-07 16:39:11.023426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.254 qpair failed and we were unable to recover it. 00:30:44.254 [2024-06-07 16:39:11.023800] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.254 [2024-06-07 16:39:11.023808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.254 qpair failed and we were unable to recover it. 00:30:44.254 [2024-06-07 16:39:11.024169] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.254 [2024-06-07 16:39:11.024177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.254 qpair failed and we were unable to recover it. 00:30:44.254 [2024-06-07 16:39:11.024533] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.254 [2024-06-07 16:39:11.024540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.254 qpair failed and we were unable to recover it. 00:30:44.254 [2024-06-07 16:39:11.024938] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.254 [2024-06-07 16:39:11.024946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.254 qpair failed and we were unable to recover it. 
00:30:44.254 [2024-06-07 16:39:11.025388] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.254 [2024-06-07 16:39:11.025396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.254 qpair failed and we were unable to recover it. 00:30:44.254 [2024-06-07 16:39:11.025759] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.254 [2024-06-07 16:39:11.025768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.254 qpair failed and we were unable to recover it. 00:30:44.254 [2024-06-07 16:39:11.026139] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.254 [2024-06-07 16:39:11.026147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.254 qpair failed and we were unable to recover it. 00:30:44.254 [2024-06-07 16:39:11.026539] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.254 [2024-06-07 16:39:11.026548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.254 qpair failed and we were unable to recover it. 00:30:44.254 [2024-06-07 16:39:11.026912] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.254 [2024-06-07 16:39:11.026920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.254 qpair failed and we were unable to recover it. 
00:30:44.254 [2024-06-07 16:39:11.027280] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.254 [2024-06-07 16:39:11.027289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.254 qpair failed and we were unable to recover it. 00:30:44.254 [2024-06-07 16:39:11.027557] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.254 [2024-06-07 16:39:11.027565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.254 qpair failed and we were unable to recover it. 00:30:44.254 [2024-06-07 16:39:11.027812] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.254 [2024-06-07 16:39:11.027819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.254 qpair failed and we were unable to recover it. 00:30:44.254 [2024-06-07 16:39:11.028191] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.254 [2024-06-07 16:39:11.028200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.254 qpair failed and we were unable to recover it. 00:30:44.254 [2024-06-07 16:39:11.028570] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.254 [2024-06-07 16:39:11.028578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.254 qpair failed and we were unable to recover it. 
00:30:44.254 [2024-06-07 16:39:11.028941] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.254 [2024-06-07 16:39:11.028950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.254 qpair failed and we were unable to recover it. 00:30:44.254 [2024-06-07 16:39:11.029371] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.254 [2024-06-07 16:39:11.029378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.254 qpair failed and we were unable to recover it. 00:30:44.254 [2024-06-07 16:39:11.029737] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.254 [2024-06-07 16:39:11.029746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.254 qpair failed and we were unable to recover it. 00:30:44.254 [2024-06-07 16:39:11.030112] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.254 [2024-06-07 16:39:11.030120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.254 qpair failed and we were unable to recover it. 00:30:44.254 [2024-06-07 16:39:11.030487] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.254 [2024-06-07 16:39:11.030496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.254 qpair failed and we were unable to recover it. 
00:30:44.254 [2024-06-07 16:39:11.030854] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.254 [2024-06-07 16:39:11.030862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.254 qpair failed and we were unable to recover it. 00:30:44.254 [2024-06-07 16:39:11.031224] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.254 [2024-06-07 16:39:11.031232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.254 qpair failed and we were unable to recover it. 00:30:44.254 [2024-06-07 16:39:11.031595] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.254 [2024-06-07 16:39:11.031603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.254 qpair failed and we were unable to recover it. 00:30:44.254 [2024-06-07 16:39:11.031967] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.254 [2024-06-07 16:39:11.031974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.254 qpair failed and we were unable to recover it. 00:30:44.254 [2024-06-07 16:39:11.032361] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.254 [2024-06-07 16:39:11.032369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.254 qpair failed and we were unable to recover it. 
00:30:44.254 [2024-06-07 16:39:11.032734] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.254 [2024-06-07 16:39:11.032742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.254 qpair failed and we were unable to recover it. 00:30:44.254 [2024-06-07 16:39:11.033110] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.254 [2024-06-07 16:39:11.033121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.254 qpair failed and we were unable to recover it. 00:30:44.254 [2024-06-07 16:39:11.033516] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.255 [2024-06-07 16:39:11.033524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.255 qpair failed and we were unable to recover it. 00:30:44.255 [2024-06-07 16:39:11.033716] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.255 [2024-06-07 16:39:11.033724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.255 qpair failed and we were unable to recover it. 00:30:44.255 [2024-06-07 16:39:11.034101] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.255 [2024-06-07 16:39:11.034108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.255 qpair failed and we were unable to recover it. 
00:30:44.255 [2024-06-07 16:39:11.034550] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.255 [2024-06-07 16:39:11.034558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.255 qpair failed and we were unable to recover it. 00:30:44.255 [2024-06-07 16:39:11.034873] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.255 [2024-06-07 16:39:11.034880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.255 qpair failed and we were unable to recover it. 00:30:44.255 [2024-06-07 16:39:11.035272] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.255 [2024-06-07 16:39:11.035280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.255 qpair failed and we were unable to recover it. 00:30:44.255 [2024-06-07 16:39:11.035659] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.255 [2024-06-07 16:39:11.035668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.255 qpair failed and we were unable to recover it. 00:30:44.255 [2024-06-07 16:39:11.036018] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.255 [2024-06-07 16:39:11.036026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.255 qpair failed and we were unable to recover it. 
00:30:44.255 [2024-06-07 16:39:11.036390] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.255 [2024-06-07 16:39:11.036398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.255 qpair failed and we were unable to recover it. 00:30:44.255 [2024-06-07 16:39:11.036761] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.255 [2024-06-07 16:39:11.036768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.255 qpair failed and we were unable to recover it. 00:30:44.255 [2024-06-07 16:39:11.037123] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.255 [2024-06-07 16:39:11.037131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.255 qpair failed and we were unable to recover it. 00:30:44.255 [2024-06-07 16:39:11.037381] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.255 [2024-06-07 16:39:11.037388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.255 qpair failed and we were unable to recover it. 00:30:44.255 [2024-06-07 16:39:11.037756] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.255 [2024-06-07 16:39:11.037764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.255 qpair failed and we were unable to recover it. 
00:30:44.255 [2024-06-07 16:39:11.038147] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.255 [2024-06-07 16:39:11.038155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.255 qpair failed and we were unable to recover it. 00:30:44.255 [2024-06-07 16:39:11.038519] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.255 [2024-06-07 16:39:11.038527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.255 qpair failed and we were unable to recover it. 00:30:44.255 [2024-06-07 16:39:11.038875] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.255 [2024-06-07 16:39:11.038882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.255 qpair failed and we were unable to recover it. 00:30:44.255 [2024-06-07 16:39:11.039198] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.255 [2024-06-07 16:39:11.039207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.255 qpair failed and we were unable to recover it. 00:30:44.255 [2024-06-07 16:39:11.039591] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.255 [2024-06-07 16:39:11.039599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.255 qpair failed and we were unable to recover it. 
00:30:44.255 [2024-06-07 16:39:11.039963] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.255 [2024-06-07 16:39:11.039971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.255 qpair failed and we were unable to recover it. 00:30:44.255 [2024-06-07 16:39:11.040338] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.255 [2024-06-07 16:39:11.040346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.255 qpair failed and we were unable to recover it. 00:30:44.255 [2024-06-07 16:39:11.040604] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.255 [2024-06-07 16:39:11.040612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.255 qpair failed and we were unable to recover it. 00:30:44.255 [2024-06-07 16:39:11.040997] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.255 [2024-06-07 16:39:11.041005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.255 qpair failed and we were unable to recover it. 00:30:44.255 [2024-06-07 16:39:11.041368] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.255 [2024-06-07 16:39:11.041376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.255 qpair failed and we were unable to recover it. 
00:30:44.255 [2024-06-07 16:39:11.041732] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.255 [2024-06-07 16:39:11.041740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.255 qpair failed and we were unable to recover it.
00:30:44.255 [2024-06-07 16:39:11.042104] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.255 [2024-06-07 16:39:11.042112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.255 qpair failed and we were unable to recover it.
00:30:44.255 [2024-06-07 16:39:11.042498] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.255 [2024-06-07 16:39:11.042507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.255 qpair failed and we were unable to recover it.
00:30:44.255 [2024-06-07 16:39:11.042889] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.255 [2024-06-07 16:39:11.042898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.255 qpair failed and we were unable to recover it.
00:30:44.255 [2024-06-07 16:39:11.043268] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.255 [2024-06-07 16:39:11.043277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.255 qpair failed and we were unable to recover it.
00:30:44.255 [2024-06-07 16:39:11.043637] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.255 [2024-06-07 16:39:11.043645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.255 qpair failed and we were unable to recover it.
00:30:44.255 [2024-06-07 16:39:11.044029] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.255 [2024-06-07 16:39:11.044036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.255 qpair failed and we were unable to recover it.
00:30:44.255 [2024-06-07 16:39:11.044397] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.255 [2024-06-07 16:39:11.044408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.255 qpair failed and we were unable to recover it.
00:30:44.255 [2024-06-07 16:39:11.044766] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.255 [2024-06-07 16:39:11.044773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.255 qpair failed and we were unable to recover it.
00:30:44.255 [2024-06-07 16:39:11.045123] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.255 [2024-06-07 16:39:11.045131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.255 qpair failed and we were unable to recover it.
00:30:44.255 [2024-06-07 16:39:11.045579] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.255 [2024-06-07 16:39:11.045607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.255 qpair failed and we were unable to recover it.
00:30:44.255 [2024-06-07 16:39:11.045965] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.255 [2024-06-07 16:39:11.045975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.255 qpair failed and we were unable to recover it.
00:30:44.255 [2024-06-07 16:39:11.046344] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.255 [2024-06-07 16:39:11.046352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.255 qpair failed and we were unable to recover it.
00:30:44.255 [2024-06-07 16:39:11.046727] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.255 [2024-06-07 16:39:11.046736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.255 qpair failed and we were unable to recover it.
00:30:44.255 [2024-06-07 16:39:11.047113] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.255 [2024-06-07 16:39:11.047122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.255 qpair failed and we were unable to recover it.
00:30:44.255 [2024-06-07 16:39:11.047486] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.256 [2024-06-07 16:39:11.047494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.256 qpair failed and we were unable to recover it.
00:30:44.256 [2024-06-07 16:39:11.047857] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.256 [2024-06-07 16:39:11.047865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.256 qpair failed and we were unable to recover it.
00:30:44.256 [2024-06-07 16:39:11.048071] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.256 [2024-06-07 16:39:11.048081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.256 qpair failed and we were unable to recover it.
00:30:44.256 [2024-06-07 16:39:11.048438] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.256 [2024-06-07 16:39:11.048446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.256 qpair failed and we were unable to recover it.
00:30:44.256 [2024-06-07 16:39:11.048813] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.256 [2024-06-07 16:39:11.048821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.256 qpair failed and we were unable to recover it.
00:30:44.256 [2024-06-07 16:39:11.049011] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.256 [2024-06-07 16:39:11.049019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.256 qpair failed and we were unable to recover it.
00:30:44.256 [2024-06-07 16:39:11.049377] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.256 [2024-06-07 16:39:11.049385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.256 qpair failed and we were unable to recover it.
00:30:44.256 [2024-06-07 16:39:11.049775] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.256 [2024-06-07 16:39:11.049783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.256 qpair failed and we were unable to recover it.
00:30:44.256 [2024-06-07 16:39:11.050161] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.256 [2024-06-07 16:39:11.050175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.256 qpair failed and we were unable to recover it.
00:30:44.256 [2024-06-07 16:39:11.050538] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.256 [2024-06-07 16:39:11.050546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.256 qpair failed and we were unable to recover it.
00:30:44.256 [2024-06-07 16:39:11.050796] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.256 [2024-06-07 16:39:11.050804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.256 qpair failed and we were unable to recover it.
00:30:44.256 [2024-06-07 16:39:11.051066] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.256 [2024-06-07 16:39:11.051074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.256 qpair failed and we were unable to recover it.
00:30:44.256 [2024-06-07 16:39:11.051436] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.256 [2024-06-07 16:39:11.051443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.256 qpair failed and we were unable to recover it.
00:30:44.256 [2024-06-07 16:39:11.051805] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.256 [2024-06-07 16:39:11.051814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.256 qpair failed and we were unable to recover it.
00:30:44.256 [2024-06-07 16:39:11.052180] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.256 [2024-06-07 16:39:11.052189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.256 qpair failed and we were unable to recover it.
00:30:44.256 [2024-06-07 16:39:11.052572] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.256 [2024-06-07 16:39:11.052580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.256 qpair failed and we were unable to recover it.
00:30:44.256 [2024-06-07 16:39:11.052983] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.256 [2024-06-07 16:39:11.052991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.256 qpair failed and we were unable to recover it.
00:30:44.256 [2024-06-07 16:39:11.053260] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.256 [2024-06-07 16:39:11.053267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.256 qpair failed and we were unable to recover it.
00:30:44.256 [2024-06-07 16:39:11.053584] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.256 [2024-06-07 16:39:11.053592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.256 qpair failed and we were unable to recover it.
00:30:44.256 [2024-06-07 16:39:11.053987] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.256 [2024-06-07 16:39:11.053995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.256 qpair failed and we were unable to recover it.
00:30:44.256 [2024-06-07 16:39:11.054357] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.256 [2024-06-07 16:39:11.054366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.256 qpair failed and we were unable to recover it.
00:30:44.256 [2024-06-07 16:39:11.054728] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.256 [2024-06-07 16:39:11.054736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.256 qpair failed and we were unable to recover it.
00:30:44.256 [2024-06-07 16:39:11.054993] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.256 [2024-06-07 16:39:11.055001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.256 qpair failed and we were unable to recover it.
00:30:44.256 [2024-06-07 16:39:11.055400] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.256 [2024-06-07 16:39:11.055411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.256 qpair failed and we were unable to recover it.
00:30:44.256 [2024-06-07 16:39:11.055591] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.256 [2024-06-07 16:39:11.055600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.256 qpair failed and we were unable to recover it.
00:30:44.256 [2024-06-07 16:39:11.055868] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.256 [2024-06-07 16:39:11.055876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.256 qpair failed and we were unable to recover it.
00:30:44.256 [2024-06-07 16:39:11.056281] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.256 [2024-06-07 16:39:11.056290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.256 qpair failed and we were unable to recover it.
00:30:44.256 [2024-06-07 16:39:11.056648] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.256 [2024-06-07 16:39:11.056656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.256 qpair failed and we were unable to recover it.
00:30:44.256 [2024-06-07 16:39:11.057031] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.256 [2024-06-07 16:39:11.057040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.256 qpair failed and we were unable to recover it.
00:30:44.256 [2024-06-07 16:39:11.057415] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.256 [2024-06-07 16:39:11.057424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.256 qpair failed and we were unable to recover it.
00:30:44.256 [2024-06-07 16:39:11.057754] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.256 [2024-06-07 16:39:11.057762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.256 qpair failed and we were unable to recover it.
00:30:44.256 [2024-06-07 16:39:11.058091] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.256 [2024-06-07 16:39:11.058099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.256 qpair failed and we were unable to recover it.
00:30:44.256 [2024-06-07 16:39:11.058428] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.256 [2024-06-07 16:39:11.058437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.256 qpair failed and we were unable to recover it.
00:30:44.256 [2024-06-07 16:39:11.058808] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.256 [2024-06-07 16:39:11.058816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.256 qpair failed and we were unable to recover it.
00:30:44.256 [2024-06-07 16:39:11.059165] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.256 [2024-06-07 16:39:11.059173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.256 qpair failed and we were unable to recover it.
00:30:44.256 [2024-06-07 16:39:11.059512] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.256 [2024-06-07 16:39:11.059520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.256 qpair failed and we were unable to recover it.
00:30:44.256 [2024-06-07 16:39:11.059892] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.256 [2024-06-07 16:39:11.059900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.256 qpair failed and we were unable to recover it.
00:30:44.256 [2024-06-07 16:39:11.060262] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.256 [2024-06-07 16:39:11.060271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.256 qpair failed and we were unable to recover it.
00:30:44.256 [2024-06-07 16:39:11.060641] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.256 [2024-06-07 16:39:11.060649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.257 qpair failed and we were unable to recover it.
00:30:44.257 [2024-06-07 16:39:11.061033] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.257 [2024-06-07 16:39:11.061041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.257 qpair failed and we were unable to recover it.
00:30:44.257 [2024-06-07 16:39:11.061412] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.257 [2024-06-07 16:39:11.061421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.257 qpair failed and we were unable to recover it.
00:30:44.257 [2024-06-07 16:39:11.061684] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.257 [2024-06-07 16:39:11.061693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.257 qpair failed and we were unable to recover it.
00:30:44.257 [2024-06-07 16:39:11.061874] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.257 [2024-06-07 16:39:11.061883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.257 qpair failed and we were unable to recover it.
00:30:44.257 [2024-06-07 16:39:11.062262] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.257 [2024-06-07 16:39:11.062270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.257 qpair failed and we were unable to recover it.
00:30:44.257 [2024-06-07 16:39:11.062634] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.257 [2024-06-07 16:39:11.062644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.257 qpair failed and we were unable to recover it.
00:30:44.257 [2024-06-07 16:39:11.063006] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.257 [2024-06-07 16:39:11.063015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.257 qpair failed and we were unable to recover it.
00:30:44.257 [2024-06-07 16:39:11.063380] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.257 [2024-06-07 16:39:11.063388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.257 qpair failed and we were unable to recover it.
00:30:44.257 [2024-06-07 16:39:11.063741] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.257 [2024-06-07 16:39:11.063749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.257 qpair failed and we were unable to recover it.
00:30:44.257 [2024-06-07 16:39:11.064116] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.257 [2024-06-07 16:39:11.064126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.257 qpair failed and we were unable to recover it.
00:30:44.257 [2024-06-07 16:39:11.064488] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.257 [2024-06-07 16:39:11.064495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.257 qpair failed and we were unable to recover it.
00:30:44.257 [2024-06-07 16:39:11.064876] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.257 [2024-06-07 16:39:11.064885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.257 qpair failed and we were unable to recover it.
00:30:44.257 [2024-06-07 16:39:11.065262] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.257 [2024-06-07 16:39:11.065270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.257 qpair failed and we were unable to recover it.
00:30:44.257 [2024-06-07 16:39:11.065634] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.257 [2024-06-07 16:39:11.065642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.257 qpair failed and we were unable to recover it.
00:30:44.257 [2024-06-07 16:39:11.066006] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.257 [2024-06-07 16:39:11.066013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.257 qpair failed and we were unable to recover it.
00:30:44.257 [2024-06-07 16:39:11.066277] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.257 [2024-06-07 16:39:11.066284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.257 qpair failed and we were unable to recover it.
00:30:44.257 [2024-06-07 16:39:11.066738] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.257 [2024-06-07 16:39:11.066767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.257 qpair failed and we were unable to recover it.
00:30:44.257 [2024-06-07 16:39:11.067142] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.257 [2024-06-07 16:39:11.067151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.257 qpair failed and we were unable to recover it.
00:30:44.257 [2024-06-07 16:39:11.067525] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.257 [2024-06-07 16:39:11.067534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.257 qpair failed and we were unable to recover it.
00:30:44.257 [2024-06-07 16:39:11.067917] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.257 [2024-06-07 16:39:11.067925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.257 qpair failed and we were unable to recover it.
00:30:44.257 [2024-06-07 16:39:11.068310] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.257 [2024-06-07 16:39:11.068317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.257 qpair failed and we were unable to recover it.
00:30:44.257 [2024-06-07 16:39:11.068659] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.257 [2024-06-07 16:39:11.068668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.257 qpair failed and we were unable to recover it.
00:30:44.257 [2024-06-07 16:39:11.069037] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.257 [2024-06-07 16:39:11.069045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.257 qpair failed and we were unable to recover it.
00:30:44.257 [2024-06-07 16:39:11.069410] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.257 [2024-06-07 16:39:11.069418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.257 qpair failed and we were unable to recover it.
00:30:44.257 [2024-06-07 16:39:11.069700] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.257 [2024-06-07 16:39:11.069707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.257 qpair failed and we were unable to recover it.
00:30:44.257 [2024-06-07 16:39:11.069964] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.257 [2024-06-07 16:39:11.069972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.257 qpair failed and we were unable to recover it.
00:30:44.257 [2024-06-07 16:39:11.070336] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.257 [2024-06-07 16:39:11.070343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.257 qpair failed and we were unable to recover it.
00:30:44.257 [2024-06-07 16:39:11.070649] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.257 [2024-06-07 16:39:11.070656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.257 qpair failed and we were unable to recover it.
00:30:44.257 [2024-06-07 16:39:11.071039] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.257 [2024-06-07 16:39:11.071047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.257 qpair failed and we were unable to recover it.
00:30:44.257 [2024-06-07 16:39:11.071452] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.257 [2024-06-07 16:39:11.071460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.257 qpair failed and we were unable to recover it.
00:30:44.257 [2024-06-07 16:39:11.071826] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.257 [2024-06-07 16:39:11.071833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.257 qpair failed and we were unable to recover it.
00:30:44.257 [2024-06-07 16:39:11.072198] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.257 [2024-06-07 16:39:11.072207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.257 qpair failed and we were unable to recover it.
00:30:44.257 [2024-06-07 16:39:11.072569] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.257 [2024-06-07 16:39:11.072577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.257 qpair failed and we were unable to recover it.
00:30:44.257 [2024-06-07 16:39:11.072959] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.257 [2024-06-07 16:39:11.072967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.257 qpair failed and we were unable to recover it.
00:30:44.257 [2024-06-07 16:39:11.073233] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.257 [2024-06-07 16:39:11.073241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.257 qpair failed and we were unable to recover it.
00:30:44.257 [2024-06-07 16:39:11.073613] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.257 [2024-06-07 16:39:11.073621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.257 qpair failed and we were unable to recover it.
00:30:44.257 [2024-06-07 16:39:11.073997] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.257 [2024-06-07 16:39:11.074006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.257 qpair failed and we were unable to recover it.
00:30:44.257 [2024-06-07 16:39:11.074372] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.257 [2024-06-07 16:39:11.074380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.257 qpair failed and we were unable to recover it.
00:30:44.258 [2024-06-07 16:39:11.074751] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.258 [2024-06-07 16:39:11.074759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.258 qpair failed and we were unable to recover it.
00:30:44.258 [2024-06-07 16:39:11.075125] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.258 [2024-06-07 16:39:11.075133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.258 qpair failed and we were unable to recover it.
00:30:44.258 [2024-06-07 16:39:11.075495] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.258 [2024-06-07 16:39:11.075504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.258 qpair failed and we were unable to recover it.
00:30:44.258 [2024-06-07 16:39:11.075944] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.258 [2024-06-07 16:39:11.075952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.258 qpair failed and we were unable to recover it. 00:30:44.530 [2024-06-07 16:39:11.076258] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.531 [2024-06-07 16:39:11.076268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.531 qpair failed and we were unable to recover it. 00:30:44.531 [2024-06-07 16:39:11.076637] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.531 [2024-06-07 16:39:11.076646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.531 qpair failed and we were unable to recover it. 00:30:44.531 [2024-06-07 16:39:11.076994] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.531 [2024-06-07 16:39:11.077002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.531 qpair failed and we were unable to recover it. 00:30:44.531 [2024-06-07 16:39:11.077365] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.531 [2024-06-07 16:39:11.077373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.531 qpair failed and we were unable to recover it. 
00:30:44.531 [2024-06-07 16:39:11.077745] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.531 [2024-06-07 16:39:11.077753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.531 qpair failed and we were unable to recover it. 00:30:44.531 [2024-06-07 16:39:11.078117] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.531 [2024-06-07 16:39:11.078125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.531 qpair failed and we were unable to recover it. 00:30:44.531 [2024-06-07 16:39:11.078472] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.531 [2024-06-07 16:39:11.078480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.531 qpair failed and we were unable to recover it. 00:30:44.531 [2024-06-07 16:39:11.078812] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.531 [2024-06-07 16:39:11.078821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.531 qpair failed and we were unable to recover it. 00:30:44.531 [2024-06-07 16:39:11.079215] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.531 [2024-06-07 16:39:11.079223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.531 qpair failed and we were unable to recover it. 
00:30:44.531 [2024-06-07 16:39:11.079608] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.531 [2024-06-07 16:39:11.079616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.531 qpair failed and we were unable to recover it. 00:30:44.531 [2024-06-07 16:39:11.079995] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.531 [2024-06-07 16:39:11.080003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.531 qpair failed and we were unable to recover it. 00:30:44.531 [2024-06-07 16:39:11.080367] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.531 [2024-06-07 16:39:11.080375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.531 qpair failed and we were unable to recover it. 00:30:44.531 [2024-06-07 16:39:11.080750] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.531 [2024-06-07 16:39:11.080759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.531 qpair failed and we were unable to recover it. 00:30:44.531 [2024-06-07 16:39:11.080953] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.531 [2024-06-07 16:39:11.080963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.531 qpair failed and we were unable to recover it. 
00:30:44.531 [2024-06-07 16:39:11.081312] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.531 [2024-06-07 16:39:11.081319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.531 qpair failed and we were unable to recover it. 00:30:44.531 [2024-06-07 16:39:11.081690] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.531 [2024-06-07 16:39:11.081699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.531 qpair failed and we were unable to recover it. 00:30:44.531 [2024-06-07 16:39:11.082064] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.531 [2024-06-07 16:39:11.082072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.531 qpair failed and we were unable to recover it. 00:30:44.531 [2024-06-07 16:39:11.082446] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.531 [2024-06-07 16:39:11.082454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.531 qpair failed and we were unable to recover it. 00:30:44.531 [2024-06-07 16:39:11.082852] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.531 [2024-06-07 16:39:11.082862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.531 qpair failed and we were unable to recover it. 
00:30:44.531 [2024-06-07 16:39:11.083290] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.531 [2024-06-07 16:39:11.083298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.531 qpair failed and we were unable to recover it. 00:30:44.531 [2024-06-07 16:39:11.083773] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.531 [2024-06-07 16:39:11.083782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.531 qpair failed and we were unable to recover it. 00:30:44.531 [2024-06-07 16:39:11.084144] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.531 [2024-06-07 16:39:11.084153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.531 qpair failed and we were unable to recover it. 00:30:44.531 [2024-06-07 16:39:11.084541] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.531 [2024-06-07 16:39:11.084549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.531 qpair failed and we were unable to recover it. 00:30:44.531 [2024-06-07 16:39:11.084927] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.531 [2024-06-07 16:39:11.084934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.531 qpair failed and we were unable to recover it. 
00:30:44.531 [2024-06-07 16:39:11.085287] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.531 [2024-06-07 16:39:11.085295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.531 qpair failed and we were unable to recover it. 00:30:44.531 [2024-06-07 16:39:11.085691] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.531 [2024-06-07 16:39:11.085698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.531 qpair failed and we were unable to recover it. 00:30:44.531 [2024-06-07 16:39:11.086085] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.531 [2024-06-07 16:39:11.086092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.531 qpair failed and we were unable to recover it. 00:30:44.531 [2024-06-07 16:39:11.086468] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.531 [2024-06-07 16:39:11.086477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.531 qpair failed and we were unable to recover it. 00:30:44.531 [2024-06-07 16:39:11.086815] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.531 [2024-06-07 16:39:11.086824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.531 qpair failed and we were unable to recover it. 
00:30:44.531 [2024-06-07 16:39:11.087016] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.531 [2024-06-07 16:39:11.087025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.531 qpair failed and we were unable to recover it. 00:30:44.531 [2024-06-07 16:39:11.087362] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.531 [2024-06-07 16:39:11.087371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.531 qpair failed and we were unable to recover it. 00:30:44.531 [2024-06-07 16:39:11.087564] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.531 [2024-06-07 16:39:11.087574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.531 qpair failed and we were unable to recover it. 00:30:44.531 [2024-06-07 16:39:11.087958] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.531 [2024-06-07 16:39:11.087967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.531 qpair failed and we were unable to recover it. 00:30:44.531 [2024-06-07 16:39:11.088384] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.531 [2024-06-07 16:39:11.088392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.531 qpair failed and we were unable to recover it. 
00:30:44.531 [2024-06-07 16:39:11.088758] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.532 [2024-06-07 16:39:11.088768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.532 qpair failed and we were unable to recover it. 00:30:44.532 [2024-06-07 16:39:11.089134] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.532 [2024-06-07 16:39:11.089141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.532 qpair failed and we were unable to recover it. 00:30:44.532 [2024-06-07 16:39:11.089516] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.532 [2024-06-07 16:39:11.089525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.532 qpair failed and we were unable to recover it. 00:30:44.532 [2024-06-07 16:39:11.089810] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.532 [2024-06-07 16:39:11.089818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.532 qpair failed and we were unable to recover it. 00:30:44.532 [2024-06-07 16:39:11.090200] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.532 [2024-06-07 16:39:11.090209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.532 qpair failed and we were unable to recover it. 
00:30:44.532 [2024-06-07 16:39:11.090499] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.532 [2024-06-07 16:39:11.090507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.532 qpair failed and we were unable to recover it. 00:30:44.532 [2024-06-07 16:39:11.090878] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.532 [2024-06-07 16:39:11.090887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.532 qpair failed and we were unable to recover it. 00:30:44.532 [2024-06-07 16:39:11.091151] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.532 [2024-06-07 16:39:11.091159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.532 qpair failed and we were unable to recover it. 00:30:44.532 [2024-06-07 16:39:11.091534] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.532 [2024-06-07 16:39:11.091542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.532 qpair failed and we were unable to recover it. 00:30:44.532 [2024-06-07 16:39:11.091920] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.532 [2024-06-07 16:39:11.091928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.532 qpair failed and we were unable to recover it. 
00:30:44.532 [2024-06-07 16:39:11.092295] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.532 [2024-06-07 16:39:11.092302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.532 qpair failed and we were unable to recover it. 00:30:44.532 [2024-06-07 16:39:11.092664] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.532 [2024-06-07 16:39:11.092672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.532 qpair failed and we were unable to recover it. 00:30:44.532 [2024-06-07 16:39:11.093056] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.532 [2024-06-07 16:39:11.093064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.532 qpair failed and we were unable to recover it. 00:30:44.532 [2024-06-07 16:39:11.093428] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.532 [2024-06-07 16:39:11.093437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.532 qpair failed and we were unable to recover it. 00:30:44.532 [2024-06-07 16:39:11.093783] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.532 [2024-06-07 16:39:11.093790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.532 qpair failed and we were unable to recover it. 
00:30:44.532 [2024-06-07 16:39:11.094099] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.532 [2024-06-07 16:39:11.094107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.532 qpair failed and we were unable to recover it. 00:30:44.532 [2024-06-07 16:39:11.094493] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.532 [2024-06-07 16:39:11.094501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.532 qpair failed and we were unable to recover it. 00:30:44.532 [2024-06-07 16:39:11.094766] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.532 [2024-06-07 16:39:11.094773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.532 qpair failed and we were unable to recover it. 00:30:44.532 [2024-06-07 16:39:11.095146] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.532 [2024-06-07 16:39:11.095154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.532 qpair failed and we were unable to recover it. 00:30:44.532 [2024-06-07 16:39:11.095519] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.532 [2024-06-07 16:39:11.095529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.532 qpair failed and we were unable to recover it. 
00:30:44.532 [2024-06-07 16:39:11.095748] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.532 [2024-06-07 16:39:11.095757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.532 qpair failed and we were unable to recover it. 00:30:44.532 [2024-06-07 16:39:11.096107] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.532 [2024-06-07 16:39:11.096115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.532 qpair failed and we were unable to recover it. 00:30:44.532 [2024-06-07 16:39:11.096482] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.532 [2024-06-07 16:39:11.096490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.532 qpair failed and we were unable to recover it. 00:30:44.532 [2024-06-07 16:39:11.096868] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.532 [2024-06-07 16:39:11.096876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.532 qpair failed and we were unable to recover it. 00:30:44.532 [2024-06-07 16:39:11.097228] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.532 [2024-06-07 16:39:11.097235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.532 qpair failed and we were unable to recover it. 
00:30:44.532 [2024-06-07 16:39:11.097597] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.532 [2024-06-07 16:39:11.097605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.532 qpair failed and we were unable to recover it. 00:30:44.532 [2024-06-07 16:39:11.097981] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.532 [2024-06-07 16:39:11.097989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.532 qpair failed and we were unable to recover it. 00:30:44.532 [2024-06-07 16:39:11.098359] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.532 [2024-06-07 16:39:11.098367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.532 qpair failed and we were unable to recover it. 00:30:44.532 [2024-06-07 16:39:11.098730] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.532 [2024-06-07 16:39:11.098738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.532 qpair failed and we were unable to recover it. 00:30:44.532 [2024-06-07 16:39:11.099124] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.532 [2024-06-07 16:39:11.099132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.532 qpair failed and we were unable to recover it. 
00:30:44.532 [2024-06-07 16:39:11.099372] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.532 [2024-06-07 16:39:11.099380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.532 qpair failed and we were unable to recover it. 00:30:44.532 [2024-06-07 16:39:11.099582] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.532 [2024-06-07 16:39:11.099590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.532 qpair failed and we were unable to recover it. 00:30:44.532 [2024-06-07 16:39:11.099777] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.532 [2024-06-07 16:39:11.099785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.532 qpair failed and we were unable to recover it. 00:30:44.532 [2024-06-07 16:39:11.100116] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.532 [2024-06-07 16:39:11.100124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.532 qpair failed and we were unable to recover it. 00:30:44.532 [2024-06-07 16:39:11.100577] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.532 [2024-06-07 16:39:11.100585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.532 qpair failed and we were unable to recover it. 
00:30:44.532 [2024-06-07 16:39:11.100959] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.532 [2024-06-07 16:39:11.100968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.532 qpair failed and we were unable to recover it. 00:30:44.532 [2024-06-07 16:39:11.101342] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.532 [2024-06-07 16:39:11.101350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.532 qpair failed and we were unable to recover it. 00:30:44.532 [2024-06-07 16:39:11.101701] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.532 [2024-06-07 16:39:11.101708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.532 qpair failed and we were unable to recover it. 00:30:44.533 [2024-06-07 16:39:11.101871] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.533 [2024-06-07 16:39:11.101878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.533 qpair failed and we were unable to recover it. 00:30:44.533 [2024-06-07 16:39:11.102263] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.533 [2024-06-07 16:39:11.102272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.533 qpair failed and we were unable to recover it. 
00:30:44.533 [2024-06-07 16:39:11.102528] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.533 [2024-06-07 16:39:11.102538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.533 qpair failed and we were unable to recover it. 00:30:44.533 [2024-06-07 16:39:11.102821] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.533 [2024-06-07 16:39:11.102828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.533 qpair failed and we were unable to recover it. 00:30:44.533 [2024-06-07 16:39:11.103132] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.533 [2024-06-07 16:39:11.103139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.533 qpair failed and we were unable to recover it. 00:30:44.533 [2024-06-07 16:39:11.103395] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.533 [2024-06-07 16:39:11.103406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.533 qpair failed and we were unable to recover it. 00:30:44.533 [2024-06-07 16:39:11.103753] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.533 [2024-06-07 16:39:11.103760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.533 qpair failed and we were unable to recover it. 
00:30:44.533 [2024-06-07 16:39:11.104114] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.533 [2024-06-07 16:39:11.104123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.533 qpair failed and we were unable to recover it. 00:30:44.533 [2024-06-07 16:39:11.104488] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.533 [2024-06-07 16:39:11.104496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.533 qpair failed and we were unable to recover it. 00:30:44.533 [2024-06-07 16:39:11.104781] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.533 [2024-06-07 16:39:11.104788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.533 qpair failed and we were unable to recover it. 00:30:44.533 [2024-06-07 16:39:11.105163] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.533 [2024-06-07 16:39:11.105170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.533 qpair failed and we were unable to recover it. 00:30:44.533 [2024-06-07 16:39:11.105534] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.533 [2024-06-07 16:39:11.105543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.533 qpair failed and we were unable to recover it. 
00:30:44.533 [2024-06-07 16:39:11.105880] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.533 [2024-06-07 16:39:11.105888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.533 qpair failed and we were unable to recover it.
[identical three-line error sequence (connect() failed, errno = 111; sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeated continuously, timestamps 16:39:11.105880 through 16:39:11.146545]
00:30:44.536 [2024-06-07 16:39:11.146894] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.536 [2024-06-07 16:39:11.146903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.536 qpair failed and we were unable to recover it. 00:30:44.536 [2024-06-07 16:39:11.147269] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.536 [2024-06-07 16:39:11.147278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.536 qpair failed and we were unable to recover it. 00:30:44.536 [2024-06-07 16:39:11.147677] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.536 [2024-06-07 16:39:11.147685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.536 qpair failed and we were unable to recover it. 00:30:44.536 [2024-06-07 16:39:11.148050] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.536 [2024-06-07 16:39:11.148059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.536 qpair failed and we were unable to recover it. 00:30:44.536 [2024-06-07 16:39:11.148395] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.537 [2024-06-07 16:39:11.148407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.537 qpair failed and we were unable to recover it. 
00:30:44.537 [2024-06-07 16:39:11.148768] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.537 [2024-06-07 16:39:11.148775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.537 qpair failed and we were unable to recover it. 00:30:44.537 [2024-06-07 16:39:11.149141] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.537 [2024-06-07 16:39:11.149149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.537 qpair failed and we were unable to recover it. 00:30:44.537 [2024-06-07 16:39:11.149523] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.537 [2024-06-07 16:39:11.149536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.537 qpair failed and we were unable to recover it. 00:30:44.537 [2024-06-07 16:39:11.149920] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.537 [2024-06-07 16:39:11.149930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.537 qpair failed and we were unable to recover it. 00:30:44.537 [2024-06-07 16:39:11.150301] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.537 [2024-06-07 16:39:11.150310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.537 qpair failed and we were unable to recover it. 
00:30:44.537 [2024-06-07 16:39:11.150735] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.537 [2024-06-07 16:39:11.150744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.537 qpair failed and we were unable to recover it. 00:30:44.537 [2024-06-07 16:39:11.151018] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.537 [2024-06-07 16:39:11.151026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.537 qpair failed and we were unable to recover it. 00:30:44.537 [2024-06-07 16:39:11.151381] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.537 [2024-06-07 16:39:11.151389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.537 qpair failed and we were unable to recover it. 00:30:44.537 [2024-06-07 16:39:11.151651] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.537 [2024-06-07 16:39:11.151660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.537 qpair failed and we were unable to recover it. 00:30:44.537 [2024-06-07 16:39:11.152019] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.537 [2024-06-07 16:39:11.152028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.537 qpair failed and we were unable to recover it. 
00:30:44.537 [2024-06-07 16:39:11.152382] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.537 [2024-06-07 16:39:11.152389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.537 qpair failed and we were unable to recover it. 00:30:44.537 [2024-06-07 16:39:11.152677] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.537 [2024-06-07 16:39:11.152686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.537 qpair failed and we were unable to recover it. 00:30:44.537 [2024-06-07 16:39:11.153060] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.537 [2024-06-07 16:39:11.153069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.537 qpair failed and we were unable to recover it. 00:30:44.537 [2024-06-07 16:39:11.153224] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.537 [2024-06-07 16:39:11.153232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.537 qpair failed and we were unable to recover it. 00:30:44.537 [2024-06-07 16:39:11.153555] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.537 [2024-06-07 16:39:11.153565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.537 qpair failed and we were unable to recover it. 
00:30:44.537 [2024-06-07 16:39:11.153815] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.537 [2024-06-07 16:39:11.153824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.537 qpair failed and we were unable to recover it. 00:30:44.537 [2024-06-07 16:39:11.154188] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.537 [2024-06-07 16:39:11.154197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.537 qpair failed and we were unable to recover it. 00:30:44.537 [2024-06-07 16:39:11.154576] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.537 [2024-06-07 16:39:11.154586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.537 qpair failed and we were unable to recover it. 00:30:44.537 [2024-06-07 16:39:11.154974] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.537 [2024-06-07 16:39:11.154983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.537 qpair failed and we were unable to recover it. 00:30:44.537 [2024-06-07 16:39:11.155238] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.537 [2024-06-07 16:39:11.155248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.537 qpair failed and we were unable to recover it. 
00:30:44.537 [2024-06-07 16:39:11.155515] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.537 [2024-06-07 16:39:11.155524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.537 qpair failed and we were unable to recover it. 00:30:44.537 [2024-06-07 16:39:11.155880] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.537 [2024-06-07 16:39:11.155889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.537 qpair failed and we were unable to recover it. 00:30:44.537 [2024-06-07 16:39:11.156257] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.537 [2024-06-07 16:39:11.156267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.537 qpair failed and we were unable to recover it. 00:30:44.537 [2024-06-07 16:39:11.156553] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.537 [2024-06-07 16:39:11.156561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.537 qpair failed and we were unable to recover it. 00:30:44.537 [2024-06-07 16:39:11.156975] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.537 [2024-06-07 16:39:11.156984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.537 qpair failed and we were unable to recover it. 
00:30:44.537 [2024-06-07 16:39:11.157239] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.537 [2024-06-07 16:39:11.157248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.537 qpair failed and we were unable to recover it. 00:30:44.537 [2024-06-07 16:39:11.157627] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.537 [2024-06-07 16:39:11.157636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.537 qpair failed and we were unable to recover it. 00:30:44.537 [2024-06-07 16:39:11.157887] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.537 [2024-06-07 16:39:11.157896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.537 qpair failed and we were unable to recover it. 00:30:44.537 [2024-06-07 16:39:11.158281] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.537 [2024-06-07 16:39:11.158289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.537 qpair failed and we were unable to recover it. 00:30:44.537 [2024-06-07 16:39:11.158580] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.537 [2024-06-07 16:39:11.158589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.537 qpair failed and we were unable to recover it. 
00:30:44.537 [2024-06-07 16:39:11.158835] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.537 [2024-06-07 16:39:11.158846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.537 qpair failed and we were unable to recover it. 00:30:44.537 [2024-06-07 16:39:11.159214] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.537 [2024-06-07 16:39:11.159223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.537 qpair failed and we were unable to recover it. 00:30:44.537 [2024-06-07 16:39:11.159605] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.537 [2024-06-07 16:39:11.159614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.537 qpair failed and we were unable to recover it. 00:30:44.537 [2024-06-07 16:39:11.159992] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.537 [2024-06-07 16:39:11.160001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.537 qpair failed and we were unable to recover it. 00:30:44.537 [2024-06-07 16:39:11.160363] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.538 [2024-06-07 16:39:11.160371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.538 qpair failed and we were unable to recover it. 
00:30:44.538 [2024-06-07 16:39:11.160761] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.538 [2024-06-07 16:39:11.160770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.538 qpair failed and we were unable to recover it. 00:30:44.538 [2024-06-07 16:39:11.161115] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.538 [2024-06-07 16:39:11.161124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.538 qpair failed and we were unable to recover it. 00:30:44.538 [2024-06-07 16:39:11.161494] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.538 [2024-06-07 16:39:11.161503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.538 qpair failed and we were unable to recover it. 00:30:44.538 [2024-06-07 16:39:11.161910] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.538 [2024-06-07 16:39:11.161919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.538 qpair failed and we were unable to recover it. 00:30:44.538 [2024-06-07 16:39:11.162309] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.538 [2024-06-07 16:39:11.162317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.538 qpair failed and we were unable to recover it. 
00:30:44.538 [2024-06-07 16:39:11.162456] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.538 [2024-06-07 16:39:11.162466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.538 qpair failed and we were unable to recover it. 00:30:44.538 [2024-06-07 16:39:11.162821] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.538 [2024-06-07 16:39:11.162830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.538 qpair failed and we were unable to recover it. 00:30:44.538 [2024-06-07 16:39:11.162979] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.538 [2024-06-07 16:39:11.162988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.538 qpair failed and we were unable to recover it. 00:30:44.538 [2024-06-07 16:39:11.163240] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.538 [2024-06-07 16:39:11.163249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.538 qpair failed and we were unable to recover it. 00:30:44.538 [2024-06-07 16:39:11.163513] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.538 [2024-06-07 16:39:11.163522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.538 qpair failed and we were unable to recover it. 
00:30:44.538 [2024-06-07 16:39:11.164005] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.538 [2024-06-07 16:39:11.164013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.538 qpair failed and we were unable to recover it. 00:30:44.538 [2024-06-07 16:39:11.164273] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.538 [2024-06-07 16:39:11.164281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.538 qpair failed and we were unable to recover it. 00:30:44.538 [2024-06-07 16:39:11.164582] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.538 [2024-06-07 16:39:11.164591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.538 qpair failed and we were unable to recover it. 00:30:44.538 [2024-06-07 16:39:11.164985] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.538 [2024-06-07 16:39:11.164994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.538 qpair failed and we were unable to recover it. 00:30:44.538 [2024-06-07 16:39:11.165381] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.538 [2024-06-07 16:39:11.165390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.538 qpair failed and we were unable to recover it. 
00:30:44.538 [2024-06-07 16:39:11.165512] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.538 [2024-06-07 16:39:11.165521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.538 qpair failed and we were unable to recover it. 00:30:44.538 [2024-06-07 16:39:11.165753] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.538 [2024-06-07 16:39:11.165762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.538 qpair failed and we were unable to recover it. 00:30:44.538 [2024-06-07 16:39:11.166142] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.538 [2024-06-07 16:39:11.166151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.538 qpair failed and we were unable to recover it. 00:30:44.538 [2024-06-07 16:39:11.166532] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.538 [2024-06-07 16:39:11.166541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.538 qpair failed and we were unable to recover it. 00:30:44.538 [2024-06-07 16:39:11.166920] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.538 [2024-06-07 16:39:11.166930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.538 qpair failed and we were unable to recover it. 
00:30:44.538 [2024-06-07 16:39:11.167153] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.538 [2024-06-07 16:39:11.167163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.538 qpair failed and we were unable to recover it. 00:30:44.538 [2024-06-07 16:39:11.167565] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.538 [2024-06-07 16:39:11.167574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.538 qpair failed and we were unable to recover it. 00:30:44.538 [2024-06-07 16:39:11.168009] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.538 [2024-06-07 16:39:11.168018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.538 qpair failed and we were unable to recover it. 00:30:44.538 [2024-06-07 16:39:11.168392] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.538 [2024-06-07 16:39:11.168400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.538 qpair failed and we were unable to recover it. 00:30:44.538 [2024-06-07 16:39:11.168780] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.538 [2024-06-07 16:39:11.168789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.538 qpair failed and we were unable to recover it. 
00:30:44.538 [2024-06-07 16:39:11.169181] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.538 [2024-06-07 16:39:11.169190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.538 qpair failed and we were unable to recover it. 00:30:44.538 [2024-06-07 16:39:11.169539] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.538 [2024-06-07 16:39:11.169548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.538 qpair failed and we were unable to recover it. 00:30:44.538 [2024-06-07 16:39:11.169907] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.538 [2024-06-07 16:39:11.169916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.538 qpair failed and we were unable to recover it. 00:30:44.538 [2024-06-07 16:39:11.170182] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.538 [2024-06-07 16:39:11.170191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.538 qpair failed and we were unable to recover it. 00:30:44.538 [2024-06-07 16:39:11.170469] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.538 [2024-06-07 16:39:11.170478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.538 qpair failed and we were unable to recover it. 
00:30:44.538 [2024-06-07 16:39:11.170829] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.538 [2024-06-07 16:39:11.170838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.538 qpair failed and we were unable to recover it. 00:30:44.538 [2024-06-07 16:39:11.171218] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.538 [2024-06-07 16:39:11.171227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.538 qpair failed and we were unable to recover it. 00:30:44.538 [2024-06-07 16:39:11.171539] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.538 [2024-06-07 16:39:11.171548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.538 qpair failed and we were unable to recover it. 00:30:44.538 [2024-06-07 16:39:11.171797] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.538 [2024-06-07 16:39:11.171805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.538 qpair failed and we were unable to recover it. 00:30:44.538 [2024-06-07 16:39:11.172062] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.538 [2024-06-07 16:39:11.172070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.538 qpair failed and we were unable to recover it. 
00:30:44.538 [2024-06-07 16:39:11.172443] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.538 [2024-06-07 16:39:11.172453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.538 qpair failed and we were unable to recover it.
00:30:44.538 [2024-06-07 16:39:11.172828] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.538 [2024-06-07 16:39:11.172836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.538 qpair failed and we were unable to recover it.
00:30:44.539 [2024-06-07 16:39:11.172983] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.539 [2024-06-07 16:39:11.172990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.539 qpair failed and we were unable to recover it.
00:30:44.539 [2024-06-07 16:39:11.173317] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.539 [2024-06-07 16:39:11.173325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.539 qpair failed and we were unable to recover it.
00:30:44.539 [2024-06-07 16:39:11.173598] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.539 [2024-06-07 16:39:11.173607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.539 qpair failed and we were unable to recover it.
00:30:44.539 [2024-06-07 16:39:11.173879] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.539 [2024-06-07 16:39:11.173886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.539 qpair failed and we were unable to recover it.
00:30:44.539 [2024-06-07 16:39:11.174241] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.539 [2024-06-07 16:39:11.174249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.539 qpair failed and we were unable to recover it.
00:30:44.539 [2024-06-07 16:39:11.174508] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.539 [2024-06-07 16:39:11.174517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.539 qpair failed and we were unable to recover it.
00:30:44.539 [2024-06-07 16:39:11.174882] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.539 [2024-06-07 16:39:11.174890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.539 qpair failed and we were unable to recover it.
00:30:44.539 [2024-06-07 16:39:11.175278] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.539 [2024-06-07 16:39:11.175286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.539 qpair failed and we were unable to recover it.
00:30:44.539 [2024-06-07 16:39:11.175557] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.539 [2024-06-07 16:39:11.175565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.539 qpair failed and we were unable to recover it.
00:30:44.539 [2024-06-07 16:39:11.175956] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.539 [2024-06-07 16:39:11.175964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.539 qpair failed and we were unable to recover it.
00:30:44.539 [2024-06-07 16:39:11.176217] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.539 [2024-06-07 16:39:11.176224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.539 qpair failed and we were unable to recover it.
00:30:44.539 [2024-06-07 16:39:11.176619] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.539 [2024-06-07 16:39:11.176627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.539 qpair failed and we were unable to recover it.
00:30:44.539 [2024-06-07 16:39:11.176831] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.539 [2024-06-07 16:39:11.176839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.539 qpair failed and we were unable to recover it.
00:30:44.539 [2024-06-07 16:39:11.177059] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.539 [2024-06-07 16:39:11.177068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.539 qpair failed and we were unable to recover it.
00:30:44.539 [2024-06-07 16:39:11.177464] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.539 [2024-06-07 16:39:11.177473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.539 qpair failed and we were unable to recover it.
00:30:44.539 [2024-06-07 16:39:11.177921] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.539 [2024-06-07 16:39:11.177929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.539 qpair failed and we were unable to recover it.
00:30:44.539 [2024-06-07 16:39:11.178305] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.539 [2024-06-07 16:39:11.178313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.539 qpair failed and we were unable to recover it.
00:30:44.539 [2024-06-07 16:39:11.178733] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.539 [2024-06-07 16:39:11.178742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.539 qpair failed and we were unable to recover it.
00:30:44.539 [2024-06-07 16:39:11.179109] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.539 [2024-06-07 16:39:11.179118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.539 qpair failed and we were unable to recover it.
00:30:44.539 [2024-06-07 16:39:11.179366] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.539 [2024-06-07 16:39:11.179375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.539 qpair failed and we were unable to recover it.
00:30:44.539 [2024-06-07 16:39:11.179623] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.539 [2024-06-07 16:39:11.179632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.539 qpair failed and we were unable to recover it.
00:30:44.539 [2024-06-07 16:39:11.179867] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.539 [2024-06-07 16:39:11.179875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.539 qpair failed and we were unable to recover it.
00:30:44.539 [2024-06-07 16:39:11.180132] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.539 [2024-06-07 16:39:11.180140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.539 qpair failed and we were unable to recover it.
00:30:44.539 [2024-06-07 16:39:11.180433] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.539 [2024-06-07 16:39:11.180441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.539 qpair failed and we were unable to recover it.
00:30:44.539 [2024-06-07 16:39:11.180833] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.539 [2024-06-07 16:39:11.180841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.539 qpair failed and we were unable to recover it.
00:30:44.539 [2024-06-07 16:39:11.181233] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.539 [2024-06-07 16:39:11.181241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.539 qpair failed and we were unable to recover it.
00:30:44.539 [2024-06-07 16:39:11.181629] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.539 [2024-06-07 16:39:11.181638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.539 qpair failed and we were unable to recover it.
00:30:44.539 [2024-06-07 16:39:11.181927] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.539 [2024-06-07 16:39:11.181936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.539 qpair failed and we were unable to recover it.
00:30:44.539 [2024-06-07 16:39:11.182295] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.539 [2024-06-07 16:39:11.182303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.539 qpair failed and we were unable to recover it.
00:30:44.539 [2024-06-07 16:39:11.182633] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.539 [2024-06-07 16:39:11.182642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.539 qpair failed and we were unable to recover it.
00:30:44.539 [2024-06-07 16:39:11.183007] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.539 [2024-06-07 16:39:11.183016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.539 qpair failed and we were unable to recover it.
00:30:44.539 [2024-06-07 16:39:11.183370] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.539 [2024-06-07 16:39:11.183378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.539 qpair failed and we were unable to recover it.
00:30:44.539 [2024-06-07 16:39:11.183717] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.539 [2024-06-07 16:39:11.183725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.539 qpair failed and we were unable to recover it.
00:30:44.539 [2024-06-07 16:39:11.183876] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.539 [2024-06-07 16:39:11.183884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.539 qpair failed and we were unable to recover it.
00:30:44.539 [2024-06-07 16:39:11.184145] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.539 [2024-06-07 16:39:11.184153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.539 qpair failed and we were unable to recover it.
00:30:44.539 [2024-06-07 16:39:11.184493] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.539 [2024-06-07 16:39:11.184501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.539 qpair failed and we were unable to recover it.
00:30:44.539 [2024-06-07 16:39:11.184762] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.539 [2024-06-07 16:39:11.184770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.539 qpair failed and we were unable to recover it.
00:30:44.539 [2024-06-07 16:39:11.185219] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.539 [2024-06-07 16:39:11.185227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.539 qpair failed and we were unable to recover it.
00:30:44.540 [2024-06-07 16:39:11.185306] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.540 [2024-06-07 16:39:11.185317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.540 qpair failed and we were unable to recover it.
00:30:44.540 [2024-06-07 16:39:11.185708] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.540 [2024-06-07 16:39:11.185716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.540 qpair failed and we were unable to recover it.
00:30:44.540 [2024-06-07 16:39:11.186086] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.540 [2024-06-07 16:39:11.186095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.540 qpair failed and we were unable to recover it.
00:30:44.540 [2024-06-07 16:39:11.186470] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.540 [2024-06-07 16:39:11.186478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.540 qpair failed and we were unable to recover it.
00:30:44.540 [2024-06-07 16:39:11.186837] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.540 [2024-06-07 16:39:11.186845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.540 qpair failed and we were unable to recover it.
00:30:44.540 [2024-06-07 16:39:11.187077] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.540 [2024-06-07 16:39:11.187084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.540 qpair failed and we were unable to recover it.
00:30:44.540 [2024-06-07 16:39:11.187217] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.540 [2024-06-07 16:39:11.187225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.540 qpair failed and we were unable to recover it.
00:30:44.540 [2024-06-07 16:39:11.187608] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.540 [2024-06-07 16:39:11.187616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.540 qpair failed and we were unable to recover it.
00:30:44.540 [2024-06-07 16:39:11.187985] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.540 [2024-06-07 16:39:11.187994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.540 qpair failed and we were unable to recover it.
00:30:44.540 [2024-06-07 16:39:11.188259] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.540 [2024-06-07 16:39:11.188267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.540 qpair failed and we were unable to recover it.
00:30:44.540 [2024-06-07 16:39:11.188593] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.540 [2024-06-07 16:39:11.188601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.540 qpair failed and we were unable to recover it.
00:30:44.540 [2024-06-07 16:39:11.188835] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.540 [2024-06-07 16:39:11.188842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.540 qpair failed and we were unable to recover it.
00:30:44.540 [2024-06-07 16:39:11.189195] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.540 [2024-06-07 16:39:11.189203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.540 qpair failed and we were unable to recover it.
00:30:44.540 [2024-06-07 16:39:11.189587] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.540 [2024-06-07 16:39:11.189596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.540 qpair failed and we were unable to recover it.
00:30:44.540 [2024-06-07 16:39:11.189848] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.540 [2024-06-07 16:39:11.189857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.540 qpair failed and we were unable to recover it.
00:30:44.540 [2024-06-07 16:39:11.190240] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.540 [2024-06-07 16:39:11.190250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.540 qpair failed and we were unable to recover it.
00:30:44.540 [2024-06-07 16:39:11.190636] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.540 [2024-06-07 16:39:11.190644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.540 qpair failed and we were unable to recover it.
00:30:44.540 [2024-06-07 16:39:11.190952] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.540 [2024-06-07 16:39:11.190961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.540 qpair failed and we were unable to recover it.
00:30:44.540 [2024-06-07 16:39:11.191328] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.540 [2024-06-07 16:39:11.191336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.540 qpair failed and we were unable to recover it.
00:30:44.540 [2024-06-07 16:39:11.191748] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.540 [2024-06-07 16:39:11.191757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.540 qpair failed and we were unable to recover it.
00:30:44.540 [2024-06-07 16:39:11.192148] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.540 [2024-06-07 16:39:11.192156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.540 qpair failed and we were unable to recover it.
00:30:44.540 [2024-06-07 16:39:11.192425] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.540 [2024-06-07 16:39:11.192434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.540 qpair failed and we were unable to recover it.
00:30:44.540 [2024-06-07 16:39:11.192585] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.540 [2024-06-07 16:39:11.192594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.540 qpair failed and we were unable to recover it.
00:30:44.540 [2024-06-07 16:39:11.192791] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.540 [2024-06-07 16:39:11.192801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.540 qpair failed and we were unable to recover it.
00:30:44.540 [2024-06-07 16:39:11.193349] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.540 [2024-06-07 16:39:11.193367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.540 qpair failed and we were unable to recover it.
00:30:44.540 [2024-06-07 16:39:11.193575] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.540 [2024-06-07 16:39:11.193584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.540 qpair failed and we were unable to recover it.
00:30:44.540 [2024-06-07 16:39:11.193990] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.540 [2024-06-07 16:39:11.193998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.541 qpair failed and we were unable to recover it.
00:30:44.541 [2024-06-07 16:39:11.194390] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.541 [2024-06-07 16:39:11.194398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.541 qpair failed and we were unable to recover it.
00:30:44.541 [2024-06-07 16:39:11.194786] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.541 [2024-06-07 16:39:11.194794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.541 qpair failed and we were unable to recover it.
00:30:44.541 [2024-06-07 16:39:11.195069] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.541 [2024-06-07 16:39:11.195076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.541 qpair failed and we were unable to recover it.
00:30:44.541 [2024-06-07 16:39:11.195454] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.541 [2024-06-07 16:39:11.195463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.541 qpair failed and we were unable to recover it.
00:30:44.541 [2024-06-07 16:39:11.195854] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.541 [2024-06-07 16:39:11.195862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.541 qpair failed and we were unable to recover it.
00:30:44.541 [2024-06-07 16:39:11.196128] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.541 [2024-06-07 16:39:11.196137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.541 qpair failed and we were unable to recover it.
00:30:44.541 [2024-06-07 16:39:11.196470] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.541 [2024-06-07 16:39:11.196478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.541 qpair failed and we were unable to recover it.
00:30:44.541 [2024-06-07 16:39:11.196762] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.541 [2024-06-07 16:39:11.196770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.541 qpair failed and we were unable to recover it.
00:30:44.541 [2024-06-07 16:39:11.197143] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.541 [2024-06-07 16:39:11.197150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.541 qpair failed and we were unable to recover it.
00:30:44.541 [2024-06-07 16:39:11.197282] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.541 [2024-06-07 16:39:11.197289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.541 qpair failed and we were unable to recover it.
00:30:44.541 [2024-06-07 16:39:11.197729] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.541 [2024-06-07 16:39:11.197737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.541 qpair failed and we were unable to recover it.
00:30:44.541 [2024-06-07 16:39:11.198103] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.541 [2024-06-07 16:39:11.198111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.541 qpair failed and we were unable to recover it.
00:30:44.541 [2024-06-07 16:39:11.198497] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.541 [2024-06-07 16:39:11.198506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.541 qpair failed and we were unable to recover it.
00:30:44.541 [2024-06-07 16:39:11.198845] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.541 [2024-06-07 16:39:11.198855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.541 qpair failed and we were unable to recover it.
00:30:44.541 [2024-06-07 16:39:11.199232] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.541 [2024-06-07 16:39:11.199240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.541 qpair failed and we were unable to recover it.
00:30:44.541 [2024-06-07 16:39:11.199524] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.541 [2024-06-07 16:39:11.199531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.541 qpair failed and we were unable to recover it.
00:30:44.541 [2024-06-07 16:39:11.199914] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.541 [2024-06-07 16:39:11.199922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.541 qpair failed and we were unable to recover it.
00:30:44.541 [2024-06-07 16:39:11.200087] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.541 [2024-06-07 16:39:11.200095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.541 qpair failed and we were unable to recover it.
00:30:44.541 [2024-06-07 16:39:11.200418] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.541 [2024-06-07 16:39:11.200427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.541 qpair failed and we were unable to recover it.
00:30:44.541 [2024-06-07 16:39:11.200810] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.541 [2024-06-07 16:39:11.200817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.541 qpair failed and we were unable to recover it.
00:30:44.541 [2024-06-07 16:39:11.201206] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.541 [2024-06-07 16:39:11.201214] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.541 qpair failed and we were unable to recover it.
00:30:44.541 [2024-06-07 16:39:11.201478] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.541 [2024-06-07 16:39:11.201486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.541 qpair failed and we were unable to recover it.
00:30:44.541 [2024-06-07 16:39:11.201868] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.541 [2024-06-07 16:39:11.201877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.541 qpair failed and we were unable to recover it.
00:30:44.541 [2024-06-07 16:39:11.201977] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.541 [2024-06-07 16:39:11.201984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.541 qpair failed and we were unable to recover it.
00:30:44.541 [2024-06-07 16:39:11.202434] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.541 [2024-06-07 16:39:11.202515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d0000b90 with addr=10.0.0.2, port=4420
00:30:44.541 qpair failed and we were unable to recover it.
00:30:44.541 [2024-06-07 16:39:11.202845] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.541 [2024-06-07 16:39:11.202878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d0000b90 with addr=10.0.0.2, port=4420
00:30:44.541 qpair failed and we were unable to recover it.
00:30:44.541 [2024-06-07 16:39:11.203184] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.541 [2024-06-07 16:39:11.203213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d0000b90 with addr=10.0.0.2, port=4420
00:30:44.541 qpair failed and we were unable to recover it.
00:30:44.541 [2024-06-07 16:39:11.203655] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.541 [2024-06-07 16:39:11.203744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d0000b90 with addr=10.0.0.2, port=4420
00:30:44.541 qpair failed and we were unable to recover it.
00:30:44.541 [2024-06-07 16:39:11.204173] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.541 [2024-06-07 16:39:11.204183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.541 qpair failed and we were unable to recover it.
00:30:44.541 [2024-06-07 16:39:11.204638] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.541 [2024-06-07 16:39:11.204667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.541 qpair failed and we were unable to recover it.
00:30:44.541 [2024-06-07 16:39:11.204942] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.541 [2024-06-07 16:39:11.204951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.541 qpair failed and we were unable to recover it.
00:30:44.541 [2024-06-07 16:39:11.205193] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.541 [2024-06-07 16:39:11.205203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.541 qpair failed and we were unable to recover it.
00:30:44.541 [2024-06-07 16:39:11.205603] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.541 [2024-06-07 16:39:11.205611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.541 qpair failed and we were unable to recover it.
00:30:44.541 [2024-06-07 16:39:11.205999] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.541 [2024-06-07 16:39:11.206007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.541 qpair failed and we were unable to recover it.
00:30:44.541 [2024-06-07 16:39:11.206371] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.541 [2024-06-07 16:39:11.206380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.541 qpair failed and we were unable to recover it.
00:30:44.541 [2024-06-07 16:39:11.206484] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.541 [2024-06-07 16:39:11.206493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.541 qpair failed and we were unable to recover it.
00:30:44.541 [2024-06-07 16:39:11.206844] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.542 [2024-06-07 16:39:11.206852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.542 qpair failed and we were unable to recover it.
00:30:44.542 [2024-06-07 16:39:11.207073] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.542 [2024-06-07 16:39:11.207081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.542 qpair failed and we were unable to recover it.
00:30:44.542 [2024-06-07 16:39:11.207327] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.542 [2024-06-07 16:39:11.207335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.542 qpair failed and we were unable to recover it.
00:30:44.542 [2024-06-07 16:39:11.207565] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.542 [2024-06-07 16:39:11.207573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.542 qpair failed and we were unable to recover it.
00:30:44.542 [2024-06-07 16:39:11.207950] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.542 [2024-06-07 16:39:11.207958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.542 qpair failed and we were unable to recover it.
00:30:44.542 [2024-06-07 16:39:11.208191] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.542 [2024-06-07 16:39:11.208200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.542 qpair failed and we were unable to recover it.
00:30:44.542 [2024-06-07 16:39:11.208395] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.542 [2024-06-07 16:39:11.208414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.542 qpair failed and we were unable to recover it.
00:30:44.542 [2024-06-07 16:39:11.208776] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.542 [2024-06-07 16:39:11.208784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.542 qpair failed and we were unable to recover it.
00:30:44.542 [2024-06-07 16:39:11.209175] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.542 [2024-06-07 16:39:11.209183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.542 qpair failed and we were unable to recover it.
00:30:44.542 [2024-06-07 16:39:11.209532] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.542 [2024-06-07 16:39:11.209541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.542 qpair failed and we were unable to recover it.
00:30:44.542 [2024-06-07 16:39:11.209815] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.542 [2024-06-07 16:39:11.209823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.542 qpair failed and we were unable to recover it.
00:30:44.542 [2024-06-07 16:39:11.210074] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.542 [2024-06-07 16:39:11.210082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.542 qpair failed and we were unable to recover it.
00:30:44.542 [2024-06-07 16:39:11.210477] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.542 [2024-06-07 16:39:11.210485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.542 qpair failed and we were unable to recover it. 00:30:44.542 [2024-06-07 16:39:11.210856] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.542 [2024-06-07 16:39:11.210866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.542 qpair failed and we were unable to recover it. 00:30:44.542 [2024-06-07 16:39:11.211231] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.542 [2024-06-07 16:39:11.211239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.542 qpair failed and we were unable to recover it. 00:30:44.542 [2024-06-07 16:39:11.211707] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.542 [2024-06-07 16:39:11.211716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.542 qpair failed and we were unable to recover it. 00:30:44.542 [2024-06-07 16:39:11.212104] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.542 [2024-06-07 16:39:11.212113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.542 qpair failed and we were unable to recover it. 
00:30:44.542 [2024-06-07 16:39:11.212513] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.542 [2024-06-07 16:39:11.212524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.542 qpair failed and we were unable to recover it. 00:30:44.542 [2024-06-07 16:39:11.212912] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.542 [2024-06-07 16:39:11.212921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.542 qpair failed and we were unable to recover it. 00:30:44.542 [2024-06-07 16:39:11.213307] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.542 [2024-06-07 16:39:11.213316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.542 qpair failed and we were unable to recover it. 00:30:44.542 [2024-06-07 16:39:11.213602] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.542 [2024-06-07 16:39:11.213611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.542 qpair failed and we were unable to recover it. 00:30:44.542 [2024-06-07 16:39:11.213997] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.542 [2024-06-07 16:39:11.214004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.542 qpair failed and we were unable to recover it. 
00:30:44.542 [2024-06-07 16:39:11.214369] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.542 [2024-06-07 16:39:11.214378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.542 qpair failed and we were unable to recover it. 00:30:44.542 [2024-06-07 16:39:11.214660] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.542 [2024-06-07 16:39:11.214667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.542 qpair failed and we were unable to recover it. 00:30:44.542 [2024-06-07 16:39:11.214933] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.542 [2024-06-07 16:39:11.214941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.542 qpair failed and we were unable to recover it. 00:30:44.542 [2024-06-07 16:39:11.215328] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.542 [2024-06-07 16:39:11.215336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.542 qpair failed and we were unable to recover it. 00:30:44.542 [2024-06-07 16:39:11.215542] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.542 [2024-06-07 16:39:11.215550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.542 qpair failed and we were unable to recover it. 
00:30:44.542 [2024-06-07 16:39:11.215930] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.542 [2024-06-07 16:39:11.215939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.542 qpair failed and we were unable to recover it. 00:30:44.542 [2024-06-07 16:39:11.216327] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.542 [2024-06-07 16:39:11.216335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.542 qpair failed and we were unable to recover it. 00:30:44.542 [2024-06-07 16:39:11.216662] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.542 [2024-06-07 16:39:11.216671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.542 qpair failed and we were unable to recover it. 00:30:44.542 [2024-06-07 16:39:11.216956] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.542 [2024-06-07 16:39:11.216964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.542 qpair failed and we were unable to recover it. 00:30:44.542 [2024-06-07 16:39:11.217378] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.542 [2024-06-07 16:39:11.217386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.542 qpair failed and we were unable to recover it. 
00:30:44.542 [2024-06-07 16:39:11.217756] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.542 [2024-06-07 16:39:11.217764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.542 qpair failed and we were unable to recover it. 00:30:44.542 [2024-06-07 16:39:11.218095] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.542 [2024-06-07 16:39:11.218103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.542 qpair failed and we were unable to recover it. 00:30:44.542 [2024-06-07 16:39:11.218468] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.542 [2024-06-07 16:39:11.218477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.542 qpair failed and we were unable to recover it. 00:30:44.542 [2024-06-07 16:39:11.218862] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.542 [2024-06-07 16:39:11.218870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.543 qpair failed and we were unable to recover it. 00:30:44.543 [2024-06-07 16:39:11.219245] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.543 [2024-06-07 16:39:11.219254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.543 qpair failed and we were unable to recover it. 
00:30:44.543 [2024-06-07 16:39:11.219607] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.543 [2024-06-07 16:39:11.219615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.543 qpair failed and we were unable to recover it. 00:30:44.543 [2024-06-07 16:39:11.219970] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.543 [2024-06-07 16:39:11.219978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.543 qpair failed and we were unable to recover it. 00:30:44.543 [2024-06-07 16:39:11.220340] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.543 [2024-06-07 16:39:11.220348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.543 qpair failed and we were unable to recover it. 00:30:44.543 [2024-06-07 16:39:11.220674] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.543 [2024-06-07 16:39:11.220682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.543 qpair failed and we were unable to recover it. 00:30:44.543 [2024-06-07 16:39:11.221046] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.543 [2024-06-07 16:39:11.221054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.543 qpair failed and we were unable to recover it. 
00:30:44.543 [2024-06-07 16:39:11.221498] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.543 [2024-06-07 16:39:11.221506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.543 qpair failed and we were unable to recover it. 00:30:44.543 [2024-06-07 16:39:11.221878] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.543 [2024-06-07 16:39:11.221885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.543 qpair failed and we were unable to recover it. 00:30:44.543 [2024-06-07 16:39:11.222250] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.543 [2024-06-07 16:39:11.222258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.543 qpair failed and we were unable to recover it. 00:30:44.543 [2024-06-07 16:39:11.222618] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.543 [2024-06-07 16:39:11.222626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.543 qpair failed and we were unable to recover it. 00:30:44.543 [2024-06-07 16:39:11.223008] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.543 [2024-06-07 16:39:11.223016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.543 qpair failed and we were unable to recover it. 
00:30:44.543 [2024-06-07 16:39:11.223380] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.543 [2024-06-07 16:39:11.223389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.543 qpair failed and we were unable to recover it. 00:30:44.543 [2024-06-07 16:39:11.223789] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.543 [2024-06-07 16:39:11.223797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.543 qpair failed and we were unable to recover it. 00:30:44.543 [2024-06-07 16:39:11.224183] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.543 [2024-06-07 16:39:11.224191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.543 qpair failed and we were unable to recover it. 00:30:44.543 [2024-06-07 16:39:11.224619] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.543 [2024-06-07 16:39:11.224647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.543 qpair failed and we were unable to recover it. 00:30:44.543 [2024-06-07 16:39:11.225028] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.543 [2024-06-07 16:39:11.225039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.543 qpair failed and we were unable to recover it. 
00:30:44.543 [2024-06-07 16:39:11.225397] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.543 [2024-06-07 16:39:11.225412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.543 qpair failed and we were unable to recover it. 00:30:44.543 [2024-06-07 16:39:11.225761] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.543 [2024-06-07 16:39:11.225769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.543 qpair failed and we were unable to recover it. 00:30:44.543 [2024-06-07 16:39:11.226127] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.543 [2024-06-07 16:39:11.226134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.543 qpair failed and we were unable to recover it. 00:30:44.543 [2024-06-07 16:39:11.226610] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.543 [2024-06-07 16:39:11.226638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.543 qpair failed and we were unable to recover it. 00:30:44.543 [2024-06-07 16:39:11.227016] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.543 [2024-06-07 16:39:11.227025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.543 qpair failed and we were unable to recover it. 
00:30:44.543 [2024-06-07 16:39:11.227418] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.543 [2024-06-07 16:39:11.227431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.543 qpair failed and we were unable to recover it. 00:30:44.543 [2024-06-07 16:39:11.227683] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.543 [2024-06-07 16:39:11.227692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.543 qpair failed and we were unable to recover it. 00:30:44.543 [2024-06-07 16:39:11.228061] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.543 [2024-06-07 16:39:11.228069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.543 qpair failed and we were unable to recover it. 00:30:44.543 [2024-06-07 16:39:11.228440] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.543 [2024-06-07 16:39:11.228448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.543 qpair failed and we were unable to recover it. 00:30:44.543 [2024-06-07 16:39:11.228763] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.543 [2024-06-07 16:39:11.228772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.543 qpair failed and we were unable to recover it. 
00:30:44.543 [2024-06-07 16:39:11.229155] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.543 [2024-06-07 16:39:11.229163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.543 qpair failed and we were unable to recover it. 00:30:44.543 [2024-06-07 16:39:11.229524] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.543 [2024-06-07 16:39:11.229532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.543 qpair failed and we were unable to recover it. 00:30:44.543 [2024-06-07 16:39:11.229893] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.543 [2024-06-07 16:39:11.229901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.543 qpair failed and we were unable to recover it. 00:30:44.543 [2024-06-07 16:39:11.230284] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.543 [2024-06-07 16:39:11.230291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.543 qpair failed and we were unable to recover it. 00:30:44.543 [2024-06-07 16:39:11.230717] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.543 [2024-06-07 16:39:11.230725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.543 qpair failed and we were unable to recover it. 
00:30:44.543 [2024-06-07 16:39:11.231096] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.543 [2024-06-07 16:39:11.231104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.543 qpair failed and we were unable to recover it. 00:30:44.543 [2024-06-07 16:39:11.231469] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.543 [2024-06-07 16:39:11.231476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.543 qpair failed and we were unable to recover it. 00:30:44.543 [2024-06-07 16:39:11.231847] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.543 [2024-06-07 16:39:11.231855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.544 qpair failed and we were unable to recover it. 00:30:44.544 [2024-06-07 16:39:11.232235] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.544 [2024-06-07 16:39:11.232244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.544 qpair failed and we were unable to recover it. 00:30:44.544 [2024-06-07 16:39:11.232610] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.544 [2024-06-07 16:39:11.232619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.544 qpair failed and we were unable to recover it. 
00:30:44.544 [2024-06-07 16:39:11.232983] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.544 [2024-06-07 16:39:11.232992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.544 qpair failed and we were unable to recover it. 00:30:44.544 [2024-06-07 16:39:11.233345] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.544 [2024-06-07 16:39:11.233353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.544 qpair failed and we were unable to recover it. 00:30:44.544 [2024-06-07 16:39:11.233722] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.544 [2024-06-07 16:39:11.233730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.544 qpair failed and we were unable to recover it. 00:30:44.544 [2024-06-07 16:39:11.234133] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.544 [2024-06-07 16:39:11.234141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.544 qpair failed and we were unable to recover it. 00:30:44.544 [2024-06-07 16:39:11.234509] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.544 [2024-06-07 16:39:11.234517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.544 qpair failed and we were unable to recover it. 
00:30:44.544 [2024-06-07 16:39:11.234892] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.544 [2024-06-07 16:39:11.234900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.544 qpair failed and we were unable to recover it. 00:30:44.544 [2024-06-07 16:39:11.235059] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.544 [2024-06-07 16:39:11.235067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.544 qpair failed and we were unable to recover it. 00:30:44.544 [2024-06-07 16:39:11.235409] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.544 [2024-06-07 16:39:11.235418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.544 qpair failed and we were unable to recover it. 00:30:44.544 [2024-06-07 16:39:11.235780] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.544 [2024-06-07 16:39:11.235788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.544 qpair failed and we were unable to recover it. 00:30:44.544 [2024-06-07 16:39:11.236171] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.544 [2024-06-07 16:39:11.236179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.544 qpair failed and we were unable to recover it. 
00:30:44.544 [2024-06-07 16:39:11.236547] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.544 [2024-06-07 16:39:11.236556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.544 qpair failed and we were unable to recover it. 00:30:44.544 [2024-06-07 16:39:11.236921] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.544 [2024-06-07 16:39:11.236930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.544 qpair failed and we were unable to recover it. 00:30:44.544 [2024-06-07 16:39:11.237297] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.544 [2024-06-07 16:39:11.237305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.544 qpair failed and we were unable to recover it. 00:30:44.544 [2024-06-07 16:39:11.237679] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.544 [2024-06-07 16:39:11.237687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.544 qpair failed and we were unable to recover it. 00:30:44.544 [2024-06-07 16:39:11.238059] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.544 [2024-06-07 16:39:11.238066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.544 qpair failed and we were unable to recover it. 
00:30:44.544 [2024-06-07 16:39:11.238430] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.544 [2024-06-07 16:39:11.238438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.544 qpair failed and we were unable to recover it. 00:30:44.544 [2024-06-07 16:39:11.238803] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.544 [2024-06-07 16:39:11.238811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.544 qpair failed and we were unable to recover it. 00:30:44.544 [2024-06-07 16:39:11.239196] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.544 [2024-06-07 16:39:11.239204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.544 qpair failed and we were unable to recover it. 00:30:44.544 [2024-06-07 16:39:11.239573] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.544 [2024-06-07 16:39:11.239582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.544 qpair failed and we were unable to recover it. 00:30:44.544 [2024-06-07 16:39:11.239949] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.544 [2024-06-07 16:39:11.239957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.544 qpair failed and we were unable to recover it. 
00:30:44.544 [2024-06-07 16:39:11.240191] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.544 [2024-06-07 16:39:11.240198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.544 qpair failed and we were unable to recover it. 00:30:44.544 [2024-06-07 16:39:11.240546] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.544 [2024-06-07 16:39:11.240554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.544 qpair failed and we were unable to recover it. 00:30:44.544 [2024-06-07 16:39:11.240903] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.544 [2024-06-07 16:39:11.240911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.544 qpair failed and we were unable to recover it. 00:30:44.544 [2024-06-07 16:39:11.241270] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.544 [2024-06-07 16:39:11.241278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.544 qpair failed and we were unable to recover it. 00:30:44.544 [2024-06-07 16:39:11.241753] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.544 [2024-06-07 16:39:11.241761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.544 qpair failed and we were unable to recover it. 
00:30:44.544 [2024-06-07 16:39:11.242108] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.544 [2024-06-07 16:39:11.242117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.544 qpair failed and we were unable to recover it. 00:30:44.544 [2024-06-07 16:39:11.242482] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.544 [2024-06-07 16:39:11.242491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.544 qpair failed and we were unable to recover it. 00:30:44.544 [2024-06-07 16:39:11.242897] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.544 [2024-06-07 16:39:11.242906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.544 qpair failed and we were unable to recover it. 00:30:44.544 [2024-06-07 16:39:11.243176] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.544 [2024-06-07 16:39:11.243185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.544 qpair failed and we were unable to recover it. 00:30:44.544 [2024-06-07 16:39:11.243576] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.544 [2024-06-07 16:39:11.243586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.544 qpair failed and we were unable to recover it. 
00:30:44.544 [2024-06-07 16:39:11.243962] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.544 [2024-06-07 16:39:11.243970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.544 qpair failed and we were unable to recover it. 00:30:44.544 [2024-06-07 16:39:11.244339] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.544 [2024-06-07 16:39:11.244347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.544 qpair failed and we were unable to recover it. 00:30:44.544 [2024-06-07 16:39:11.244690] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.544 [2024-06-07 16:39:11.244698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.544 qpair failed and we were unable to recover it. 00:30:44.544 [2024-06-07 16:39:11.244965] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.544 [2024-06-07 16:39:11.244973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.544 qpair failed and we were unable to recover it. 00:30:44.544 [2024-06-07 16:39:11.245335] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.544 [2024-06-07 16:39:11.245342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.544 qpair failed and we were unable to recover it. 
00:30:44.544 [2024-06-07 16:39:11.245710] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.544 [2024-06-07 16:39:11.245718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.545 qpair failed and we were unable to recover it. 00:30:44.545 [2024-06-07 16:39:11.246082] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.545 [2024-06-07 16:39:11.246090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.545 qpair failed and we were unable to recover it. 00:30:44.545 [2024-06-07 16:39:11.246467] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.545 [2024-06-07 16:39:11.246475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.545 qpair failed and we were unable to recover it. 00:30:44.545 [2024-06-07 16:39:11.246819] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.545 [2024-06-07 16:39:11.246828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.545 qpair failed and we were unable to recover it. 00:30:44.545 [2024-06-07 16:39:11.247190] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.545 [2024-06-07 16:39:11.247197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.545 qpair failed and we were unable to recover it. 
00:30:44.545 [2024-06-07 16:39:11.247576] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.545 [2024-06-07 16:39:11.247584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.545 qpair failed and we were unable to recover it. 00:30:44.545 [2024-06-07 16:39:11.247882] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.545 [2024-06-07 16:39:11.247890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.545 qpair failed and we were unable to recover it. 00:30:44.545 [2024-06-07 16:39:11.248261] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.545 [2024-06-07 16:39:11.248270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.545 qpair failed and we were unable to recover it. 00:30:44.545 [2024-06-07 16:39:11.248643] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.545 [2024-06-07 16:39:11.248651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.545 qpair failed and we were unable to recover it. 00:30:44.545 [2024-06-07 16:39:11.249023] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.545 [2024-06-07 16:39:11.249031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.545 qpair failed and we were unable to recover it. 
00:30:44.545 [2024-06-07 16:39:11.249382] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.545 [2024-06-07 16:39:11.249391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.545 qpair failed and we were unable to recover it. 00:30:44.545 [2024-06-07 16:39:11.249760] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.545 [2024-06-07 16:39:11.249768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.545 qpair failed and we were unable to recover it. 00:30:44.545 [2024-06-07 16:39:11.250133] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.545 [2024-06-07 16:39:11.250141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.545 qpair failed and we were unable to recover it. 00:30:44.545 [2024-06-07 16:39:11.250640] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.545 [2024-06-07 16:39:11.250669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.545 qpair failed and we were unable to recover it. 00:30:44.545 [2024-06-07 16:39:11.251062] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.545 [2024-06-07 16:39:11.251073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.545 qpair failed and we were unable to recover it. 
00:30:44.545 [2024-06-07 16:39:11.251445] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.545 [2024-06-07 16:39:11.251454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.545 qpair failed and we were unable to recover it. 00:30:44.545 [2024-06-07 16:39:11.251814] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.545 [2024-06-07 16:39:11.251822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.545 qpair failed and we were unable to recover it. 00:30:44.545 [2024-06-07 16:39:11.252127] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.545 [2024-06-07 16:39:11.252138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.545 qpair failed and we were unable to recover it. 00:30:44.545 [2024-06-07 16:39:11.252542] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.545 [2024-06-07 16:39:11.252551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.545 qpair failed and we were unable to recover it. 00:30:44.545 [2024-06-07 16:39:11.252906] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.545 [2024-06-07 16:39:11.252915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.545 qpair failed and we were unable to recover it. 
00:30:44.545 [2024-06-07 16:39:11.253277] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.545 [2024-06-07 16:39:11.253285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.545 qpair failed and we were unable to recover it. 00:30:44.545 [2024-06-07 16:39:11.253523] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.545 [2024-06-07 16:39:11.253531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.545 qpair failed and we were unable to recover it. 00:30:44.545 [2024-06-07 16:39:11.253914] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.545 [2024-06-07 16:39:11.253922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.545 qpair failed and we were unable to recover it. 00:30:44.545 [2024-06-07 16:39:11.254290] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.545 [2024-06-07 16:39:11.254298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.545 qpair failed and we were unable to recover it. 00:30:44.545 [2024-06-07 16:39:11.254569] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.545 [2024-06-07 16:39:11.254577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.545 qpair failed and we were unable to recover it. 
00:30:44.545 [2024-06-07 16:39:11.254960] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.545 [2024-06-07 16:39:11.254968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.545 qpair failed and we were unable to recover it. 00:30:44.545 [2024-06-07 16:39:11.255325] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.545 [2024-06-07 16:39:11.255332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.545 qpair failed and we were unable to recover it. 00:30:44.545 [2024-06-07 16:39:11.255670] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.545 [2024-06-07 16:39:11.255678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.545 qpair failed and we were unable to recover it. 00:30:44.545 [2024-06-07 16:39:11.256050] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.545 [2024-06-07 16:39:11.256058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.545 qpair failed and we were unable to recover it. 00:30:44.545 [2024-06-07 16:39:11.256422] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.546 [2024-06-07 16:39:11.256430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.546 qpair failed and we were unable to recover it. 
00:30:44.546 [2024-06-07 16:39:11.256767] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.546 [2024-06-07 16:39:11.256774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.546 qpair failed and we were unable to recover it. 00:30:44.546 [2024-06-07 16:39:11.257137] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.546 [2024-06-07 16:39:11.257145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.546 qpair failed and we were unable to recover it. 00:30:44.546 [2024-06-07 16:39:11.257568] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.546 [2024-06-07 16:39:11.257578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.546 qpair failed and we were unable to recover it. 00:30:44.546 [2024-06-07 16:39:11.257935] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.546 [2024-06-07 16:39:11.257943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.546 qpair failed and we were unable to recover it. 00:30:44.546 [2024-06-07 16:39:11.258326] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.546 [2024-06-07 16:39:11.258334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.546 qpair failed and we were unable to recover it. 
00:30:44.546 [2024-06-07 16:39:11.258700] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.546 [2024-06-07 16:39:11.258708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.546 qpair failed and we were unable to recover it. 00:30:44.546 [2024-06-07 16:39:11.259072] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.546 [2024-06-07 16:39:11.259081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.546 qpair failed and we were unable to recover it. 00:30:44.546 [2024-06-07 16:39:11.259563] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.546 [2024-06-07 16:39:11.259571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.546 qpair failed and we were unable to recover it. 00:30:44.546 [2024-06-07 16:39:11.259844] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.546 [2024-06-07 16:39:11.259851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.546 qpair failed and we were unable to recover it. 00:30:44.546 [2024-06-07 16:39:11.260215] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.546 [2024-06-07 16:39:11.260223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.546 qpair failed and we were unable to recover it. 
00:30:44.546 [2024-06-07 16:39:11.260681] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.546 [2024-06-07 16:39:11.260689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.546 qpair failed and we were unable to recover it. 00:30:44.546 [2024-06-07 16:39:11.260884] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.546 [2024-06-07 16:39:11.260893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.546 qpair failed and we were unable to recover it. 00:30:44.546 [2024-06-07 16:39:11.261058] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.546 [2024-06-07 16:39:11.261067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.546 qpair failed and we were unable to recover it. 00:30:44.546 [2024-06-07 16:39:11.261505] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.546 [2024-06-07 16:39:11.261513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.546 qpair failed and we were unable to recover it. 00:30:44.546 [2024-06-07 16:39:11.261883] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.546 [2024-06-07 16:39:11.261891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.546 qpair failed and we were unable to recover it. 
00:30:44.546 [2024-06-07 16:39:11.262255] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.546 [2024-06-07 16:39:11.262263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.546 qpair failed and we were unable to recover it. 00:30:44.546 [2024-06-07 16:39:11.262629] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.546 [2024-06-07 16:39:11.262638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.546 qpair failed and we were unable to recover it. 00:30:44.546 [2024-06-07 16:39:11.262889] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.546 [2024-06-07 16:39:11.262897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.546 qpair failed and we were unable to recover it. 00:30:44.546 [2024-06-07 16:39:11.263269] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.546 [2024-06-07 16:39:11.263277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.546 qpair failed and we were unable to recover it. 00:30:44.546 [2024-06-07 16:39:11.263607] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.546 [2024-06-07 16:39:11.263615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.546 qpair failed and we were unable to recover it. 
00:30:44.546 [2024-06-07 16:39:11.264007] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.546 [2024-06-07 16:39:11.264015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.546 qpair failed and we were unable to recover it. 00:30:44.546 [2024-06-07 16:39:11.264393] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.546 [2024-06-07 16:39:11.264405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.546 qpair failed and we were unable to recover it. 00:30:44.546 [2024-06-07 16:39:11.264658] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.546 [2024-06-07 16:39:11.264665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.546 qpair failed and we were unable to recover it. 00:30:44.546 [2024-06-07 16:39:11.265042] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.546 [2024-06-07 16:39:11.265050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.546 qpair failed and we were unable to recover it. 00:30:44.546 [2024-06-07 16:39:11.265424] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.546 [2024-06-07 16:39:11.265432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.546 qpair failed and we were unable to recover it. 
00:30:44.546 [2024-06-07 16:39:11.265832] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.546 [2024-06-07 16:39:11.265840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.546 qpair failed and we were unable to recover it. 00:30:44.546 [2024-06-07 16:39:11.266204] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.546 [2024-06-07 16:39:11.266213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.546 qpair failed and we were unable to recover it. 00:30:44.546 [2024-06-07 16:39:11.266477] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.546 [2024-06-07 16:39:11.266486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.546 qpair failed and we were unable to recover it. 00:30:44.546 [2024-06-07 16:39:11.266814] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.546 [2024-06-07 16:39:11.266823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.546 qpair failed and we were unable to recover it. 00:30:44.546 [2024-06-07 16:39:11.267192] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.546 [2024-06-07 16:39:11.267200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.546 qpair failed and we were unable to recover it. 
00:30:44.546 [2024-06-07 16:39:11.267566] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.546 [2024-06-07 16:39:11.267574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.546 qpair failed and we were unable to recover it. 00:30:44.546 [2024-06-07 16:39:11.267824] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.546 [2024-06-07 16:39:11.267832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.546 qpair failed and we were unable to recover it. 00:30:44.546 [2024-06-07 16:39:11.268182] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.546 [2024-06-07 16:39:11.268190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.546 qpair failed and we were unable to recover it. 00:30:44.546 [2024-06-07 16:39:11.268540] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.546 [2024-06-07 16:39:11.268549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.546 qpair failed and we were unable to recover it. 00:30:44.547 [2024-06-07 16:39:11.268816] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.547 [2024-06-07 16:39:11.268824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.547 qpair failed and we were unable to recover it. 
00:30:44.547 [2024-06-07 16:39:11.269166] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.547 [2024-06-07 16:39:11.269174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.547 qpair failed and we were unable to recover it. 00:30:44.547 [2024-06-07 16:39:11.269575] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.547 [2024-06-07 16:39:11.269583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.547 qpair failed and we were unable to recover it. 00:30:44.547 [2024-06-07 16:39:11.269961] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.547 [2024-06-07 16:39:11.269970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.547 qpair failed and we were unable to recover it. 00:30:44.547 [2024-06-07 16:39:11.270327] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.547 [2024-06-07 16:39:11.270335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.547 qpair failed and we were unable to recover it. 00:30:44.547 [2024-06-07 16:39:11.270680] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.547 [2024-06-07 16:39:11.270689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.547 qpair failed and we were unable to recover it. 
00:30:44.547 [2024-06-07 16:39:11.271087] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.547 [2024-06-07 16:39:11.271095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.547 qpair failed and we were unable to recover it. 00:30:44.547 [2024-06-07 16:39:11.271462] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.547 [2024-06-07 16:39:11.271470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.547 qpair failed and we were unable to recover it. 00:30:44.547 [2024-06-07 16:39:11.271843] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.547 [2024-06-07 16:39:11.271851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.547 qpair failed and we were unable to recover it. 00:30:44.547 [2024-06-07 16:39:11.272095] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.547 [2024-06-07 16:39:11.272102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.547 qpair failed and we were unable to recover it. 00:30:44.547 [2024-06-07 16:39:11.272467] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.547 [2024-06-07 16:39:11.272475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.547 qpair failed and we were unable to recover it. 
00:30:44.547 [2024-06-07 16:39:11.272838] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.547 [2024-06-07 16:39:11.272846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.547 qpair failed and we were unable to recover it. 00:30:44.547 [2024-06-07 16:39:11.273178] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.547 [2024-06-07 16:39:11.273187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.547 qpair failed and we were unable to recover it. 00:30:44.547 [2024-06-07 16:39:11.273625] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.547 [2024-06-07 16:39:11.273633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.547 qpair failed and we were unable to recover it. 00:30:44.547 [2024-06-07 16:39:11.273991] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.547 [2024-06-07 16:39:11.274000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.547 qpair failed and we were unable to recover it. 00:30:44.547 [2024-06-07 16:39:11.274361] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.547 [2024-06-07 16:39:11.274369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.547 qpair failed and we were unable to recover it. 
00:30:44.547 [2024-06-07 16:39:11.274723] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.547 [2024-06-07 16:39:11.274732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.547 qpair failed and we were unable to recover it. 00:30:44.547 [2024-06-07 16:39:11.275188] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.547 [2024-06-07 16:39:11.275195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.547 qpair failed and we were unable to recover it. 00:30:44.547 [2024-06-07 16:39:11.275530] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.547 [2024-06-07 16:39:11.275537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.547 qpair failed and we were unable to recover it. 00:30:44.547 [2024-06-07 16:39:11.275816] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.547 [2024-06-07 16:39:11.275824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.547 qpair failed and we were unable to recover it. 00:30:44.547 [2024-06-07 16:39:11.276170] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.547 [2024-06-07 16:39:11.276179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.547 qpair failed and we were unable to recover it. 
00:30:44.547 [2024-06-07 16:39:11.276546] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.547 [2024-06-07 16:39:11.276554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.547 qpair failed and we were unable to recover it.
00:30:44.547 [2024-06-07 16:39:11.276916] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.547 [2024-06-07 16:39:11.276923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.547 qpair failed and we were unable to recover it.
00:30:44.547 [2024-06-07 16:39:11.277287] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.547 [2024-06-07 16:39:11.277294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.547 qpair failed and we were unable to recover it.
00:30:44.547 [2024-06-07 16:39:11.277680] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.547 [2024-06-07 16:39:11.277688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.547 qpair failed and we were unable to recover it.
00:30:44.547 [2024-06-07 16:39:11.278111] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.547 [2024-06-07 16:39:11.278118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.547 qpair failed and we were unable to recover it.
00:30:44.547 [2024-06-07 16:39:11.278490] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.547 [2024-06-07 16:39:11.278498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.547 qpair failed and we were unable to recover it.
00:30:44.547 [2024-06-07 16:39:11.278869] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.547 [2024-06-07 16:39:11.278877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.547 qpair failed and we were unable to recover it.
00:30:44.547 [2024-06-07 16:39:11.279108] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.547 [2024-06-07 16:39:11.279116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.547 qpair failed and we were unable to recover it.
00:30:44.547 [2024-06-07 16:39:11.279485] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.547 [2024-06-07 16:39:11.279494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.547 qpair failed and we were unable to recover it.
00:30:44.547 [2024-06-07 16:39:11.279864] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.547 [2024-06-07 16:39:11.279872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.547 qpair failed and we were unable to recover it.
00:30:44.547 [2024-06-07 16:39:11.280270] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.547 [2024-06-07 16:39:11.280279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.547 qpair failed and we were unable to recover it.
00:30:44.547 [2024-06-07 16:39:11.280649] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.547 [2024-06-07 16:39:11.280658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.547 qpair failed and we were unable to recover it.
00:30:44.547 [2024-06-07 16:39:11.280967] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.547 [2024-06-07 16:39:11.280976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.547 qpair failed and we were unable to recover it.
00:30:44.547 [2024-06-07 16:39:11.281362] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.548 [2024-06-07 16:39:11.281369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.548 qpair failed and we were unable to recover it.
00:30:44.548 [2024-06-07 16:39:11.281776] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.548 [2024-06-07 16:39:11.281785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.548 qpair failed and we were unable to recover it.
00:30:44.548 [2024-06-07 16:39:11.282127] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.548 [2024-06-07 16:39:11.282136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.548 qpair failed and we were unable to recover it.
00:30:44.548 [2024-06-07 16:39:11.282474] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.548 [2024-06-07 16:39:11.282482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.548 qpair failed and we were unable to recover it.
00:30:44.548 [2024-06-07 16:39:11.282846] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.548 [2024-06-07 16:39:11.282854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.548 qpair failed and we were unable to recover it.
00:30:44.548 [2024-06-07 16:39:11.283217] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.548 [2024-06-07 16:39:11.283225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.548 qpair failed and we were unable to recover it.
00:30:44.548 [2024-06-07 16:39:11.283583] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.548 [2024-06-07 16:39:11.283590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.548 qpair failed and we were unable to recover it.
00:30:44.548 [2024-06-07 16:39:11.283963] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.548 [2024-06-07 16:39:11.283971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.548 qpair failed and we were unable to recover it.
00:30:44.548 [2024-06-07 16:39:11.284326] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.548 [2024-06-07 16:39:11.284334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.548 qpair failed and we were unable to recover it.
00:30:44.548 [2024-06-07 16:39:11.284701] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.548 [2024-06-07 16:39:11.284709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.548 qpair failed and we were unable to recover it.
00:30:44.548 [2024-06-07 16:39:11.285045] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.548 [2024-06-07 16:39:11.285054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.548 qpair failed and we were unable to recover it.
00:30:44.548 [2024-06-07 16:39:11.285327] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.548 [2024-06-07 16:39:11.285335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.548 qpair failed and we were unable to recover it.
00:30:44.548 [2024-06-07 16:39:11.285654] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.548 [2024-06-07 16:39:11.285662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.548 qpair failed and we were unable to recover it.
00:30:44.548 [2024-06-07 16:39:11.286029] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.548 [2024-06-07 16:39:11.286037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.548 qpair failed and we were unable to recover it.
00:30:44.548 [2024-06-07 16:39:11.286405] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.548 [2024-06-07 16:39:11.286413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.548 qpair failed and we were unable to recover it.
00:30:44.548 [2024-06-07 16:39:11.286664] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.548 [2024-06-07 16:39:11.286671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.548 qpair failed and we were unable to recover it.
00:30:44.548 [2024-06-07 16:39:11.287030] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.548 [2024-06-07 16:39:11.287039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.548 qpair failed and we were unable to recover it.
00:30:44.548 [2024-06-07 16:39:11.287396] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.548 [2024-06-07 16:39:11.287406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.548 qpair failed and we were unable to recover it.
00:30:44.548 [2024-06-07 16:39:11.287768] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.548 [2024-06-07 16:39:11.287776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.548 qpair failed and we were unable to recover it.
00:30:44.548 [2024-06-07 16:39:11.288172] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.548 [2024-06-07 16:39:11.288181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.548 qpair failed and we were unable to recover it.
00:30:44.548 [2024-06-07 16:39:11.288419] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.548 [2024-06-07 16:39:11.288428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.548 qpair failed and we were unable to recover it.
00:30:44.548 [2024-06-07 16:39:11.288779] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.548 [2024-06-07 16:39:11.288787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.548 qpair failed and we were unable to recover it.
00:30:44.548 [2024-06-07 16:39:11.289023] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.548 [2024-06-07 16:39:11.289031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.548 qpair failed and we were unable to recover it.
00:30:44.548 [2024-06-07 16:39:11.289269] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.548 [2024-06-07 16:39:11.289277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.548 qpair failed and we were unable to recover it.
00:30:44.548 [2024-06-07 16:39:11.289637] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.548 [2024-06-07 16:39:11.289645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.548 qpair failed and we were unable to recover it.
00:30:44.548 [2024-06-07 16:39:11.289884] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.548 [2024-06-07 16:39:11.289892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.548 qpair failed and we were unable to recover it.
00:30:44.548 [2024-06-07 16:39:11.290220] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.548 [2024-06-07 16:39:11.290228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.548 qpair failed and we were unable to recover it.
00:30:44.548 [2024-06-07 16:39:11.290595] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.548 [2024-06-07 16:39:11.290604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.548 qpair failed and we were unable to recover it.
00:30:44.548 [2024-06-07 16:39:11.290990] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.548 [2024-06-07 16:39:11.290998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.548 qpair failed and we were unable to recover it.
00:30:44.548 [2024-06-07 16:39:11.291360] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.548 [2024-06-07 16:39:11.291369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.548 qpair failed and we were unable to recover it.
00:30:44.548 [2024-06-07 16:39:11.291570] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.548 [2024-06-07 16:39:11.291580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.548 qpair failed and we were unable to recover it.
00:30:44.548 [2024-06-07 16:39:11.291906] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.548 [2024-06-07 16:39:11.291914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.548 qpair failed and we were unable to recover it.
00:30:44.548 [2024-06-07 16:39:11.292301] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.548 [2024-06-07 16:39:11.292309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.548 qpair failed and we were unable to recover it.
00:30:44.548 [2024-06-07 16:39:11.292680] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.549 [2024-06-07 16:39:11.292688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.549 qpair failed and we were unable to recover it.
00:30:44.549 [2024-06-07 16:39:11.293101] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.549 [2024-06-07 16:39:11.293108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.549 qpair failed and we were unable to recover it.
00:30:44.549 [2024-06-07 16:39:11.293494] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.549 [2024-06-07 16:39:11.293503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.549 qpair failed and we were unable to recover it.
00:30:44.549 [2024-06-07 16:39:11.293917] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.549 [2024-06-07 16:39:11.293924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.549 qpair failed and we were unable to recover it.
00:30:44.549 [2024-06-07 16:39:11.294297] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.549 [2024-06-07 16:39:11.294306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.549 qpair failed and we were unable to recover it.
00:30:44.549 [2024-06-07 16:39:11.294764] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.549 [2024-06-07 16:39:11.294772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.549 qpair failed and we were unable to recover it.
00:30:44.549 [2024-06-07 16:39:11.295127] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.549 [2024-06-07 16:39:11.295137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.549 qpair failed and we were unable to recover it.
00:30:44.549 [2024-06-07 16:39:11.295331] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.549 [2024-06-07 16:39:11.295340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.549 qpair failed and we were unable to recover it.
00:30:44.549 [2024-06-07 16:39:11.295776] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.549 [2024-06-07 16:39:11.295785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.549 qpair failed and we were unable to recover it.
00:30:44.549 [2024-06-07 16:39:11.296155] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.549 [2024-06-07 16:39:11.296164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.549 qpair failed and we were unable to recover it.
00:30:44.549 [2024-06-07 16:39:11.296625] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.549 [2024-06-07 16:39:11.296633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.549 qpair failed and we were unable to recover it.
00:30:44.549 [2024-06-07 16:39:11.296987] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.549 [2024-06-07 16:39:11.296995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.549 qpair failed and we were unable to recover it.
00:30:44.549 [2024-06-07 16:39:11.297246] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.549 [2024-06-07 16:39:11.297254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.549 qpair failed and we were unable to recover it.
00:30:44.549 [2024-06-07 16:39:11.297553] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.549 [2024-06-07 16:39:11.297560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.549 qpair failed and we were unable to recover it.
00:30:44.549 [2024-06-07 16:39:11.297936] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.549 [2024-06-07 16:39:11.297944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.549 qpair failed and we were unable to recover it.
00:30:44.549 [2024-06-07 16:39:11.298332] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.549 [2024-06-07 16:39:11.298339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.549 qpair failed and we were unable to recover it.
00:30:44.549 [2024-06-07 16:39:11.298610] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.549 [2024-06-07 16:39:11.298618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.549 qpair failed and we were unable to recover it.
00:30:44.549 [2024-06-07 16:39:11.298979] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.549 [2024-06-07 16:39:11.298987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.549 qpair failed and we were unable to recover it.
00:30:44.549 [2024-06-07 16:39:11.299341] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.549 [2024-06-07 16:39:11.299349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.549 qpair failed and we were unable to recover it.
00:30:44.549 [2024-06-07 16:39:11.299626] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.549 [2024-06-07 16:39:11.299634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.549 qpair failed and we were unable to recover it.
00:30:44.549 [2024-06-07 16:39:11.300000] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.549 [2024-06-07 16:39:11.300008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.549 qpair failed and we were unable to recover it.
00:30:44.549 [2024-06-07 16:39:11.300360] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.549 [2024-06-07 16:39:11.300367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.549 qpair failed and we were unable to recover it.
00:30:44.549 [2024-06-07 16:39:11.300601] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.549 [2024-06-07 16:39:11.300609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.549 qpair failed and we were unable to recover it.
00:30:44.549 [2024-06-07 16:39:11.300997] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.549 [2024-06-07 16:39:11.301005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.549 qpair failed and we were unable to recover it.
00:30:44.549 [2024-06-07 16:39:11.301373] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.549 [2024-06-07 16:39:11.301381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.549 qpair failed and we were unable to recover it.
00:30:44.549 [2024-06-07 16:39:11.301758] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.549 [2024-06-07 16:39:11.301766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.549 qpair failed and we were unable to recover it.
00:30:44.549 [2024-06-07 16:39:11.302179] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.549 [2024-06-07 16:39:11.302187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.549 qpair failed and we were unable to recover it.
00:30:44.549 [2024-06-07 16:39:11.302552] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.549 [2024-06-07 16:39:11.302561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.549 qpair failed and we were unable to recover it.
00:30:44.549 [2024-06-07 16:39:11.302877] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.549 [2024-06-07 16:39:11.302885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.549 qpair failed and we were unable to recover it.
00:30:44.549 [2024-06-07 16:39:11.303212] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.549 [2024-06-07 16:39:11.303220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.549 qpair failed and we were unable to recover it.
00:30:44.549 [2024-06-07 16:39:11.303456] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.549 [2024-06-07 16:39:11.303464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.549 qpair failed and we were unable to recover it.
00:30:44.549 [2024-06-07 16:39:11.303876] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.549 [2024-06-07 16:39:11.303884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.550 qpair failed and we were unable to recover it.
00:30:44.550 [2024-06-07 16:39:11.304249] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.550 [2024-06-07 16:39:11.304257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.550 qpair failed and we were unable to recover it.
00:30:44.550 [2024-06-07 16:39:11.304628] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.550 [2024-06-07 16:39:11.304637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.550 qpair failed and we were unable to recover it.
00:30:44.550 [2024-06-07 16:39:11.305002] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.550 [2024-06-07 16:39:11.305010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.550 qpair failed and we were unable to recover it.
00:30:44.550 [2024-06-07 16:39:11.305398] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.550 [2024-06-07 16:39:11.305416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.550 qpair failed and we were unable to recover it.
00:30:44.550 [2024-06-07 16:39:11.305764] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.550 [2024-06-07 16:39:11.305771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.550 qpair failed and we were unable to recover it.
00:30:44.550 [2024-06-07 16:39:11.306156] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.550 [2024-06-07 16:39:11.306164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.550 qpair failed and we were unable to recover it.
00:30:44.550 [2024-06-07 16:39:11.306726] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.550 [2024-06-07 16:39:11.306754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.550 qpair failed and we were unable to recover it.
00:30:44.550 [2024-06-07 16:39:11.307142] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.550 [2024-06-07 16:39:11.307151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.550 qpair failed and we were unable to recover it.
00:30:44.550 [2024-06-07 16:39:11.307622] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.550 [2024-06-07 16:39:11.307651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.550 qpair failed and we were unable to recover it.
00:30:44.550 [2024-06-07 16:39:11.308042] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.550 [2024-06-07 16:39:11.308052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.550 qpair failed and we were unable to recover it.
00:30:44.550 [2024-06-07 16:39:11.308252] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.550 [2024-06-07 16:39:11.308260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.550 qpair failed and we were unable to recover it.
00:30:44.550 [2024-06-07 16:39:11.308618] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.550 [2024-06-07 16:39:11.308627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.550 qpair failed and we were unable to recover it.
00:30:44.550 [2024-06-07 16:39:11.309007] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.550 [2024-06-07 16:39:11.309016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.550 qpair failed and we were unable to recover it.
00:30:44.550 [2024-06-07 16:39:11.309386] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.550 [2024-06-07 16:39:11.309394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.550 qpair failed and we were unable to recover it.
00:30:44.550 [2024-06-07 16:39:11.309665] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.550 [2024-06-07 16:39:11.309676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.550 qpair failed and we were unable to recover it.
00:30:44.550 [2024-06-07 16:39:11.310080] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.550 [2024-06-07 16:39:11.310101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.550 qpair failed and we were unable to recover it.
00:30:44.550 [2024-06-07 16:39:11.310464] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.550 [2024-06-07 16:39:11.310474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.550 qpair failed and we were unable to recover it.
00:30:44.550 [2024-06-07 16:39:11.310832] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.550 [2024-06-07 16:39:11.310841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.550 qpair failed and we were unable to recover it. 00:30:44.550 [2024-06-07 16:39:11.311105] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.550 [2024-06-07 16:39:11.311112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.550 qpair failed and we were unable to recover it. 00:30:44.550 [2024-06-07 16:39:11.311349] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.550 [2024-06-07 16:39:11.311358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.550 qpair failed and we were unable to recover it. 00:30:44.550 [2024-06-07 16:39:11.311677] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.550 [2024-06-07 16:39:11.311686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.550 qpair failed and we were unable to recover it. 00:30:44.550 [2024-06-07 16:39:11.311911] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.550 [2024-06-07 16:39:11.311919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.550 qpair failed and we were unable to recover it. 
00:30:44.550 [2024-06-07 16:39:11.312288] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.550 [2024-06-07 16:39:11.312296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.550 qpair failed and we were unable to recover it. 00:30:44.550 [2024-06-07 16:39:11.312683] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.550 [2024-06-07 16:39:11.312692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.550 qpair failed and we were unable to recover it. 00:30:44.550 [2024-06-07 16:39:11.312902] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.550 [2024-06-07 16:39:11.312910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.550 qpair failed and we were unable to recover it. 00:30:44.550 [2024-06-07 16:39:11.313382] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.550 [2024-06-07 16:39:11.313390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.550 qpair failed and we were unable to recover it. 00:30:44.550 [2024-06-07 16:39:11.313762] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.550 [2024-06-07 16:39:11.313771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.550 qpair failed and we were unable to recover it. 
00:30:44.550 [2024-06-07 16:39:11.314185] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.550 [2024-06-07 16:39:11.314193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.550 qpair failed and we were unable to recover it. 00:30:44.551 [2024-06-07 16:39:11.314578] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.551 [2024-06-07 16:39:11.314593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.551 qpair failed and we were unable to recover it. 00:30:44.551 [2024-06-07 16:39:11.314838] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.551 [2024-06-07 16:39:11.314846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.551 qpair failed and we were unable to recover it. 00:30:44.551 [2024-06-07 16:39:11.315153] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.551 [2024-06-07 16:39:11.315161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.551 qpair failed and we were unable to recover it. 00:30:44.551 [2024-06-07 16:39:11.315510] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.551 [2024-06-07 16:39:11.315519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.551 qpair failed and we were unable to recover it. 
00:30:44.551 [2024-06-07 16:39:11.315865] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.551 [2024-06-07 16:39:11.315873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.551 qpair failed and we were unable to recover it. 00:30:44.551 [2024-06-07 16:39:11.316130] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.551 [2024-06-07 16:39:11.316138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.551 qpair failed and we were unable to recover it. 00:30:44.551 [2024-06-07 16:39:11.316460] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.551 [2024-06-07 16:39:11.316468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.551 qpair failed and we were unable to recover it. 00:30:44.551 [2024-06-07 16:39:11.316716] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.551 [2024-06-07 16:39:11.316723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.551 qpair failed and we were unable to recover it. 00:30:44.551 [2024-06-07 16:39:11.317074] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.551 [2024-06-07 16:39:11.317082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.551 qpair failed and we were unable to recover it. 
00:30:44.551 [2024-06-07 16:39:11.317351] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.551 [2024-06-07 16:39:11.317359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.551 qpair failed and we were unable to recover it. 00:30:44.551 [2024-06-07 16:39:11.317539] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.551 [2024-06-07 16:39:11.317547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.551 qpair failed and we were unable to recover it. 00:30:44.551 [2024-06-07 16:39:11.317823] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.551 [2024-06-07 16:39:11.317831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.551 qpair failed and we were unable to recover it. 00:30:44.551 [2024-06-07 16:39:11.318030] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.551 [2024-06-07 16:39:11.318039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.551 qpair failed and we were unable to recover it. 00:30:44.551 [2024-06-07 16:39:11.318393] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.551 [2024-06-07 16:39:11.318405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.551 qpair failed and we were unable to recover it. 
00:30:44.551 [2024-06-07 16:39:11.318694] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.551 [2024-06-07 16:39:11.318702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.551 qpair failed and we were unable to recover it. 00:30:44.551 [2024-06-07 16:39:11.319089] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.551 [2024-06-07 16:39:11.319097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.551 qpair failed and we were unable to recover it. 00:30:44.551 [2024-06-07 16:39:11.319408] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.551 [2024-06-07 16:39:11.319416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.551 qpair failed and we were unable to recover it. 00:30:44.551 [2024-06-07 16:39:11.319642] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.551 [2024-06-07 16:39:11.319649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.551 qpair failed and we were unable to recover it. 00:30:44.551 [2024-06-07 16:39:11.319905] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.551 [2024-06-07 16:39:11.319913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.551 qpair failed and we were unable to recover it. 
00:30:44.551 [2024-06-07 16:39:11.320307] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.551 [2024-06-07 16:39:11.320315] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.551 qpair failed and we were unable to recover it. 00:30:44.551 [2024-06-07 16:39:11.320677] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.551 [2024-06-07 16:39:11.320685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.551 qpair failed and we were unable to recover it. 00:30:44.551 [2024-06-07 16:39:11.321070] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.551 [2024-06-07 16:39:11.321078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.551 qpair failed and we were unable to recover it. 00:30:44.551 [2024-06-07 16:39:11.321495] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.551 [2024-06-07 16:39:11.321513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.551 qpair failed and we were unable to recover it. 00:30:44.551 [2024-06-07 16:39:11.321709] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.551 [2024-06-07 16:39:11.321718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.551 qpair failed and we were unable to recover it. 
00:30:44.551 [2024-06-07 16:39:11.321984] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.551 [2024-06-07 16:39:11.321993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.551 qpair failed and we were unable to recover it. 00:30:44.551 [2024-06-07 16:39:11.322316] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.551 [2024-06-07 16:39:11.322323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.551 qpair failed and we were unable to recover it. 00:30:44.551 [2024-06-07 16:39:11.322642] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.551 [2024-06-07 16:39:11.322654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.551 qpair failed and we were unable to recover it. 00:30:44.551 [2024-06-07 16:39:11.323043] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.551 [2024-06-07 16:39:11.323051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.551 qpair failed and we were unable to recover it. 00:30:44.551 [2024-06-07 16:39:11.323408] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.551 [2024-06-07 16:39:11.323417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.551 qpair failed and we were unable to recover it. 
00:30:44.551 [2024-06-07 16:39:11.323771] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.551 [2024-06-07 16:39:11.323779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.551 qpair failed and we were unable to recover it. 00:30:44.551 [2024-06-07 16:39:11.324043] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.551 [2024-06-07 16:39:11.324051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.551 qpair failed and we were unable to recover it. 00:30:44.551 [2024-06-07 16:39:11.324336] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.551 [2024-06-07 16:39:11.324344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.551 qpair failed and we were unable to recover it. 00:30:44.551 [2024-06-07 16:39:11.324709] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.552 [2024-06-07 16:39:11.324718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.552 qpair failed and we were unable to recover it. 00:30:44.552 [2024-06-07 16:39:11.325079] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.552 [2024-06-07 16:39:11.325088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.552 qpair failed and we were unable to recover it. 
00:30:44.552 [2024-06-07 16:39:11.325347] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.552 [2024-06-07 16:39:11.325355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.552 qpair failed and we were unable to recover it. 00:30:44.552 [2024-06-07 16:39:11.325760] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.552 [2024-06-07 16:39:11.325768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.552 qpair failed and we were unable to recover it. 00:30:44.552 [2024-06-07 16:39:11.326126] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.552 [2024-06-07 16:39:11.326134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.552 qpair failed and we were unable to recover it. 00:30:44.552 [2024-06-07 16:39:11.326391] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.552 [2024-06-07 16:39:11.326399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.552 qpair failed and we were unable to recover it. 00:30:44.552 [2024-06-07 16:39:11.326827] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.552 [2024-06-07 16:39:11.326836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.552 qpair failed and we were unable to recover it. 
00:30:44.552 [2024-06-07 16:39:11.327192] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.552 [2024-06-07 16:39:11.327201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.552 qpair failed and we were unable to recover it. 00:30:44.552 [2024-06-07 16:39:11.327588] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.552 [2024-06-07 16:39:11.327596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.552 qpair failed and we were unable to recover it. 00:30:44.552 [2024-06-07 16:39:11.327837] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.552 [2024-06-07 16:39:11.327845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.552 qpair failed and we were unable to recover it. 00:30:44.552 [2024-06-07 16:39:11.328237] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.552 [2024-06-07 16:39:11.328246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.552 qpair failed and we were unable to recover it. 00:30:44.552 [2024-06-07 16:39:11.328528] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.552 [2024-06-07 16:39:11.328536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.552 qpair failed and we were unable to recover it. 
00:30:44.552 [2024-06-07 16:39:11.328903] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.552 [2024-06-07 16:39:11.328911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.552 qpair failed and we were unable to recover it. 00:30:44.552 [2024-06-07 16:39:11.329212] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.552 [2024-06-07 16:39:11.329220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.552 qpair failed and we were unable to recover it. 00:30:44.552 [2024-06-07 16:39:11.329445] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.552 [2024-06-07 16:39:11.329454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.552 qpair failed and we were unable to recover it. 00:30:44.552 [2024-06-07 16:39:11.329832] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.552 [2024-06-07 16:39:11.329840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.552 qpair failed and we were unable to recover it. 00:30:44.552 [2024-06-07 16:39:11.330253] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.552 [2024-06-07 16:39:11.330261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.552 qpair failed and we were unable to recover it. 
00:30:44.552 [2024-06-07 16:39:11.330544] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.552 [2024-06-07 16:39:11.330553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.552 qpair failed and we were unable to recover it. 00:30:44.552 [2024-06-07 16:39:11.330935] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.552 [2024-06-07 16:39:11.330943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.552 qpair failed and we were unable to recover it. 00:30:44.552 [2024-06-07 16:39:11.331214] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.552 [2024-06-07 16:39:11.331222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.552 qpair failed and we were unable to recover it. 00:30:44.552 [2024-06-07 16:39:11.331604] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.552 [2024-06-07 16:39:11.331612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.552 qpair failed and we were unable to recover it. 00:30:44.552 [2024-06-07 16:39:11.331984] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.552 [2024-06-07 16:39:11.331994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.552 qpair failed and we were unable to recover it. 
00:30:44.552 [2024-06-07 16:39:11.332386] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.552 [2024-06-07 16:39:11.332394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.552 qpair failed and we were unable to recover it. 00:30:44.552 [2024-06-07 16:39:11.332687] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.552 [2024-06-07 16:39:11.332695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.552 qpair failed and we were unable to recover it. 00:30:44.552 [2024-06-07 16:39:11.333079] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.552 [2024-06-07 16:39:11.333087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.552 qpair failed and we were unable to recover it. 00:30:44.552 [2024-06-07 16:39:11.333452] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.552 [2024-06-07 16:39:11.333460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.552 qpair failed and we were unable to recover it. 00:30:44.552 [2024-06-07 16:39:11.333855] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.552 [2024-06-07 16:39:11.333863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.552 qpair failed and we were unable to recover it. 
00:30:44.552 [2024-06-07 16:39:11.334128] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.552 [2024-06-07 16:39:11.334136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.552 qpair failed and we were unable to recover it. 00:30:44.552 [2024-06-07 16:39:11.334503] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.552 [2024-06-07 16:39:11.334512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.552 qpair failed and we were unable to recover it. 00:30:44.552 [2024-06-07 16:39:11.334903] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.552 [2024-06-07 16:39:11.334912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.552 qpair failed and we were unable to recover it. 00:30:44.552 [2024-06-07 16:39:11.335300] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.552 [2024-06-07 16:39:11.335308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.552 qpair failed and we were unable to recover it. 00:30:44.552 [2024-06-07 16:39:11.335691] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.552 [2024-06-07 16:39:11.335699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.552 qpair failed and we were unable to recover it. 
00:30:44.552 [2024-06-07 16:39:11.336063] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.552 [2024-06-07 16:39:11.336071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.552 qpair failed and we were unable to recover it. 00:30:44.552 [2024-06-07 16:39:11.336439] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.552 [2024-06-07 16:39:11.336448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.552 qpair failed and we were unable to recover it. 00:30:44.552 [2024-06-07 16:39:11.336800] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.552 [2024-06-07 16:39:11.336810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.552 qpair failed and we were unable to recover it. 00:30:44.552 [2024-06-07 16:39:11.337193] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.552 [2024-06-07 16:39:11.337201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.552 qpair failed and we were unable to recover it. 00:30:44.552 [2024-06-07 16:39:11.337654] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.553 [2024-06-07 16:39:11.337662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.553 qpair failed and we were unable to recover it. 
00:30:44.553 [2024-06-07 16:39:11.337895] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.553 [2024-06-07 16:39:11.337904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.553 qpair failed and we were unable to recover it. 00:30:44.553 [2024-06-07 16:39:11.338289] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.553 [2024-06-07 16:39:11.338297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.553 qpair failed and we were unable to recover it. 00:30:44.553 [2024-06-07 16:39:11.338376] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.553 [2024-06-07 16:39:11.338383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.553 qpair failed and we were unable to recover it. 00:30:44.553 [2024-06-07 16:39:11.338680] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.553 [2024-06-07 16:39:11.338688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.553 qpair failed and we were unable to recover it. 00:30:44.553 [2024-06-07 16:39:11.339052] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.553 [2024-06-07 16:39:11.339060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.553 qpair failed and we were unable to recover it. 
00:30:44.553 [2024-06-07 16:39:11.339437] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.553 [2024-06-07 16:39:11.339446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.553 qpair failed and we were unable to recover it. 00:30:44.553 [2024-06-07 16:39:11.339820] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.553 [2024-06-07 16:39:11.339828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.553 qpair failed and we were unable to recover it. 00:30:44.553 [2024-06-07 16:39:11.340197] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.553 [2024-06-07 16:39:11.340205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.553 qpair failed and we were unable to recover it. 00:30:44.553 [2024-06-07 16:39:11.340582] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.553 [2024-06-07 16:39:11.340591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.553 qpair failed and we were unable to recover it. 00:30:44.553 [2024-06-07 16:39:11.340838] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.553 [2024-06-07 16:39:11.340846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.553 qpair failed and we were unable to recover it. 
00:30:44.830 [2024-06-07 16:39:11.381038] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.830 [2024-06-07 16:39:11.381048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.830 qpair failed and we were unable to recover it. 00:30:44.830 [2024-06-07 16:39:11.381428] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.830 [2024-06-07 16:39:11.381436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.830 qpair failed and we were unable to recover it. 00:30:44.830 [2024-06-07 16:39:11.381798] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.830 [2024-06-07 16:39:11.381806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.830 qpair failed and we were unable to recover it. 00:30:44.830 [2024-06-07 16:39:11.382178] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.830 [2024-06-07 16:39:11.382186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.830 qpair failed and we were unable to recover it. 00:30:44.830 [2024-06-07 16:39:11.382450] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.830 [2024-06-07 16:39:11.382458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.830 qpair failed and we were unable to recover it. 
00:30:44.830 [2024-06-07 16:39:11.382715] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.830 [2024-06-07 16:39:11.382722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.830 qpair failed and we were unable to recover it. 00:30:44.830 [2024-06-07 16:39:11.383078] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.830 [2024-06-07 16:39:11.383086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.830 qpair failed and we were unable to recover it. 00:30:44.830 [2024-06-07 16:39:11.383472] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.830 [2024-06-07 16:39:11.383480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.830 qpair failed and we were unable to recover it. 00:30:44.830 [2024-06-07 16:39:11.383827] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.830 [2024-06-07 16:39:11.383835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.830 qpair failed and we were unable to recover it. 00:30:44.830 [2024-06-07 16:39:11.384220] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.830 [2024-06-07 16:39:11.384228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.830 qpair failed and we were unable to recover it. 
00:30:44.830 [2024-06-07 16:39:11.384415] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.830 [2024-06-07 16:39:11.384423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.830 qpair failed and we were unable to recover it. 00:30:44.830 [2024-06-07 16:39:11.384871] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.830 [2024-06-07 16:39:11.384880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.830 qpair failed and we were unable to recover it. 00:30:44.830 [2024-06-07 16:39:11.385211] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.830 [2024-06-07 16:39:11.385219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.830 qpair failed and we were unable to recover it. 00:30:44.830 [2024-06-07 16:39:11.385544] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.830 [2024-06-07 16:39:11.385553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.830 qpair failed and we were unable to recover it. 00:30:44.830 [2024-06-07 16:39:11.385928] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.830 [2024-06-07 16:39:11.385937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.830 qpair failed and we were unable to recover it. 
00:30:44.830 [2024-06-07 16:39:11.386324] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.830 [2024-06-07 16:39:11.386332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.830 qpair failed and we were unable to recover it. 00:30:44.830 [2024-06-07 16:39:11.386567] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.830 [2024-06-07 16:39:11.386575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.830 qpair failed and we were unable to recover it. 00:30:44.830 [2024-06-07 16:39:11.386954] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.830 [2024-06-07 16:39:11.386962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.830 qpair failed and we were unable to recover it. 00:30:44.830 [2024-06-07 16:39:11.387320] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.830 [2024-06-07 16:39:11.387328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.830 qpair failed and we were unable to recover it. 00:30:44.830 [2024-06-07 16:39:11.387593] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.830 [2024-06-07 16:39:11.387601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.830 qpair failed and we were unable to recover it. 
00:30:44.830 [2024-06-07 16:39:11.387857] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.830 [2024-06-07 16:39:11.387865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.830 qpair failed and we were unable to recover it. 00:30:44.830 [2024-06-07 16:39:11.388098] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.830 [2024-06-07 16:39:11.388107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.830 qpair failed and we were unable to recover it. 00:30:44.830 [2024-06-07 16:39:11.388380] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.830 [2024-06-07 16:39:11.388388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.830 qpair failed and we were unable to recover it. 00:30:44.830 [2024-06-07 16:39:11.388642] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.830 [2024-06-07 16:39:11.388651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.830 qpair failed and we were unable to recover it. 00:30:44.830 [2024-06-07 16:39:11.389006] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.830 [2024-06-07 16:39:11.389015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.830 qpair failed and we were unable to recover it. 
00:30:44.830 [2024-06-07 16:39:11.389376] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.830 [2024-06-07 16:39:11.389385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.830 qpair failed and we were unable to recover it. 00:30:44.830 [2024-06-07 16:39:11.389670] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.830 [2024-06-07 16:39:11.389679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.830 qpair failed and we were unable to recover it. 00:30:44.830 [2024-06-07 16:39:11.390060] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.830 [2024-06-07 16:39:11.390069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.830 qpair failed and we were unable to recover it. 00:30:44.830 [2024-06-07 16:39:11.390433] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.830 [2024-06-07 16:39:11.390442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.830 qpair failed and we were unable to recover it. 00:30:44.830 [2024-06-07 16:39:11.390795] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.830 [2024-06-07 16:39:11.390803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.830 qpair failed and we were unable to recover it. 
00:30:44.830 [2024-06-07 16:39:11.391048] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.830 [2024-06-07 16:39:11.391056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.830 qpair failed and we were unable to recover it. 00:30:44.830 [2024-06-07 16:39:11.391295] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.831 [2024-06-07 16:39:11.391303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.831 qpair failed and we were unable to recover it. 00:30:44.831 [2024-06-07 16:39:11.391637] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.831 [2024-06-07 16:39:11.391646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.831 qpair failed and we were unable to recover it. 00:30:44.831 [2024-06-07 16:39:11.392010] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.831 [2024-06-07 16:39:11.392018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.831 qpair failed and we were unable to recover it. 00:30:44.831 [2024-06-07 16:39:11.392263] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.831 [2024-06-07 16:39:11.392271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.831 qpair failed and we were unable to recover it. 
00:30:44.831 [2024-06-07 16:39:11.392541] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.831 [2024-06-07 16:39:11.392549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.831 qpair failed and we were unable to recover it. 00:30:44.831 [2024-06-07 16:39:11.392922] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.831 [2024-06-07 16:39:11.392929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.831 qpair failed and we were unable to recover it. 00:30:44.831 [2024-06-07 16:39:11.393290] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.831 [2024-06-07 16:39:11.393299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.831 qpair failed and we were unable to recover it. 00:30:44.831 [2024-06-07 16:39:11.393569] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.831 [2024-06-07 16:39:11.393576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.831 qpair failed and we were unable to recover it. 00:30:44.831 [2024-06-07 16:39:11.393946] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.831 [2024-06-07 16:39:11.393954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.831 qpair failed and we were unable to recover it. 
00:30:44.831 [2024-06-07 16:39:11.394316] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.831 [2024-06-07 16:39:11.394325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.831 qpair failed and we were unable to recover it. 00:30:44.831 [2024-06-07 16:39:11.394474] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.831 [2024-06-07 16:39:11.394483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.831 qpair failed and we were unable to recover it. 00:30:44.831 [2024-06-07 16:39:11.394866] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.831 [2024-06-07 16:39:11.394875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.831 qpair failed and we were unable to recover it. 00:30:44.831 [2024-06-07 16:39:11.395254] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.831 [2024-06-07 16:39:11.395263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.831 qpair failed and we were unable to recover it. 00:30:44.831 [2024-06-07 16:39:11.395626] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.831 [2024-06-07 16:39:11.395634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.831 qpair failed and we were unable to recover it. 
00:30:44.831 [2024-06-07 16:39:11.396032] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.831 [2024-06-07 16:39:11.396039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.831 qpair failed and we were unable to recover it. 00:30:44.831 [2024-06-07 16:39:11.396437] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.831 [2024-06-07 16:39:11.396445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.831 qpair failed and we were unable to recover it. 00:30:44.831 [2024-06-07 16:39:11.396815] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.831 [2024-06-07 16:39:11.396823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.831 qpair failed and we were unable to recover it. 00:30:44.831 [2024-06-07 16:39:11.397179] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.831 [2024-06-07 16:39:11.397187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.831 qpair failed and we were unable to recover it. 00:30:44.831 [2024-06-07 16:39:11.397578] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.831 [2024-06-07 16:39:11.397586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.831 qpair failed and we were unable to recover it. 
00:30:44.831 [2024-06-07 16:39:11.397801] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.831 [2024-06-07 16:39:11.397809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.831 qpair failed and we were unable to recover it. 00:30:44.831 [2024-06-07 16:39:11.398170] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.831 [2024-06-07 16:39:11.398178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.831 qpair failed and we were unable to recover it. 00:30:44.831 [2024-06-07 16:39:11.398536] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.831 [2024-06-07 16:39:11.398543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.831 qpair failed and we were unable to recover it. 00:30:44.831 [2024-06-07 16:39:11.398772] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.831 [2024-06-07 16:39:11.398781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.831 qpair failed and we were unable to recover it. 00:30:44.831 [2024-06-07 16:39:11.399043] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.831 [2024-06-07 16:39:11.399050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.831 qpair failed and we were unable to recover it. 
00:30:44.831 [2024-06-07 16:39:11.399280] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.831 [2024-06-07 16:39:11.399288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.831 qpair failed and we were unable to recover it. 00:30:44.831 [2024-06-07 16:39:11.399633] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.831 [2024-06-07 16:39:11.399641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.831 qpair failed and we were unable to recover it. 00:30:44.831 [2024-06-07 16:39:11.399998] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.831 [2024-06-07 16:39:11.400006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.831 qpair failed and we were unable to recover it. 00:30:44.831 [2024-06-07 16:39:11.400364] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.831 [2024-06-07 16:39:11.400372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.831 qpair failed and we were unable to recover it. 00:30:44.831 [2024-06-07 16:39:11.400667] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.831 [2024-06-07 16:39:11.400674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.831 qpair failed and we were unable to recover it. 
00:30:44.831 [2024-06-07 16:39:11.401050] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.831 [2024-06-07 16:39:11.401057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.831 qpair failed and we were unable to recover it. 00:30:44.831 [2024-06-07 16:39:11.401436] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.831 [2024-06-07 16:39:11.401445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.831 qpair failed and we were unable to recover it. 00:30:44.831 [2024-06-07 16:39:11.401673] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.831 [2024-06-07 16:39:11.401681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.831 qpair failed and we were unable to recover it. 00:30:44.831 [2024-06-07 16:39:11.402100] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.831 [2024-06-07 16:39:11.402108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.831 qpair failed and we were unable to recover it. 00:30:44.831 [2024-06-07 16:39:11.402503] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.831 [2024-06-07 16:39:11.402512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.831 qpair failed and we were unable to recover it. 
00:30:44.831 [2024-06-07 16:39:11.402846] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.831 [2024-06-07 16:39:11.402855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.831 qpair failed and we were unable to recover it. 00:30:44.831 [2024-06-07 16:39:11.403205] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.831 [2024-06-07 16:39:11.403213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.831 qpair failed and we were unable to recover it. 00:30:44.831 [2024-06-07 16:39:11.403522] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.831 [2024-06-07 16:39:11.403530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.831 qpair failed and we were unable to recover it. 00:30:44.831 [2024-06-07 16:39:11.403896] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.832 [2024-06-07 16:39:11.403903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.832 qpair failed and we were unable to recover it. 00:30:44.832 [2024-06-07 16:39:11.404142] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.832 [2024-06-07 16:39:11.404150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.832 qpair failed and we were unable to recover it. 
00:30:44.832 [2024-06-07 16:39:11.404511] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.832 [2024-06-07 16:39:11.404520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.832 qpair failed and we were unable to recover it. 00:30:44.832 [2024-06-07 16:39:11.404893] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.832 [2024-06-07 16:39:11.404902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.832 qpair failed and we were unable to recover it. 00:30:44.832 [2024-06-07 16:39:11.405171] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.832 [2024-06-07 16:39:11.405180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.832 qpair failed and we were unable to recover it. 00:30:44.832 [2024-06-07 16:39:11.405505] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.832 [2024-06-07 16:39:11.405513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.832 qpair failed and we were unable to recover it. 00:30:44.832 [2024-06-07 16:39:11.405879] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.832 [2024-06-07 16:39:11.405888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.832 qpair failed and we were unable to recover it. 
00:30:44.832 [2024-06-07 16:39:11.406112] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.832 [2024-06-07 16:39:11.406120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.832 qpair failed and we were unable to recover it.
[... the same three-line record repeats continuously from 16:39:11.406530 through 16:39:11.445005: connect() failed with errno = 111, followed by the nvme_tcp_qpair_connect_sock error and "qpair failed and we were unable to recover it.", always for tqpair=0x7f61d8000b90, addr=10.0.0.2, port=4420 ...]
00:30:44.835 [2024-06-07 16:39:11.445393] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.835 [2024-06-07 16:39:11.445409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.835 qpair failed and we were unable to recover it. 00:30:44.835 [2024-06-07 16:39:11.445693] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.835 [2024-06-07 16:39:11.445702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.835 qpair failed and we were unable to recover it. 00:30:44.835 [2024-06-07 16:39:11.446116] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.835 [2024-06-07 16:39:11.446124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.835 qpair failed and we were unable to recover it. 00:30:44.835 [2024-06-07 16:39:11.446481] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.835 [2024-06-07 16:39:11.446491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.835 qpair failed and we were unable to recover it. 00:30:44.835 [2024-06-07 16:39:11.446847] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.835 [2024-06-07 16:39:11.446856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.835 qpair failed and we were unable to recover it. 
00:30:44.835 [2024-06-07 16:39:11.447220] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.835 [2024-06-07 16:39:11.447229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.835 qpair failed and we were unable to recover it. 00:30:44.835 [2024-06-07 16:39:11.447600] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.835 [2024-06-07 16:39:11.447609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.835 qpair failed and we were unable to recover it. 00:30:44.835 [2024-06-07 16:39:11.447834] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.835 [2024-06-07 16:39:11.447843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.835 qpair failed and we were unable to recover it. 00:30:44.835 [2024-06-07 16:39:11.448168] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.835 [2024-06-07 16:39:11.448177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.835 qpair failed and we were unable to recover it. 00:30:44.835 [2024-06-07 16:39:11.448531] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.835 [2024-06-07 16:39:11.448542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.835 qpair failed and we were unable to recover it. 
00:30:44.835 [2024-06-07 16:39:11.448928] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.835 [2024-06-07 16:39:11.448938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.835 qpair failed and we were unable to recover it. 00:30:44.835 [2024-06-07 16:39:11.449353] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.835 [2024-06-07 16:39:11.449362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.835 qpair failed and we were unable to recover it. 00:30:44.835 [2024-06-07 16:39:11.449747] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.835 [2024-06-07 16:39:11.449755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.835 qpair failed and we were unable to recover it. 00:30:44.835 [2024-06-07 16:39:11.450070] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.835 [2024-06-07 16:39:11.450079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.835 qpair failed and we were unable to recover it. 00:30:44.836 [2024-06-07 16:39:11.450331] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.836 [2024-06-07 16:39:11.450339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.836 qpair failed and we were unable to recover it. 
00:30:44.836 [2024-06-07 16:39:11.450710] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.836 [2024-06-07 16:39:11.450718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.836 qpair failed and we were unable to recover it. 00:30:44.836 [2024-06-07 16:39:11.451070] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.836 [2024-06-07 16:39:11.451079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.836 qpair failed and we were unable to recover it. 00:30:44.836 [2024-06-07 16:39:11.451435] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.836 [2024-06-07 16:39:11.451445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.836 qpair failed and we were unable to recover it. 00:30:44.836 [2024-06-07 16:39:11.451713] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.836 [2024-06-07 16:39:11.451721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.836 qpair failed and we were unable to recover it. 00:30:44.836 [2024-06-07 16:39:11.452068] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.836 [2024-06-07 16:39:11.452077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.836 qpair failed and we were unable to recover it. 
00:30:44.836 [2024-06-07 16:39:11.452406] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.836 [2024-06-07 16:39:11.452414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.836 qpair failed and we were unable to recover it. 00:30:44.836 [2024-06-07 16:39:11.452796] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.836 [2024-06-07 16:39:11.452804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.836 qpair failed and we were unable to recover it. 00:30:44.836 [2024-06-07 16:39:11.453176] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.836 [2024-06-07 16:39:11.453185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.836 qpair failed and we were unable to recover it. 00:30:44.836 [2024-06-07 16:39:11.453554] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.836 [2024-06-07 16:39:11.453562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.836 qpair failed and we were unable to recover it. 00:30:44.836 [2024-06-07 16:39:11.453928] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.836 [2024-06-07 16:39:11.453936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.836 qpair failed and we were unable to recover it. 
00:30:44.836 [2024-06-07 16:39:11.454328] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.836 [2024-06-07 16:39:11.454335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.836 qpair failed and we were unable to recover it. 00:30:44.836 [2024-06-07 16:39:11.454708] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.836 [2024-06-07 16:39:11.454715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.836 qpair failed and we were unable to recover it. 00:30:44.836 [2024-06-07 16:39:11.455038] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.836 [2024-06-07 16:39:11.455046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.836 qpair failed and we were unable to recover it. 00:30:44.836 [2024-06-07 16:39:11.455430] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.836 [2024-06-07 16:39:11.455438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.836 qpair failed and we were unable to recover it. 00:30:44.836 [2024-06-07 16:39:11.455825] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.836 [2024-06-07 16:39:11.455833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.836 qpair failed and we were unable to recover it. 
00:30:44.836 [2024-06-07 16:39:11.456064] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.836 [2024-06-07 16:39:11.456072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.836 qpair failed and we were unable to recover it. 00:30:44.836 [2024-06-07 16:39:11.456442] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.836 [2024-06-07 16:39:11.456450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.836 qpair failed and we were unable to recover it. 00:30:44.836 [2024-06-07 16:39:11.456790] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.836 [2024-06-07 16:39:11.456799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.836 qpair failed and we were unable to recover it. 00:30:44.836 [2024-06-07 16:39:11.457191] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.836 [2024-06-07 16:39:11.457200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.836 qpair failed and we were unable to recover it. 00:30:44.836 [2024-06-07 16:39:11.457568] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.836 [2024-06-07 16:39:11.457577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.836 qpair failed and we were unable to recover it. 
00:30:44.836 [2024-06-07 16:39:11.457984] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.836 [2024-06-07 16:39:11.457992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.836 qpair failed and we were unable to recover it. 00:30:44.836 [2024-06-07 16:39:11.458379] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.836 [2024-06-07 16:39:11.458387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.836 qpair failed and we were unable to recover it. 00:30:44.836 [2024-06-07 16:39:11.458761] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.836 [2024-06-07 16:39:11.458769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.836 qpair failed and we were unable to recover it. 00:30:44.836 [2024-06-07 16:39:11.459141] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.836 [2024-06-07 16:39:11.459150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.836 qpair failed and we were unable to recover it. 00:30:44.836 [2024-06-07 16:39:11.459408] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.836 [2024-06-07 16:39:11.459416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.836 qpair failed and we were unable to recover it. 
00:30:44.836 [2024-06-07 16:39:11.459801] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.836 [2024-06-07 16:39:11.459809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.836 qpair failed and we were unable to recover it. 00:30:44.836 [2024-06-07 16:39:11.460170] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.836 [2024-06-07 16:39:11.460178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.836 qpair failed and we were unable to recover it. 00:30:44.836 [2024-06-07 16:39:11.460618] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.836 [2024-06-07 16:39:11.460646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.836 qpair failed and we were unable to recover it. 00:30:44.836 [2024-06-07 16:39:11.461025] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.836 [2024-06-07 16:39:11.461035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.836 qpair failed and we were unable to recover it. 00:30:44.836 [2024-06-07 16:39:11.461427] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.836 [2024-06-07 16:39:11.461436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.836 qpair failed and we were unable to recover it. 
00:30:44.836 [2024-06-07 16:39:11.461821] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.836 [2024-06-07 16:39:11.461830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.836 qpair failed and we were unable to recover it. 00:30:44.836 [2024-06-07 16:39:11.462200] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.837 [2024-06-07 16:39:11.462208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.837 qpair failed and we were unable to recover it. 00:30:44.837 [2024-06-07 16:39:11.462306] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.837 [2024-06-07 16:39:11.462313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.837 qpair failed and we were unable to recover it. 00:30:44.837 [2024-06-07 16:39:11.462583] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.837 [2024-06-07 16:39:11.462591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.837 qpair failed and we were unable to recover it. 00:30:44.837 [2024-06-07 16:39:11.462968] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.837 [2024-06-07 16:39:11.462976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.837 qpair failed and we were unable to recover it. 
00:30:44.837 [2024-06-07 16:39:11.463253] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.837 [2024-06-07 16:39:11.463261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.837 qpair failed and we were unable to recover it. 00:30:44.837 [2024-06-07 16:39:11.463643] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.837 [2024-06-07 16:39:11.463652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.837 qpair failed and we were unable to recover it. 00:30:44.837 [2024-06-07 16:39:11.464044] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.837 [2024-06-07 16:39:11.464052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.837 qpair failed and we were unable to recover it. 00:30:44.837 [2024-06-07 16:39:11.464863] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.837 [2024-06-07 16:39:11.464881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.837 qpair failed and we were unable to recover it. 00:30:44.837 [2024-06-07 16:39:11.465260] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.837 [2024-06-07 16:39:11.465269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.837 qpair failed and we were unable to recover it. 
00:30:44.837 [2024-06-07 16:39:11.465649] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.837 [2024-06-07 16:39:11.465678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.837 qpair failed and we were unable to recover it. 00:30:44.837 [2024-06-07 16:39:11.466038] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.837 [2024-06-07 16:39:11.466048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.837 qpair failed and we were unable to recover it. 00:30:44.837 [2024-06-07 16:39:11.466436] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.837 [2024-06-07 16:39:11.466445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.837 qpair failed and we were unable to recover it. 00:30:44.837 [2024-06-07 16:39:11.466827] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.837 [2024-06-07 16:39:11.466837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.837 qpair failed and we were unable to recover it. 00:30:44.837 [2024-06-07 16:39:11.467228] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.837 [2024-06-07 16:39:11.467237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.837 qpair failed and we were unable to recover it. 
00:30:44.837 [2024-06-07 16:39:11.467597] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.837 [2024-06-07 16:39:11.467605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.837 qpair failed and we were unable to recover it. 00:30:44.837 [2024-06-07 16:39:11.467974] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.837 [2024-06-07 16:39:11.467983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.837 qpair failed and we were unable to recover it. 00:30:44.837 [2024-06-07 16:39:11.468345] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.837 [2024-06-07 16:39:11.468353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.837 qpair failed and we were unable to recover it. 00:30:44.837 [2024-06-07 16:39:11.468715] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.837 [2024-06-07 16:39:11.468723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.837 qpair failed and we were unable to recover it. 00:30:44.837 [2024-06-07 16:39:11.469064] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.837 [2024-06-07 16:39:11.469072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.837 qpair failed and we were unable to recover it. 
00:30:44.837 [2024-06-07 16:39:11.469453] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.837 [2024-06-07 16:39:11.469462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.837 qpair failed and we were unable to recover it. 00:30:44.837 [2024-06-07 16:39:11.470315] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.837 [2024-06-07 16:39:11.470332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.837 qpair failed and we were unable to recover it. 00:30:44.837 [2024-06-07 16:39:11.470775] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.837 [2024-06-07 16:39:11.470785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.837 qpair failed and we were unable to recover it. 00:30:44.837 [2024-06-07 16:39:11.471108] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.837 [2024-06-07 16:39:11.471115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.837 qpair failed and we were unable to recover it. 00:30:44.837 [2024-06-07 16:39:11.471475] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.837 [2024-06-07 16:39:11.471483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.837 qpair failed and we were unable to recover it. 
00:30:44.837 [2024-06-07 16:39:11.471860] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.837 [2024-06-07 16:39:11.471867] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.837 qpair failed and we were unable to recover it. 00:30:44.837 [2024-06-07 16:39:11.472230] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.837 [2024-06-07 16:39:11.472239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.837 qpair failed and we were unable to recover it. 00:30:44.837 [2024-06-07 16:39:11.472685] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.837 [2024-06-07 16:39:11.472694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.837 qpair failed and we were unable to recover it. 00:30:44.837 [2024-06-07 16:39:11.473050] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.837 [2024-06-07 16:39:11.473059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.837 qpair failed and we were unable to recover it. 00:30:44.837 [2024-06-07 16:39:11.473423] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.837 [2024-06-07 16:39:11.473434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.837 qpair failed and we were unable to recover it. 
00:30:44.837 [2024-06-07 16:39:11.473765] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.837 [2024-06-07 16:39:11.473773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.837 qpair failed and we were unable to recover it. 00:30:44.837 [2024-06-07 16:39:11.474111] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.837 [2024-06-07 16:39:11.474119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.837 qpair failed and we were unable to recover it. 00:30:44.837 [2024-06-07 16:39:11.474479] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.837 [2024-06-07 16:39:11.474487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.837 qpair failed and we were unable to recover it. 00:30:44.837 [2024-06-07 16:39:11.475262] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.837 [2024-06-07 16:39:11.475280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.837 qpair failed and we were unable to recover it. 00:30:44.837 [2024-06-07 16:39:11.475641] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.837 [2024-06-07 16:39:11.475651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.837 qpair failed and we were unable to recover it. 
00:30:44.841 [2024-06-07 16:39:11.515113] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.841 [2024-06-07 16:39:11.515121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.841 qpair failed and we were unable to recover it. 00:30:44.841 [2024-06-07 16:39:11.515489] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.841 [2024-06-07 16:39:11.515498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.841 qpair failed and we were unable to recover it. 00:30:44.841 [2024-06-07 16:39:11.515866] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.841 [2024-06-07 16:39:11.515874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.841 qpair failed and we were unable to recover it. 00:30:44.841 [2024-06-07 16:39:11.516263] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.841 [2024-06-07 16:39:11.516271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.841 qpair failed and we were unable to recover it. 00:30:44.841 [2024-06-07 16:39:11.516640] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.841 [2024-06-07 16:39:11.516651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.841 qpair failed and we were unable to recover it. 
00:30:44.841 [2024-06-07 16:39:11.517015] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.841 [2024-06-07 16:39:11.517023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.841 qpair failed and we were unable to recover it. 00:30:44.841 [2024-06-07 16:39:11.517329] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.841 [2024-06-07 16:39:11.517338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.841 qpair failed and we were unable to recover it. 00:30:44.841 [2024-06-07 16:39:11.517699] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.841 [2024-06-07 16:39:11.517707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.841 qpair failed and we were unable to recover it. 00:30:44.841 [2024-06-07 16:39:11.518020] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.841 [2024-06-07 16:39:11.518029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.841 qpair failed and we were unable to recover it. 00:30:44.841 [2024-06-07 16:39:11.518388] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.841 [2024-06-07 16:39:11.518397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.841 qpair failed and we were unable to recover it. 
00:30:44.841 [2024-06-07 16:39:11.518762] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.841 [2024-06-07 16:39:11.518771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.841 qpair failed and we were unable to recover it. 00:30:44.841 [2024-06-07 16:39:11.519156] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.841 [2024-06-07 16:39:11.519164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.841 qpair failed and we were unable to recover it. 00:30:44.841 [2024-06-07 16:39:11.519527] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.841 [2024-06-07 16:39:11.519534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.841 qpair failed and we were unable to recover it. 00:30:44.841 [2024-06-07 16:39:11.519873] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.841 [2024-06-07 16:39:11.519881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.841 qpair failed and we were unable to recover it. 00:30:44.841 [2024-06-07 16:39:11.520288] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.841 [2024-06-07 16:39:11.520296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.841 qpair failed and we were unable to recover it. 
00:30:44.841 [2024-06-07 16:39:11.520680] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.841 [2024-06-07 16:39:11.520688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.841 qpair failed and we were unable to recover it. 00:30:44.841 [2024-06-07 16:39:11.521051] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.841 [2024-06-07 16:39:11.521059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.841 qpair failed and we were unable to recover it. 00:30:44.841 [2024-06-07 16:39:11.521424] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.841 [2024-06-07 16:39:11.521433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.841 qpair failed and we were unable to recover it. 00:30:44.841 [2024-06-07 16:39:11.521809] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.841 [2024-06-07 16:39:11.521817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.841 qpair failed and we were unable to recover it. 00:30:44.841 [2024-06-07 16:39:11.522103] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.841 [2024-06-07 16:39:11.522112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.841 qpair failed and we were unable to recover it. 
00:30:44.841 [2024-06-07 16:39:11.522513] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.841 [2024-06-07 16:39:11.522522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.841 qpair failed and we were unable to recover it. 00:30:44.841 [2024-06-07 16:39:11.522730] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.841 [2024-06-07 16:39:11.522738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.841 qpair failed and we were unable to recover it. 00:30:44.841 [2024-06-07 16:39:11.523110] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.841 [2024-06-07 16:39:11.523118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.841 qpair failed and we were unable to recover it. 00:30:44.841 [2024-06-07 16:39:11.523506] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.841 [2024-06-07 16:39:11.523515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.841 qpair failed and we were unable to recover it. 00:30:44.841 [2024-06-07 16:39:11.523732] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.841 [2024-06-07 16:39:11.523740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.841 qpair failed and we were unable to recover it. 
00:30:44.841 [2024-06-07 16:39:11.524133] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.841 [2024-06-07 16:39:11.524141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.841 qpair failed and we were unable to recover it. 00:30:44.841 [2024-06-07 16:39:11.524496] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.841 [2024-06-07 16:39:11.524504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.841 qpair failed and we were unable to recover it. 00:30:44.841 [2024-06-07 16:39:11.524754] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.841 [2024-06-07 16:39:11.524761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.841 qpair failed and we were unable to recover it. 00:30:44.842 [2024-06-07 16:39:11.525133] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.842 [2024-06-07 16:39:11.525141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.842 qpair failed and we were unable to recover it. 00:30:44.842 [2024-06-07 16:39:11.525586] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.842 [2024-06-07 16:39:11.525594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.842 qpair failed and we were unable to recover it. 
00:30:44.842 [2024-06-07 16:39:11.525949] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.842 [2024-06-07 16:39:11.525957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.842 qpair failed and we were unable to recover it. 00:30:44.842 [2024-06-07 16:39:11.526343] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.842 [2024-06-07 16:39:11.526351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.842 qpair failed and we were unable to recover it. 00:30:44.842 [2024-06-07 16:39:11.526770] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.842 [2024-06-07 16:39:11.526780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.842 qpair failed and we were unable to recover it. 00:30:44.842 [2024-06-07 16:39:11.527147] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.842 [2024-06-07 16:39:11.527156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.842 qpair failed and we were unable to recover it. 00:30:44.842 [2024-06-07 16:39:11.527349] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.842 [2024-06-07 16:39:11.527358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.842 qpair failed and we were unable to recover it. 
00:30:44.842 [2024-06-07 16:39:11.527712] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.842 [2024-06-07 16:39:11.527721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.842 qpair failed and we were unable to recover it. 00:30:44.842 [2024-06-07 16:39:11.528085] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.842 [2024-06-07 16:39:11.528094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.842 qpair failed and we were unable to recover it. 00:30:44.842 [2024-06-07 16:39:11.528332] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.842 [2024-06-07 16:39:11.528339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.842 qpair failed and we were unable to recover it. 00:30:44.842 [2024-06-07 16:39:11.528660] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.842 [2024-06-07 16:39:11.528669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.842 qpair failed and we were unable to recover it. 00:30:44.842 [2024-06-07 16:39:11.529057] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.842 [2024-06-07 16:39:11.529065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.842 qpair failed and we were unable to recover it. 
00:30:44.842 [2024-06-07 16:39:11.529429] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.842 [2024-06-07 16:39:11.529438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.842 qpair failed and we were unable to recover it. 00:30:44.842 [2024-06-07 16:39:11.529696] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.842 [2024-06-07 16:39:11.529704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.842 qpair failed and we were unable to recover it. 00:30:44.842 [2024-06-07 16:39:11.530070] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.842 [2024-06-07 16:39:11.530078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.842 qpair failed and we were unable to recover it. 00:30:44.842 [2024-06-07 16:39:11.530465] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.842 [2024-06-07 16:39:11.530473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.842 qpair failed and we were unable to recover it. 00:30:44.842 [2024-06-07 16:39:11.530858] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.842 [2024-06-07 16:39:11.530869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.842 qpair failed and we were unable to recover it. 
00:30:44.842 [2024-06-07 16:39:11.531234] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.842 [2024-06-07 16:39:11.531242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.842 qpair failed and we were unable to recover it. 00:30:44.842 [2024-06-07 16:39:11.531610] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.842 [2024-06-07 16:39:11.531619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.842 qpair failed and we were unable to recover it. 00:30:44.842 [2024-06-07 16:39:11.532003] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.842 [2024-06-07 16:39:11.532011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.842 qpair failed and we were unable to recover it. 00:30:44.842 [2024-06-07 16:39:11.532373] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.842 [2024-06-07 16:39:11.532381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.842 qpair failed and we were unable to recover it. 00:30:44.842 [2024-06-07 16:39:11.532747] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.842 [2024-06-07 16:39:11.532756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.842 qpair failed and we were unable to recover it. 
00:30:44.842 [2024-06-07 16:39:11.533119] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.842 [2024-06-07 16:39:11.533127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.842 qpair failed and we were unable to recover it. 00:30:44.842 [2024-06-07 16:39:11.533507] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.842 [2024-06-07 16:39:11.533516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.842 qpair failed and we were unable to recover it. 00:30:44.842 [2024-06-07 16:39:11.533906] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.842 [2024-06-07 16:39:11.533914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.842 qpair failed and we were unable to recover it. 00:30:44.842 [2024-06-07 16:39:11.534109] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.842 [2024-06-07 16:39:11.534117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.842 qpair failed and we were unable to recover it. 00:30:44.842 [2024-06-07 16:39:11.534490] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.842 [2024-06-07 16:39:11.534498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.842 qpair failed and we were unable to recover it. 
00:30:44.842 [2024-06-07 16:39:11.534764] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.842 [2024-06-07 16:39:11.534771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.842 qpair failed and we were unable to recover it. 00:30:44.842 [2024-06-07 16:39:11.534992] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.842 [2024-06-07 16:39:11.535000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.842 qpair failed and we were unable to recover it. 00:30:44.842 [2024-06-07 16:39:11.535134] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.842 [2024-06-07 16:39:11.535142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.842 qpair failed and we were unable to recover it. 00:30:44.842 [2024-06-07 16:39:11.535324] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.842 [2024-06-07 16:39:11.535332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.842 qpair failed and we were unable to recover it. 00:30:44.842 [2024-06-07 16:39:11.535695] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.842 [2024-06-07 16:39:11.535704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.842 qpair failed and we were unable to recover it. 
00:30:44.842 [2024-06-07 16:39:11.536067] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.842 [2024-06-07 16:39:11.536076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.842 qpair failed and we were unable to recover it. 00:30:44.842 [2024-06-07 16:39:11.536439] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.842 [2024-06-07 16:39:11.536447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.842 qpair failed and we were unable to recover it. 00:30:44.842 [2024-06-07 16:39:11.536697] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.842 [2024-06-07 16:39:11.536705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.842 qpair failed and we were unable to recover it. 00:30:44.842 [2024-06-07 16:39:11.537066] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.843 [2024-06-07 16:39:11.537074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.843 qpair failed and we were unable to recover it. 00:30:44.843 [2024-06-07 16:39:11.537342] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.843 [2024-06-07 16:39:11.537350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.843 qpair failed and we were unable to recover it. 
00:30:44.843 [2024-06-07 16:39:11.537725] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.843 [2024-06-07 16:39:11.537733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.843 qpair failed and we were unable to recover it. 00:30:44.843 [2024-06-07 16:39:11.538103] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.843 [2024-06-07 16:39:11.538110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.843 qpair failed and we were unable to recover it. 00:30:44.843 [2024-06-07 16:39:11.538501] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.843 [2024-06-07 16:39:11.538509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.843 qpair failed and we were unable to recover it. 00:30:44.843 [2024-06-07 16:39:11.538732] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.843 [2024-06-07 16:39:11.538740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.843 qpair failed and we were unable to recover it. 00:30:44.843 [2024-06-07 16:39:11.539106] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.843 [2024-06-07 16:39:11.539114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.843 qpair failed and we were unable to recover it. 
00:30:44.843 [2024-06-07 16:39:11.539486] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.843 [2024-06-07 16:39:11.539494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.843 qpair failed and we were unable to recover it. 00:30:44.843 [2024-06-07 16:39:11.539860] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.843 [2024-06-07 16:39:11.539869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.843 qpair failed and we were unable to recover it. 00:30:44.843 [2024-06-07 16:39:11.540245] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.843 [2024-06-07 16:39:11.540253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.843 qpair failed and we were unable to recover it. 00:30:44.843 [2024-06-07 16:39:11.540619] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.843 [2024-06-07 16:39:11.540628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.843 qpair failed and we were unable to recover it. 00:30:44.843 [2024-06-07 16:39:11.540998] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.843 [2024-06-07 16:39:11.541006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.843 qpair failed and we were unable to recover it. 
00:30:44.843 [2024-06-07 16:39:11.541406] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:44.843 [2024-06-07 16:39:11.541414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:44.843 qpair failed and we were unable to recover it.
00:30:44.846 [2024-06-07 16:39:11.583242] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.846 [2024-06-07 16:39:11.583250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.846 qpair failed and we were unable to recover it. 00:30:44.846 [2024-06-07 16:39:11.583614] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.846 [2024-06-07 16:39:11.583623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.846 qpair failed and we were unable to recover it. 00:30:44.846 [2024-06-07 16:39:11.584011] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.846 [2024-06-07 16:39:11.584020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.846 qpair failed and we were unable to recover it. 00:30:44.846 [2024-06-07 16:39:11.584386] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.846 [2024-06-07 16:39:11.584394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.846 qpair failed and we were unable to recover it. 00:30:44.846 [2024-06-07 16:39:11.584652] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.846 [2024-06-07 16:39:11.584660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.846 qpair failed and we were unable to recover it. 
00:30:44.846 [2024-06-07 16:39:11.585024] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.846 [2024-06-07 16:39:11.585032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.846 qpair failed and we were unable to recover it. 00:30:44.846 [2024-06-07 16:39:11.585380] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.846 [2024-06-07 16:39:11.585388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.846 qpair failed and we were unable to recover it. 00:30:44.846 [2024-06-07 16:39:11.585749] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.846 [2024-06-07 16:39:11.585757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.846 qpair failed and we were unable to recover it. 00:30:44.846 [2024-06-07 16:39:11.586032] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.846 [2024-06-07 16:39:11.586039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.846 qpair failed and we were unable to recover it. 00:30:44.846 [2024-06-07 16:39:11.586405] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.846 [2024-06-07 16:39:11.586414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.846 qpair failed and we were unable to recover it. 
00:30:44.846 [2024-06-07 16:39:11.586618] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.846 [2024-06-07 16:39:11.586626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.846 qpair failed and we were unable to recover it. 00:30:44.846 [2024-06-07 16:39:11.587026] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.847 [2024-06-07 16:39:11.587035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.847 qpair failed and we were unable to recover it. 00:30:44.847 [2024-06-07 16:39:11.587416] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.847 [2024-06-07 16:39:11.587425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.847 qpair failed and we were unable to recover it. 00:30:44.847 [2024-06-07 16:39:11.587827] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.847 [2024-06-07 16:39:11.587835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.847 qpair failed and we were unable to recover it. 00:30:44.847 [2024-06-07 16:39:11.588192] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.847 [2024-06-07 16:39:11.588202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.847 qpair failed and we were unable to recover it. 
00:30:44.847 [2024-06-07 16:39:11.588618] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.847 [2024-06-07 16:39:11.588647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.847 qpair failed and we were unable to recover it. 00:30:44.847 [2024-06-07 16:39:11.589021] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.847 [2024-06-07 16:39:11.589030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.847 qpair failed and we were unable to recover it. 00:30:44.847 [2024-06-07 16:39:11.589460] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.847 [2024-06-07 16:39:11.589468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.847 qpair failed and we were unable to recover it. 00:30:44.847 [2024-06-07 16:39:11.589867] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.847 [2024-06-07 16:39:11.589875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.847 qpair failed and we were unable to recover it. 00:30:44.847 [2024-06-07 16:39:11.590259] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.847 [2024-06-07 16:39:11.590266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.847 qpair failed and we were unable to recover it. 
00:30:44.847 [2024-06-07 16:39:11.590629] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.847 [2024-06-07 16:39:11.590637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.847 qpair failed and we were unable to recover it. 00:30:44.847 [2024-06-07 16:39:11.590850] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.847 [2024-06-07 16:39:11.590857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.847 qpair failed and we were unable to recover it. 00:30:44.847 [2024-06-07 16:39:11.591140] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.847 [2024-06-07 16:39:11.591148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.847 qpair failed and we were unable to recover it. 00:30:44.847 [2024-06-07 16:39:11.591560] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.847 [2024-06-07 16:39:11.591568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.847 qpair failed and we were unable to recover it. 00:30:44.847 [2024-06-07 16:39:11.591957] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.847 [2024-06-07 16:39:11.591965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.847 qpair failed and we were unable to recover it. 
00:30:44.847 [2024-06-07 16:39:11.592337] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.847 [2024-06-07 16:39:11.592345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.847 qpair failed and we were unable to recover it. 00:30:44.847 [2024-06-07 16:39:11.592709] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.847 [2024-06-07 16:39:11.592718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.847 qpair failed and we were unable to recover it. 00:30:44.847 [2024-06-07 16:39:11.592955] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.847 [2024-06-07 16:39:11.592964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.847 qpair failed and we were unable to recover it. 00:30:44.847 [2024-06-07 16:39:11.593332] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.847 [2024-06-07 16:39:11.593340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.847 qpair failed and we were unable to recover it. 00:30:44.847 [2024-06-07 16:39:11.593700] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.847 [2024-06-07 16:39:11.593709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.847 qpair failed and we were unable to recover it. 
00:30:44.847 [2024-06-07 16:39:11.594125] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.847 [2024-06-07 16:39:11.594133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.847 qpair failed and we were unable to recover it. 00:30:44.847 [2024-06-07 16:39:11.594466] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.847 [2024-06-07 16:39:11.594474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.847 qpair failed and we were unable to recover it. 00:30:44.847 [2024-06-07 16:39:11.594664] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.847 [2024-06-07 16:39:11.594673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.847 qpair failed and we were unable to recover it. 00:30:44.847 [2024-06-07 16:39:11.595041] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.847 [2024-06-07 16:39:11.595049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.847 qpair failed and we were unable to recover it. 00:30:44.847 [2024-06-07 16:39:11.595282] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.847 [2024-06-07 16:39:11.595291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.847 qpair failed and we were unable to recover it. 
00:30:44.847 [2024-06-07 16:39:11.595611] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.847 [2024-06-07 16:39:11.595619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.847 qpair failed and we were unable to recover it. 00:30:44.847 [2024-06-07 16:39:11.595982] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.847 [2024-06-07 16:39:11.595990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.847 qpair failed and we were unable to recover it. 00:30:44.847 [2024-06-07 16:39:11.596353] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.847 [2024-06-07 16:39:11.596360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.847 qpair failed and we were unable to recover it. 00:30:44.847 [2024-06-07 16:39:11.596723] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.847 [2024-06-07 16:39:11.596732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.847 qpair failed and we were unable to recover it. 00:30:44.847 [2024-06-07 16:39:11.597119] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.847 [2024-06-07 16:39:11.597126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.847 qpair failed and we were unable to recover it. 
00:30:44.847 [2024-06-07 16:39:11.597511] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.847 [2024-06-07 16:39:11.597519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.847 qpair failed and we were unable to recover it. 00:30:44.847 [2024-06-07 16:39:11.597903] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.847 [2024-06-07 16:39:11.597912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.847 qpair failed and we were unable to recover it. 00:30:44.847 [2024-06-07 16:39:11.598303] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.847 [2024-06-07 16:39:11.598311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.847 qpair failed and we were unable to recover it. 00:30:44.847 [2024-06-07 16:39:11.598685] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.847 [2024-06-07 16:39:11.598693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.847 qpair failed and we were unable to recover it. 00:30:44.847 [2024-06-07 16:39:11.599055] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.847 [2024-06-07 16:39:11.599063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.847 qpair failed and we were unable to recover it. 
00:30:44.847 [2024-06-07 16:39:11.599334] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.847 [2024-06-07 16:39:11.599343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.847 qpair failed and we were unable to recover it. 00:30:44.847 [2024-06-07 16:39:11.599706] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.847 [2024-06-07 16:39:11.599714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.847 qpair failed and we were unable to recover it. 00:30:44.848 [2024-06-07 16:39:11.600083] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.848 [2024-06-07 16:39:11.600092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.848 qpair failed and we were unable to recover it. 00:30:44.848 [2024-06-07 16:39:11.600459] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.848 [2024-06-07 16:39:11.600467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.848 qpair failed and we were unable to recover it. 00:30:44.848 [2024-06-07 16:39:11.600835] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.848 [2024-06-07 16:39:11.600843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.848 qpair failed and we were unable to recover it. 
00:30:44.848 [2024-06-07 16:39:11.601199] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.848 [2024-06-07 16:39:11.601207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.848 qpair failed and we were unable to recover it. 00:30:44.848 [2024-06-07 16:39:11.601597] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.848 [2024-06-07 16:39:11.601605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.848 qpair failed and we were unable to recover it. 00:30:44.848 [2024-06-07 16:39:11.601997] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.848 [2024-06-07 16:39:11.602005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.848 qpair failed and we were unable to recover it. 00:30:44.848 [2024-06-07 16:39:11.602413] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.848 [2024-06-07 16:39:11.602421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.848 qpair failed and we were unable to recover it. 00:30:44.848 [2024-06-07 16:39:11.602762] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.848 [2024-06-07 16:39:11.602771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.848 qpair failed and we were unable to recover it. 
00:30:44.848 [2024-06-07 16:39:11.603176] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.848 [2024-06-07 16:39:11.603184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.848 qpair failed and we were unable to recover it. 00:30:44.848 [2024-06-07 16:39:11.603465] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.848 [2024-06-07 16:39:11.603473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.848 qpair failed and we were unable to recover it. 00:30:44.848 [2024-06-07 16:39:11.603843] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.848 [2024-06-07 16:39:11.603851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.848 qpair failed and we were unable to recover it. 00:30:44.848 [2024-06-07 16:39:11.604246] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.848 [2024-06-07 16:39:11.604254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.848 qpair failed and we were unable to recover it. 00:30:44.848 [2024-06-07 16:39:11.604526] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.848 [2024-06-07 16:39:11.604534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.848 qpair failed and we were unable to recover it. 
00:30:44.848 [2024-06-07 16:39:11.604917] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.848 [2024-06-07 16:39:11.604925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.848 qpair failed and we were unable to recover it. 00:30:44.848 [2024-06-07 16:39:11.605244] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.848 [2024-06-07 16:39:11.605252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.848 qpair failed and we were unable to recover it. 00:30:44.848 [2024-06-07 16:39:11.605494] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.848 [2024-06-07 16:39:11.605502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.848 qpair failed and we were unable to recover it. 00:30:44.848 [2024-06-07 16:39:11.605884] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.848 [2024-06-07 16:39:11.605891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.848 qpair failed and we were unable to recover it. 00:30:44.848 [2024-06-07 16:39:11.606155] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.848 [2024-06-07 16:39:11.606164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.848 qpair failed and we were unable to recover it. 
00:30:44.848 [2024-06-07 16:39:11.606533] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.848 [2024-06-07 16:39:11.606541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.848 qpair failed and we were unable to recover it. 00:30:44.848 [2024-06-07 16:39:11.606908] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.848 [2024-06-07 16:39:11.606917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.848 qpair failed and we were unable to recover it. 00:30:44.848 [2024-06-07 16:39:11.607260] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.848 [2024-06-07 16:39:11.607268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.848 qpair failed and we were unable to recover it. 00:30:44.848 [2024-06-07 16:39:11.607624] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.848 [2024-06-07 16:39:11.607632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.848 qpair failed and we were unable to recover it. 00:30:44.848 [2024-06-07 16:39:11.607964] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.848 [2024-06-07 16:39:11.607973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.848 qpair failed and we were unable to recover it. 
00:30:44.848 [2024-06-07 16:39:11.608355] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.848 [2024-06-07 16:39:11.608363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.848 qpair failed and we were unable to recover it. 00:30:44.848 [2024-06-07 16:39:11.608706] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.848 [2024-06-07 16:39:11.608715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.848 qpair failed and we were unable to recover it. 00:30:44.848 [2024-06-07 16:39:11.609075] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.848 [2024-06-07 16:39:11.609083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.848 qpair failed and we were unable to recover it. 00:30:44.848 [2024-06-07 16:39:11.609446] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.848 [2024-06-07 16:39:11.609454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.848 qpair failed and we were unable to recover it. 00:30:44.848 [2024-06-07 16:39:11.609837] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.848 [2024-06-07 16:39:11.609845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.848 qpair failed and we were unable to recover it. 
00:30:44.848 [2024-06-07 16:39:11.610229] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.848 [2024-06-07 16:39:11.610237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.848 qpair failed and we were unable to recover it. 00:30:44.848 [2024-06-07 16:39:11.610469] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.848 [2024-06-07 16:39:11.610478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.848 qpair failed and we were unable to recover it. 00:30:44.848 [2024-06-07 16:39:11.610802] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.848 [2024-06-07 16:39:11.610810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.848 qpair failed and we were unable to recover it. 00:30:44.848 [2024-06-07 16:39:11.611170] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.848 [2024-06-07 16:39:11.611178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.848 qpair failed and we were unable to recover it. 00:30:44.848 [2024-06-07 16:39:11.611618] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.848 [2024-06-07 16:39:11.611626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.848 qpair failed and we were unable to recover it. 
00:30:44.852 [2024-06-07 16:39:11.652700] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.852 [2024-06-07 16:39:11.652708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.852 qpair failed and we were unable to recover it. 00:30:44.852 [2024-06-07 16:39:11.653068] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.852 [2024-06-07 16:39:11.653076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.852 qpair failed and we were unable to recover it. 00:30:44.852 [2024-06-07 16:39:11.653444] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.852 [2024-06-07 16:39:11.653452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.852 qpair failed and we were unable to recover it. 00:30:44.852 [2024-06-07 16:39:11.653840] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.852 [2024-06-07 16:39:11.653849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.852 qpair failed and we were unable to recover it. 00:30:44.852 [2024-06-07 16:39:11.654096] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.852 [2024-06-07 16:39:11.654103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.852 qpair failed and we were unable to recover it. 
00:30:44.852 [2024-06-07 16:39:11.654455] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.852 [2024-06-07 16:39:11.654463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.852 qpair failed and we were unable to recover it. 00:30:44.852 [2024-06-07 16:39:11.654880] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.852 [2024-06-07 16:39:11.654888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.852 qpair failed and we were unable to recover it. 00:30:44.852 [2024-06-07 16:39:11.655245] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.852 [2024-06-07 16:39:11.655253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.852 qpair failed and we were unable to recover it. 00:30:44.852 [2024-06-07 16:39:11.655628] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.852 [2024-06-07 16:39:11.655636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.852 qpair failed and we were unable to recover it. 00:30:44.852 [2024-06-07 16:39:11.655985] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.852 [2024-06-07 16:39:11.655994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.852 qpair failed and we were unable to recover it. 
00:30:44.852 [2024-06-07 16:39:11.656367] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.852 [2024-06-07 16:39:11.656375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.852 qpair failed and we were unable to recover it. 00:30:44.852 [2024-06-07 16:39:11.656765] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.852 [2024-06-07 16:39:11.656775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.852 qpair failed and we were unable to recover it. 00:30:44.852 [2024-06-07 16:39:11.657100] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.852 [2024-06-07 16:39:11.657108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.852 qpair failed and we were unable to recover it. 00:30:44.852 [2024-06-07 16:39:11.657444] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.852 [2024-06-07 16:39:11.657452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.852 qpair failed and we were unable to recover it. 00:30:44.852 [2024-06-07 16:39:11.657902] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.852 [2024-06-07 16:39:11.657910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.852 qpair failed and we were unable to recover it. 
00:30:44.852 [2024-06-07 16:39:11.658263] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.852 [2024-06-07 16:39:11.658272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.852 qpair failed and we were unable to recover it. 00:30:44.852 [2024-06-07 16:39:11.658606] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.852 [2024-06-07 16:39:11.658613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.852 qpair failed and we were unable to recover it. 00:30:44.852 [2024-06-07 16:39:11.658994] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.852 [2024-06-07 16:39:11.659002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.852 qpair failed and we were unable to recover it. 00:30:44.852 [2024-06-07 16:39:11.659271] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.852 [2024-06-07 16:39:11.659279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.852 qpair failed and we were unable to recover it. 00:30:44.852 [2024-06-07 16:39:11.659549] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.852 [2024-06-07 16:39:11.659557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.852 qpair failed and we were unable to recover it. 
00:30:44.852 [2024-06-07 16:39:11.659918] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.852 [2024-06-07 16:39:11.659925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.852 qpair failed and we were unable to recover it. 00:30:44.852 [2024-06-07 16:39:11.660319] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.852 [2024-06-07 16:39:11.660327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.852 qpair failed and we were unable to recover it. 00:30:44.852 [2024-06-07 16:39:11.660691] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.852 [2024-06-07 16:39:11.660700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.852 qpair failed and we were unable to recover it. 00:30:44.852 [2024-06-07 16:39:11.661142] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.852 [2024-06-07 16:39:11.661150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.852 qpair failed and we were unable to recover it. 00:30:44.852 [2024-06-07 16:39:11.661506] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.852 [2024-06-07 16:39:11.661514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.852 qpair failed and we were unable to recover it. 
00:30:44.852 [2024-06-07 16:39:11.661858] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.852 [2024-06-07 16:39:11.661866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.852 qpair failed and we were unable to recover it. 00:30:44.852 [2024-06-07 16:39:11.662258] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.852 [2024-06-07 16:39:11.662268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.852 qpair failed and we were unable to recover it. 00:30:44.852 [2024-06-07 16:39:11.662632] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.852 [2024-06-07 16:39:11.662640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.852 qpair failed and we were unable to recover it. 00:30:44.852 [2024-06-07 16:39:11.662995] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.852 [2024-06-07 16:39:11.663004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.852 qpair failed and we were unable to recover it. 00:30:44.852 [2024-06-07 16:39:11.663404] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.852 [2024-06-07 16:39:11.663414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.852 qpair failed and we were unable to recover it. 
00:30:44.852 [2024-06-07 16:39:11.663837] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.852 [2024-06-07 16:39:11.663845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.852 qpair failed and we were unable to recover it. 00:30:44.852 [2024-06-07 16:39:11.664099] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.852 [2024-06-07 16:39:11.664107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.852 qpair failed and we were unable to recover it. 00:30:44.852 [2024-06-07 16:39:11.664460] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.852 [2024-06-07 16:39:11.664468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.852 qpair failed and we were unable to recover it. 00:30:44.853 [2024-06-07 16:39:11.664815] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.853 [2024-06-07 16:39:11.664823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.853 qpair failed and we were unable to recover it. 00:30:44.853 [2024-06-07 16:39:11.665020] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.853 [2024-06-07 16:39:11.665029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.853 qpair failed and we were unable to recover it. 
00:30:44.853 [2024-06-07 16:39:11.665369] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.853 [2024-06-07 16:39:11.665377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.853 qpair failed and we were unable to recover it. 00:30:44.853 [2024-06-07 16:39:11.665753] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.853 [2024-06-07 16:39:11.665761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.853 qpair failed and we were unable to recover it. 00:30:44.853 [2024-06-07 16:39:11.666111] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.853 [2024-06-07 16:39:11.666119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.853 qpair failed and we were unable to recover it. 00:30:44.853 [2024-06-07 16:39:11.666382] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.853 [2024-06-07 16:39:11.666389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.853 qpair failed and we were unable to recover it. 00:30:44.853 [2024-06-07 16:39:11.666690] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.853 [2024-06-07 16:39:11.666699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.853 qpair failed and we were unable to recover it. 
00:30:44.853 [2024-06-07 16:39:11.667058] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.853 [2024-06-07 16:39:11.667067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.853 qpair failed and we were unable to recover it. 00:30:44.853 [2024-06-07 16:39:11.667335] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.853 [2024-06-07 16:39:11.667343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.853 qpair failed and we were unable to recover it. 00:30:44.853 [2024-06-07 16:39:11.667626] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.853 [2024-06-07 16:39:11.667634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.853 qpair failed and we were unable to recover it. 00:30:44.853 [2024-06-07 16:39:11.668010] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.853 [2024-06-07 16:39:11.668018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.853 qpair failed and we were unable to recover it. 00:30:44.853 [2024-06-07 16:39:11.668408] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.853 [2024-06-07 16:39:11.668417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.853 qpair failed and we were unable to recover it. 
00:30:44.853 [2024-06-07 16:39:11.668671] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.853 [2024-06-07 16:39:11.668678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.853 qpair failed and we were unable to recover it. 00:30:44.853 [2024-06-07 16:39:11.668927] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.853 [2024-06-07 16:39:11.668935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.853 qpair failed and we were unable to recover it. 00:30:44.853 [2024-06-07 16:39:11.669334] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.853 [2024-06-07 16:39:11.669344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.853 qpair failed and we were unable to recover it. 00:30:44.853 [2024-06-07 16:39:11.669603] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.853 [2024-06-07 16:39:11.669612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.853 qpair failed and we were unable to recover it. 00:30:44.853 [2024-06-07 16:39:11.670008] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.853 [2024-06-07 16:39:11.670017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.853 qpair failed and we were unable to recover it. 
00:30:44.853 [2024-06-07 16:39:11.670382] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.853 [2024-06-07 16:39:11.670390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.853 qpair failed and we were unable to recover it. 00:30:44.853 [2024-06-07 16:39:11.670759] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:44.853 [2024-06-07 16:39:11.670769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:44.853 qpair failed and we were unable to recover it. 00:30:45.129 [2024-06-07 16:39:11.671148] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.129 [2024-06-07 16:39:11.671157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.129 qpair failed and we were unable to recover it. 00:30:45.129 [2024-06-07 16:39:11.671508] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.129 [2024-06-07 16:39:11.671516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.129 qpair failed and we were unable to recover it. 00:30:45.129 [2024-06-07 16:39:11.671983] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.129 [2024-06-07 16:39:11.671992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.129 qpair failed and we were unable to recover it. 
00:30:45.129 [2024-06-07 16:39:11.672349] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.129 [2024-06-07 16:39:11.672357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.129 qpair failed and we were unable to recover it. 00:30:45.129 [2024-06-07 16:39:11.672594] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.129 [2024-06-07 16:39:11.672602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.129 qpair failed and we were unable to recover it. 00:30:45.129 [2024-06-07 16:39:11.672795] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.129 [2024-06-07 16:39:11.672803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.129 qpair failed and we were unable to recover it. 00:30:45.129 [2024-06-07 16:39:11.673097] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.129 [2024-06-07 16:39:11.673105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.129 qpair failed and we were unable to recover it. 00:30:45.129 [2024-06-07 16:39:11.673456] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.129 [2024-06-07 16:39:11.673465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.129 qpair failed and we were unable to recover it. 
00:30:45.129 [2024-06-07 16:39:11.673813] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.129 [2024-06-07 16:39:11.673820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.129 qpair failed and we were unable to recover it. 00:30:45.129 [2024-06-07 16:39:11.674243] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.129 [2024-06-07 16:39:11.674251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.129 qpair failed and we were unable to recover it. 00:30:45.129 [2024-06-07 16:39:11.674475] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.129 [2024-06-07 16:39:11.674483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.129 qpair failed and we were unable to recover it. 00:30:45.129 [2024-06-07 16:39:11.674763] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.129 [2024-06-07 16:39:11.674771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.129 qpair failed and we were unable to recover it. 00:30:45.129 [2024-06-07 16:39:11.675157] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.129 [2024-06-07 16:39:11.675165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.129 qpair failed and we were unable to recover it. 
00:30:45.129 [2024-06-07 16:39:11.675397] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.129 [2024-06-07 16:39:11.675408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.129 qpair failed and we were unable to recover it. 00:30:45.129 [2024-06-07 16:39:11.675602] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.129 [2024-06-07 16:39:11.675611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.129 qpair failed and we were unable to recover it. 00:30:45.129 [2024-06-07 16:39:11.675937] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.129 [2024-06-07 16:39:11.675946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.129 qpair failed and we were unable to recover it. 00:30:45.129 [2024-06-07 16:39:11.676297] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.129 [2024-06-07 16:39:11.676307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.129 qpair failed and we were unable to recover it. 00:30:45.129 [2024-06-07 16:39:11.676682] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.129 [2024-06-07 16:39:11.676690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.129 qpair failed and we were unable to recover it. 
00:30:45.129 [2024-06-07 16:39:11.677055] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.129 [2024-06-07 16:39:11.677065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.129 qpair failed and we were unable to recover it. 00:30:45.129 [2024-06-07 16:39:11.677374] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.129 [2024-06-07 16:39:11.677383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.129 qpair failed and we were unable to recover it. 00:30:45.129 [2024-06-07 16:39:11.677660] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.129 [2024-06-07 16:39:11.677668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.129 qpair failed and we were unable to recover it. 00:30:45.129 [2024-06-07 16:39:11.678022] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.129 [2024-06-07 16:39:11.678030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.129 qpair failed and we were unable to recover it. 00:30:45.129 [2024-06-07 16:39:11.678258] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.129 [2024-06-07 16:39:11.678266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.129 qpair failed and we were unable to recover it. 
00:30:45.129 [2024-06-07 16:39:11.678642] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.129 [2024-06-07 16:39:11.678650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.129 qpair failed and we were unable to recover it. 00:30:45.129 [2024-06-07 16:39:11.678934] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.129 [2024-06-07 16:39:11.678944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.129 qpair failed and we were unable to recover it. 00:30:45.129 [2024-06-07 16:39:11.679344] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.129 [2024-06-07 16:39:11.679352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.129 qpair failed and we were unable to recover it. 00:30:45.129 [2024-06-07 16:39:11.679732] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.129 [2024-06-07 16:39:11.679740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.129 qpair failed and we were unable to recover it. 00:30:45.129 [2024-06-07 16:39:11.680144] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.129 [2024-06-07 16:39:11.680152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.129 qpair failed and we were unable to recover it. 
00:30:45.133 [2024-06-07 16:39:11.718035] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.133 [2024-06-07 16:39:11.718042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.133 qpair failed and we were unable to recover it. 00:30:45.133 [2024-06-07 16:39:11.718408] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.133 [2024-06-07 16:39:11.718416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.133 qpair failed and we were unable to recover it. 00:30:45.133 [2024-06-07 16:39:11.718760] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.133 [2024-06-07 16:39:11.718768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.133 qpair failed and we were unable to recover it. 00:30:45.133 [2024-06-07 16:39:11.719152] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.133 [2024-06-07 16:39:11.719160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.133 qpair failed and we were unable to recover it. 00:30:45.133 [2024-06-07 16:39:11.719489] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.133 [2024-06-07 16:39:11.719497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.133 qpair failed and we were unable to recover it. 
00:30:45.133 [2024-06-07 16:39:11.719870] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.133 [2024-06-07 16:39:11.719878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.133 qpair failed and we were unable to recover it. 00:30:45.133 [2024-06-07 16:39:11.720303] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.133 [2024-06-07 16:39:11.720312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.133 qpair failed and we were unable to recover it. 00:30:45.133 [2024-06-07 16:39:11.720662] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.133 [2024-06-07 16:39:11.720672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.133 qpair failed and we were unable to recover it. 00:30:45.133 [2024-06-07 16:39:11.721037] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.133 [2024-06-07 16:39:11.721045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.133 qpair failed and we were unable to recover it. 00:30:45.133 [2024-06-07 16:39:11.721204] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.133 [2024-06-07 16:39:11.721213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.133 qpair failed and we were unable to recover it. 
00:30:45.133 [2024-06-07 16:39:11.721398] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:45.133 [2024-06-07 16:39:11.721411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:45.133 qpair failed and we were unable to recover it.
00:30:45.133 Read completed with error (sct=0, sc=8)
00:30:45.133 starting I/O failed
00:30:45.133 Read completed with error (sct=0, sc=8)
00:30:45.133 starting I/O failed
00:30:45.133 Read completed with error (sct=0, sc=8)
00:30:45.133 starting I/O failed
00:30:45.133 Read completed with error (sct=0, sc=8)
00:30:45.133 starting I/O failed
00:30:45.133 Read completed with error (sct=0, sc=8)
00:30:45.133 starting I/O failed
00:30:45.133 Read completed with error (sct=0, sc=8)
00:30:45.133 starting I/O failed
00:30:45.133 Read completed with error (sct=0, sc=8)
00:30:45.133 starting I/O failed
00:30:45.133 Read completed with error (sct=0, sc=8)
00:30:45.133 starting I/O failed
00:30:45.133 Read completed with error (sct=0, sc=8)
00:30:45.133 starting I/O failed
00:30:45.133 Read completed with error (sct=0, sc=8)
00:30:45.133 starting I/O failed
00:30:45.133 Read completed with error (sct=0, sc=8)
00:30:45.133 starting I/O failed
00:30:45.133 Read completed with error (sct=0, sc=8)
00:30:45.133 starting I/O failed
00:30:45.133 Read completed with error (sct=0, sc=8)
00:30:45.133 starting I/O failed
00:30:45.133 Read completed with error (sct=0, sc=8)
00:30:45.133 starting I/O failed
00:30:45.133 Read completed with error (sct=0, sc=8)
00:30:45.133 starting I/O failed
00:30:45.133 Read completed with error (sct=0, sc=8)
00:30:45.133 starting I/O failed
00:30:45.133 Read completed with error (sct=0, sc=8)
00:30:45.133 starting I/O failed
00:30:45.133 Read completed with error (sct=0, sc=8)
00:30:45.133 starting I/O failed
00:30:45.133 Read completed with error (sct=0, sc=8)
00:30:45.133 starting I/O failed
00:30:45.133 Read completed with error (sct=0, sc=8)
00:30:45.133 starting I/O failed
00:30:45.133 Read completed with error (sct=0, sc=8)
00:30:45.133 starting I/O failed
00:30:45.133 Write completed with error (sct=0, sc=8)
00:30:45.133 starting I/O failed
00:30:45.133 Write completed with error (sct=0, sc=8)
00:30:45.133 starting I/O failed
00:30:45.133 Read completed with error (sct=0, sc=8)
00:30:45.133 starting I/O failed
00:30:45.133 Write completed with error (sct=0, sc=8)
00:30:45.133 starting I/O failed
00:30:45.133 Write completed with error (sct=0, sc=8)
00:30:45.133 starting I/O failed
00:30:45.133 Read completed with error (sct=0, sc=8)
00:30:45.133 starting I/O failed
00:30:45.133 Read completed with error (sct=0, sc=8)
00:30:45.133 starting I/O failed
00:30:45.133 Read completed with error (sct=0, sc=8)
00:30:45.133 starting I/O failed
00:30:45.133 Read completed with error (sct=0, sc=8)
00:30:45.133 starting I/O failed
00:30:45.133 Write completed with error (sct=0, sc=8)
00:30:45.133 starting I/O failed
00:30:45.133 Write completed with error (sct=0, sc=8)
00:30:45.133 starting I/O failed
00:30:45.133 [2024-06-07 16:39:11.721690] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:45.133 [2024-06-07 16:39:11.722093] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:45.133 [2024-06-07 16:39:11.722109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420
00:30:45.133 qpair failed and we were unable to recover it.
00:30:45.133 [2024-06-07 16:39:11.722575] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:45.133 [2024-06-07 16:39:11.722613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420
00:30:45.133 qpair failed and we were unable to recover it.
00:30:45.133 [2024-06-07 16:39:11.723013] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.133 [2024-06-07 16:39:11.723026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.133 qpair failed and we were unable to recover it. 00:30:45.133 [2024-06-07 16:39:11.723431] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.133 [2024-06-07 16:39:11.723452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.133 qpair failed and we were unable to recover it. 00:30:45.133 [2024-06-07 16:39:11.723835] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.133 [2024-06-07 16:39:11.723847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.133 qpair failed and we were unable to recover it. 00:30:45.133 [2024-06-07 16:39:11.724227] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.133 [2024-06-07 16:39:11.724237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.133 qpair failed and we were unable to recover it. 00:30:45.133 [2024-06-07 16:39:11.724716] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.133 [2024-06-07 16:39:11.724755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.133 qpair failed and we were unable to recover it. 
00:30:45.133 [2024-06-07 16:39:11.725177] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.133 [2024-06-07 16:39:11.725189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.133 qpair failed and we were unable to recover it. 00:30:45.133 [2024-06-07 16:39:11.725679] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.133 [2024-06-07 16:39:11.725717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.133 qpair failed and we were unable to recover it. 00:30:45.133 [2024-06-07 16:39:11.726093] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.133 [2024-06-07 16:39:11.726107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.133 qpair failed and we were unable to recover it. 00:30:45.133 [2024-06-07 16:39:11.726443] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.134 [2024-06-07 16:39:11.726455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.134 qpair failed and we were unable to recover it. 00:30:45.134 [2024-06-07 16:39:11.726861] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.134 [2024-06-07 16:39:11.726872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.134 qpair failed and we were unable to recover it. 
00:30:45.134 [2024-06-07 16:39:11.727220] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.134 [2024-06-07 16:39:11.727231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.134 qpair failed and we were unable to recover it. 00:30:45.134 [2024-06-07 16:39:11.727586] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.134 [2024-06-07 16:39:11.727598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.134 qpair failed and we were unable to recover it. 00:30:45.134 [2024-06-07 16:39:11.727965] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.134 [2024-06-07 16:39:11.727976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.134 qpair failed and we were unable to recover it. 00:30:45.134 [2024-06-07 16:39:11.728364] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.134 [2024-06-07 16:39:11.728374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.134 qpair failed and we were unable to recover it. 00:30:45.134 [2024-06-07 16:39:11.728765] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.134 [2024-06-07 16:39:11.728776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.134 qpair failed and we were unable to recover it. 
00:30:45.134 [2024-06-07 16:39:11.729122] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.134 [2024-06-07 16:39:11.729137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.134 qpair failed and we were unable to recover it. 00:30:45.134 [2024-06-07 16:39:11.729513] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.134 [2024-06-07 16:39:11.729524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.134 qpair failed and we were unable to recover it. 00:30:45.134 [2024-06-07 16:39:11.729944] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.134 [2024-06-07 16:39:11.729955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.134 qpair failed and we were unable to recover it. 00:30:45.134 [2024-06-07 16:39:11.730265] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.134 [2024-06-07 16:39:11.730275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.134 qpair failed and we were unable to recover it. 00:30:45.134 [2024-06-07 16:39:11.730635] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.134 [2024-06-07 16:39:11.730645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.134 qpair failed and we were unable to recover it. 
00:30:45.134 [2024-06-07 16:39:11.731009] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.134 [2024-06-07 16:39:11.731019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.134 qpair failed and we were unable to recover it. 00:30:45.134 [2024-06-07 16:39:11.731409] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.134 [2024-06-07 16:39:11.731419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.134 qpair failed and we were unable to recover it. 00:30:45.134 [2024-06-07 16:39:11.731763] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.134 [2024-06-07 16:39:11.731775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.134 qpair failed and we were unable to recover it. 00:30:45.134 [2024-06-07 16:39:11.732143] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.134 [2024-06-07 16:39:11.732154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.134 qpair failed and we were unable to recover it. 00:30:45.134 [2024-06-07 16:39:11.732383] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.134 [2024-06-07 16:39:11.732394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.134 qpair failed and we were unable to recover it. 
00:30:45.134 [2024-06-07 16:39:11.732768] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.134 [2024-06-07 16:39:11.732779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.134 qpair failed and we were unable to recover it. 00:30:45.134 [2024-06-07 16:39:11.733144] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.134 [2024-06-07 16:39:11.733155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.134 qpair failed and we were unable to recover it. 00:30:45.134 [2024-06-07 16:39:11.733520] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.134 [2024-06-07 16:39:11.733530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.134 qpair failed and we were unable to recover it. 00:30:45.134 [2024-06-07 16:39:11.733865] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.134 [2024-06-07 16:39:11.733876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.134 qpair failed and we were unable to recover it. 00:30:45.134 [2024-06-07 16:39:11.734278] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.134 [2024-06-07 16:39:11.734290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.134 qpair failed and we were unable to recover it. 
00:30:45.134 [2024-06-07 16:39:11.734629] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.134 [2024-06-07 16:39:11.734640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.134 qpair failed and we were unable to recover it. 00:30:45.134 [2024-06-07 16:39:11.735004] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.134 [2024-06-07 16:39:11.735015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.134 qpair failed and we were unable to recover it. 00:30:45.134 [2024-06-07 16:39:11.735378] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.134 [2024-06-07 16:39:11.735389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.134 qpair failed and we were unable to recover it. 00:30:45.134 [2024-06-07 16:39:11.735778] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.134 [2024-06-07 16:39:11.735790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.134 qpair failed and we were unable to recover it. 00:30:45.134 [2024-06-07 16:39:11.736164] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.134 [2024-06-07 16:39:11.736176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.134 qpair failed and we were unable to recover it. 
00:30:45.134 [2024-06-07 16:39:11.736536] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.134 [2024-06-07 16:39:11.736546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.134 qpair failed and we were unable to recover it. 00:30:45.134 [2024-06-07 16:39:11.736783] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.134 [2024-06-07 16:39:11.736793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.134 qpair failed and we were unable to recover it. 00:30:45.134 [2024-06-07 16:39:11.737009] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.134 [2024-06-07 16:39:11.737020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.134 qpair failed and we were unable to recover it. 00:30:45.134 [2024-06-07 16:39:11.737388] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.134 [2024-06-07 16:39:11.737399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.134 qpair failed and we were unable to recover it. 00:30:45.134 [2024-06-07 16:39:11.737772] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.134 [2024-06-07 16:39:11.737783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.134 qpair failed and we were unable to recover it. 
00:30:45.134 [2024-06-07 16:39:11.738149] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.135 [2024-06-07 16:39:11.738160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.135 qpair failed and we were unable to recover it. 00:30:45.135 [2024-06-07 16:39:11.738538] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.135 [2024-06-07 16:39:11.738549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.135 qpair failed and we were unable to recover it. 00:30:45.135 [2024-06-07 16:39:11.738919] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.135 [2024-06-07 16:39:11.738932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.135 qpair failed and we were unable to recover it. 00:30:45.135 [2024-06-07 16:39:11.739298] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.135 [2024-06-07 16:39:11.739308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.135 qpair failed and we were unable to recover it. 00:30:45.135 [2024-06-07 16:39:11.739692] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.135 [2024-06-07 16:39:11.739703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.135 qpair failed and we were unable to recover it. 
00:30:45.135 [2024-06-07 16:39:11.740064] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.135 [2024-06-07 16:39:11.740075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.135 qpair failed and we were unable to recover it. 00:30:45.135 [2024-06-07 16:39:11.740441] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.135 [2024-06-07 16:39:11.740451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.135 qpair failed and we were unable to recover it. 00:30:45.135 [2024-06-07 16:39:11.740842] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.135 [2024-06-07 16:39:11.740854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.135 qpair failed and we were unable to recover it. 00:30:45.135 [2024-06-07 16:39:11.741210] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.135 [2024-06-07 16:39:11.741220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.135 qpair failed and we were unable to recover it. 00:30:45.135 [2024-06-07 16:39:11.741605] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.135 [2024-06-07 16:39:11.741616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.135 qpair failed and we were unable to recover it. 
00:30:45.135 [2024-06-07 16:39:11.741968] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.135 [2024-06-07 16:39:11.741979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.135 qpair failed and we were unable to recover it. 00:30:45.135 [2024-06-07 16:39:11.742208] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.135 [2024-06-07 16:39:11.742220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.135 qpair failed and we were unable to recover it. 00:30:45.135 [2024-06-07 16:39:11.742422] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.135 [2024-06-07 16:39:11.742433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.135 qpair failed and we were unable to recover it. 00:30:45.135 [2024-06-07 16:39:11.742675] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.135 [2024-06-07 16:39:11.742685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.135 qpair failed and we were unable to recover it. 00:30:45.135 [2024-06-07 16:39:11.743133] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.135 [2024-06-07 16:39:11.743144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.135 qpair failed and we were unable to recover it. 
00:30:45.135 [2024-06-07 16:39:11.743495] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.135 [2024-06-07 16:39:11.743506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.135 qpair failed and we were unable to recover it. 00:30:45.135 [2024-06-07 16:39:11.743875] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.135 [2024-06-07 16:39:11.743885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.135 qpair failed and we were unable to recover it. 00:30:45.135 [2024-06-07 16:39:11.744270] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.135 [2024-06-07 16:39:11.744281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.135 qpair failed and we were unable to recover it. 00:30:45.135 [2024-06-07 16:39:11.744666] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.135 [2024-06-07 16:39:11.744677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.135 qpair failed and we were unable to recover it. 00:30:45.135 [2024-06-07 16:39:11.745058] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.135 [2024-06-07 16:39:11.745071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.135 qpair failed and we were unable to recover it. 
00:30:45.135 [2024-06-07 16:39:11.745300] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.135 [2024-06-07 16:39:11.745310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.135 qpair failed and we were unable to recover it. 00:30:45.135 [2024-06-07 16:39:11.745730] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.135 [2024-06-07 16:39:11.745741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.135 qpair failed and we were unable to recover it. 00:30:45.135 [2024-06-07 16:39:11.746143] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.135 [2024-06-07 16:39:11.746154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.135 qpair failed and we were unable to recover it. 00:30:45.135 [2024-06-07 16:39:11.746519] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.135 [2024-06-07 16:39:11.746530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.135 qpair failed and we were unable to recover it. 00:30:45.135 [2024-06-07 16:39:11.746955] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.135 [2024-06-07 16:39:11.746966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.135 qpair failed and we were unable to recover it. 
00:30:45.135 [2024-06-07 16:39:11.747235] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.135 [2024-06-07 16:39:11.747246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.135 qpair failed and we were unable to recover it. 00:30:45.135 [2024-06-07 16:39:11.747615] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.135 [2024-06-07 16:39:11.747626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.135 qpair failed and we were unable to recover it. 00:30:45.135 [2024-06-07 16:39:11.747944] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.135 [2024-06-07 16:39:11.747956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.135 qpair failed and we were unable to recover it. 00:30:45.135 [2024-06-07 16:39:11.748340] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.135 [2024-06-07 16:39:11.748350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.135 qpair failed and we were unable to recover it. 00:30:45.135 [2024-06-07 16:39:11.748713] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.135 [2024-06-07 16:39:11.748726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.135 qpair failed and we were unable to recover it. 
00:30:45.135 [2024-06-07 16:39:11.749098] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.135 [2024-06-07 16:39:11.749109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.135 qpair failed and we were unable to recover it. 00:30:45.135 [2024-06-07 16:39:11.749475] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.135 [2024-06-07 16:39:11.749487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.135 qpair failed and we were unable to recover it. 00:30:45.135 [2024-06-07 16:39:11.749926] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.135 [2024-06-07 16:39:11.749937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.135 qpair failed and we were unable to recover it. 00:30:45.135 [2024-06-07 16:39:11.750294] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.135 [2024-06-07 16:39:11.750306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.135 qpair failed and we were unable to recover it. 00:30:45.135 [2024-06-07 16:39:11.750574] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.135 [2024-06-07 16:39:11.750585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.135 qpair failed and we were unable to recover it. 
00:30:45.135 [2024-06-07 16:39:11.750954] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.135 [2024-06-07 16:39:11.750964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.135 qpair failed and we were unable to recover it. 00:30:45.135 [2024-06-07 16:39:11.751324] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.135 [2024-06-07 16:39:11.751335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.135 qpair failed and we were unable to recover it. 00:30:45.136 [2024-06-07 16:39:11.751703] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.136 [2024-06-07 16:39:11.751714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.136 qpair failed and we were unable to recover it. 00:30:45.136 [2024-06-07 16:39:11.752080] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.136 [2024-06-07 16:39:11.752090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.136 qpair failed and we were unable to recover it. 00:30:45.136 [2024-06-07 16:39:11.752416] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.136 [2024-06-07 16:39:11.752427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.136 qpair failed and we were unable to recover it. 
00:30:45.136 [2024-06-07 16:39:11.752767] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.136 [2024-06-07 16:39:11.752778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.136 qpair failed and we were unable to recover it. 00:30:45.136 [2024-06-07 16:39:11.753048] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.136 [2024-06-07 16:39:11.753059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.136 qpair failed and we were unable to recover it. 00:30:45.136 [2024-06-07 16:39:11.753427] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.136 [2024-06-07 16:39:11.753438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.136 qpair failed and we were unable to recover it. 00:30:45.136 [2024-06-07 16:39:11.753847] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.136 [2024-06-07 16:39:11.753858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.136 qpair failed and we were unable to recover it. 00:30:45.136 [2024-06-07 16:39:11.754218] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.136 [2024-06-07 16:39:11.754229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.136 qpair failed and we were unable to recover it. 
00:30:45.136 [2024-06-07 16:39:11.754609] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.136 [2024-06-07 16:39:11.754620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.136 qpair failed and we were unable to recover it. 00:30:45.136 [2024-06-07 16:39:11.754994] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.136 [2024-06-07 16:39:11.755005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.136 qpair failed and we were unable to recover it. 00:30:45.136 [2024-06-07 16:39:11.755325] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.136 [2024-06-07 16:39:11.755336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.136 qpair failed and we were unable to recover it. 00:30:45.136 [2024-06-07 16:39:11.755692] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.136 [2024-06-07 16:39:11.755703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.136 qpair failed and we were unable to recover it. 00:30:45.136 [2024-06-07 16:39:11.756089] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.136 [2024-06-07 16:39:11.756101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.136 qpair failed and we were unable to recover it. 
00:30:45.136 [2024-06-07 16:39:11.756486] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.136 [2024-06-07 16:39:11.756497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.136 qpair failed and we were unable to recover it. 00:30:45.136 [2024-06-07 16:39:11.756865] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.136 [2024-06-07 16:39:11.756876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.136 qpair failed and we were unable to recover it. 00:30:45.136 [2024-06-07 16:39:11.757118] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.136 [2024-06-07 16:39:11.757128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.136 qpair failed and we were unable to recover it. 00:30:45.136 [2024-06-07 16:39:11.757534] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.136 [2024-06-07 16:39:11.757545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.136 qpair failed and we were unable to recover it. 00:30:45.136 [2024-06-07 16:39:11.757913] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.136 [2024-06-07 16:39:11.757924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.136 qpair failed and we were unable to recover it. 
00:30:45.136 [2024-06-07 16:39:11.758283] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.136 [2024-06-07 16:39:11.758292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.136 qpair failed and we were unable to recover it. 00:30:45.136 [2024-06-07 16:39:11.758664] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.136 [2024-06-07 16:39:11.758675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.136 qpair failed and we were unable to recover it. 00:30:45.136 [2024-06-07 16:39:11.759061] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.136 [2024-06-07 16:39:11.759071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.136 qpair failed and we were unable to recover it. 00:30:45.136 [2024-06-07 16:39:11.759327] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.136 [2024-06-07 16:39:11.759338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.136 qpair failed and we were unable to recover it. 00:30:45.136 [2024-06-07 16:39:11.759778] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.136 [2024-06-07 16:39:11.759788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.136 qpair failed and we were unable to recover it. 
00:30:45.136 [2024-06-07 16:39:11.760154] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.136 [2024-06-07 16:39:11.760164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.136 qpair failed and we were unable to recover it. 00:30:45.136 [2024-06-07 16:39:11.760538] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.136 [2024-06-07 16:39:11.760549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.136 qpair failed and we were unable to recover it. 00:30:45.136 [2024-06-07 16:39:11.760941] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.136 [2024-06-07 16:39:11.760952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.136 qpair failed and we were unable to recover it. 00:30:45.136 [2024-06-07 16:39:11.761392] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.136 [2024-06-07 16:39:11.761406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.136 qpair failed and we were unable to recover it. 00:30:45.136 [2024-06-07 16:39:11.761736] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.136 [2024-06-07 16:39:11.761747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.136 qpair failed and we were unable to recover it. 
00:30:45.136 [2024-06-07 16:39:11.762130] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.136 [2024-06-07 16:39:11.762140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.136 qpair failed and we were unable to recover it. 00:30:45.136 [2024-06-07 16:39:11.762266] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.136 [2024-06-07 16:39:11.762277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.136 qpair failed and we were unable to recover it. 00:30:45.136 [2024-06-07 16:39:11.762552] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.136 [2024-06-07 16:39:11.762563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.136 qpair failed and we were unable to recover it. 00:30:45.136 [2024-06-07 16:39:11.762946] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.136 [2024-06-07 16:39:11.762957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.136 qpair failed and we were unable to recover it. 00:30:45.136 [2024-06-07 16:39:11.763195] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.136 [2024-06-07 16:39:11.763205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.136 qpair failed and we were unable to recover it. 
00:30:45.136 [2024-06-07 16:39:11.763596] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.136 [2024-06-07 16:39:11.763607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.136 qpair failed and we were unable to recover it. 00:30:45.136 [2024-06-07 16:39:11.763995] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.136 [2024-06-07 16:39:11.764005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.136 qpair failed and we were unable to recover it. 00:30:45.136 [2024-06-07 16:39:11.764404] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.136 [2024-06-07 16:39:11.764416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.136 qpair failed and we were unable to recover it. 00:30:45.136 [2024-06-07 16:39:11.764785] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.136 [2024-06-07 16:39:11.764796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.136 qpair failed and we were unable to recover it. 00:30:45.136 [2024-06-07 16:39:11.765178] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.137 [2024-06-07 16:39:11.765189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.137 qpair failed and we were unable to recover it. 
00:30:45.137 [2024-06-07 16:39:11.765432] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.137 [2024-06-07 16:39:11.765443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.137 qpair failed and we were unable to recover it. 00:30:45.137 [2024-06-07 16:39:11.765899] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.137 [2024-06-07 16:39:11.765911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.137 qpair failed and we were unable to recover it. 00:30:45.137 [2024-06-07 16:39:11.766189] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.137 [2024-06-07 16:39:11.766200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.137 qpair failed and we were unable to recover it. 00:30:45.137 [2024-06-07 16:39:11.766442] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.137 [2024-06-07 16:39:11.766454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.137 qpair failed and we were unable to recover it. 00:30:45.137 [2024-06-07 16:39:11.766583] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.137 [2024-06-07 16:39:11.766595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.137 qpair failed and we were unable to recover it. 
00:30:45.137 [2024-06-07 16:39:11.766954] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.137 [2024-06-07 16:39:11.766965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.137 qpair failed and we were unable to recover it. 00:30:45.137 [2024-06-07 16:39:11.767228] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.137 [2024-06-07 16:39:11.767239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.137 qpair failed and we were unable to recover it. 00:30:45.137 [2024-06-07 16:39:11.767601] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.137 [2024-06-07 16:39:11.767612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.137 qpair failed and we were unable to recover it. 00:30:45.137 [2024-06-07 16:39:11.767980] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.137 [2024-06-07 16:39:11.767991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.137 qpair failed and we were unable to recover it. 00:30:45.137 [2024-06-07 16:39:11.768380] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.137 [2024-06-07 16:39:11.768393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.137 qpair failed and we were unable to recover it. 
00:30:45.137 [2024-06-07 16:39:11.768755] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.137 [2024-06-07 16:39:11.768766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.137 qpair failed and we were unable to recover it. 00:30:45.137 [2024-06-07 16:39:11.769182] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.137 [2024-06-07 16:39:11.769193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.137 qpair failed and we were unable to recover it. 00:30:45.137 [2024-06-07 16:39:11.769564] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.137 [2024-06-07 16:39:11.769575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.137 qpair failed and we were unable to recover it. 00:30:45.137 [2024-06-07 16:39:11.769968] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.137 [2024-06-07 16:39:11.769979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.137 qpair failed and we were unable to recover it. 00:30:45.137 [2024-06-07 16:39:11.770328] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.137 [2024-06-07 16:39:11.770338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.137 qpair failed and we were unable to recover it. 
00:30:45.137 [2024-06-07 16:39:11.770567] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.137 [2024-06-07 16:39:11.770579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.137 qpair failed and we were unable to recover it. 00:30:45.137 [2024-06-07 16:39:11.770863] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.137 [2024-06-07 16:39:11.770874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.137 qpair failed and we were unable to recover it. 00:30:45.137 [2024-06-07 16:39:11.771251] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.137 [2024-06-07 16:39:11.771261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.137 qpair failed and we were unable to recover it. 00:30:45.137 [2024-06-07 16:39:11.771696] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.137 [2024-06-07 16:39:11.771708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.137 qpair failed and we were unable to recover it. 00:30:45.137 [2024-06-07 16:39:11.772070] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.137 [2024-06-07 16:39:11.772082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.137 qpair failed and we were unable to recover it. 
00:30:45.137 [2024-06-07 16:39:11.772448] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.137 [2024-06-07 16:39:11.772459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.137 qpair failed and we were unable to recover it. 00:30:45.137 [2024-06-07 16:39:11.772833] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.137 [2024-06-07 16:39:11.772844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.137 qpair failed and we were unable to recover it. 00:30:45.137 [2024-06-07 16:39:11.773237] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.137 [2024-06-07 16:39:11.773252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.137 qpair failed and we were unable to recover it. 00:30:45.137 [2024-06-07 16:39:11.773635] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.137 [2024-06-07 16:39:11.773647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.137 qpair failed and we were unable to recover it. 00:30:45.137 [2024-06-07 16:39:11.774023] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.137 [2024-06-07 16:39:11.774035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.137 qpair failed and we were unable to recover it. 
00:30:45.137 [2024-06-07 16:39:11.774428] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.137 [2024-06-07 16:39:11.774438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.137 qpair failed and we were unable to recover it. 00:30:45.137 [2024-06-07 16:39:11.774909] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.137 [2024-06-07 16:39:11.774920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.137 qpair failed and we were unable to recover it. 00:30:45.137 [2024-06-07 16:39:11.775318] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.137 [2024-06-07 16:39:11.775328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.137 qpair failed and we were unable to recover it. 00:30:45.137 [2024-06-07 16:39:11.775709] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.137 [2024-06-07 16:39:11.775720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.137 qpair failed and we were unable to recover it. 00:30:45.137 [2024-06-07 16:39:11.776152] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.137 [2024-06-07 16:39:11.776163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.137 qpair failed and we were unable to recover it. 
00:30:45.137 [2024-06-07 16:39:11.776374] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.137 [2024-06-07 16:39:11.776387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.137 qpair failed and we were unable to recover it. 00:30:45.137 [2024-06-07 16:39:11.776727] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.137 [2024-06-07 16:39:11.776739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.137 qpair failed and we were unable to recover it. 00:30:45.137 [2024-06-07 16:39:11.777114] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.137 [2024-06-07 16:39:11.777125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.137 qpair failed and we were unable to recover it. 00:30:45.137 [2024-06-07 16:39:11.777526] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.137 [2024-06-07 16:39:11.777537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.137 qpair failed and we were unable to recover it. 00:30:45.137 [2024-06-07 16:39:11.777911] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.137 [2024-06-07 16:39:11.777922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.137 qpair failed and we were unable to recover it. 
00:30:45.137 [2024-06-07 16:39:11.778315] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.138 [2024-06-07 16:39:11.778327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.138 qpair failed and we were unable to recover it. 00:30:45.138 [2024-06-07 16:39:11.778705] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.138 [2024-06-07 16:39:11.778716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.138 qpair failed and we were unable to recover it. 00:30:45.138 [2024-06-07 16:39:11.779099] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.138 [2024-06-07 16:39:11.779110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.138 qpair failed and we were unable to recover it. 00:30:45.138 [2024-06-07 16:39:11.779459] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.138 [2024-06-07 16:39:11.779470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.138 qpair failed and we were unable to recover it. 00:30:45.138 [2024-06-07 16:39:11.779877] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.138 [2024-06-07 16:39:11.779888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.138 qpair failed and we were unable to recover it. 
00:30:45.138 [2024-06-07 16:39:11.780100] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.138 [2024-06-07 16:39:11.780111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.138 qpair failed and we were unable to recover it. 00:30:45.138 [2024-06-07 16:39:11.780457] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.138 [2024-06-07 16:39:11.780468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.138 qpair failed and we were unable to recover it. 00:30:45.138 [2024-06-07 16:39:11.780838] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.138 [2024-06-07 16:39:11.780849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.138 qpair failed and we were unable to recover it. 00:30:45.138 [2024-06-07 16:39:11.781228] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.138 [2024-06-07 16:39:11.781239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.138 qpair failed and we were unable to recover it. 00:30:45.138 [2024-06-07 16:39:11.781604] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.138 [2024-06-07 16:39:11.781616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.138 qpair failed and we were unable to recover it. 
00:30:45.138 [2024-06-07 16:39:11.782012] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.138 [2024-06-07 16:39:11.782022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.138 qpair failed and we were unable to recover it. 00:30:45.138 [2024-06-07 16:39:11.782412] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.138 [2024-06-07 16:39:11.782422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.138 qpair failed and we were unable to recover it. 00:30:45.138 [2024-06-07 16:39:11.782792] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.138 [2024-06-07 16:39:11.782803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.138 qpair failed and we were unable to recover it. 00:30:45.138 [2024-06-07 16:39:11.783163] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.138 [2024-06-07 16:39:11.783174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.138 qpair failed and we were unable to recover it. 00:30:45.138 [2024-06-07 16:39:11.783575] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.138 [2024-06-07 16:39:11.783589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.138 qpair failed and we were unable to recover it. 
00:30:45.138 [2024-06-07 16:39:11.783961] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.138 [2024-06-07 16:39:11.783973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.138 qpair failed and we were unable to recover it. 00:30:45.138 [2024-06-07 16:39:11.784332] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.138 [2024-06-07 16:39:11.784342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.138 qpair failed and we were unable to recover it. 00:30:45.138 [2024-06-07 16:39:11.784708] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.138 [2024-06-07 16:39:11.784718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.138 qpair failed and we were unable to recover it. 00:30:45.138 [2024-06-07 16:39:11.785110] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.138 [2024-06-07 16:39:11.785121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.138 qpair failed and we were unable to recover it. 00:30:45.138 [2024-06-07 16:39:11.785531] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.138 [2024-06-07 16:39:11.785542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.138 qpair failed and we were unable to recover it. 
00:30:45.138 [2024-06-07 16:39:11.785909] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.138 [2024-06-07 16:39:11.785920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.138 qpair failed and we were unable to recover it. 00:30:45.138 [2024-06-07 16:39:11.786286] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.138 [2024-06-07 16:39:11.786297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.138 qpair failed and we were unable to recover it. 00:30:45.138 [2024-06-07 16:39:11.786680] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.138 [2024-06-07 16:39:11.786691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.138 qpair failed and we were unable to recover it. 00:30:45.138 [2024-06-07 16:39:11.787061] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.138 [2024-06-07 16:39:11.787072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.138 qpair failed and we were unable to recover it. 00:30:45.138 [2024-06-07 16:39:11.787435] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.138 [2024-06-07 16:39:11.787446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.138 qpair failed and we were unable to recover it. 
00:30:45.138 [2024-06-07 16:39:11.787826] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.138 [2024-06-07 16:39:11.787837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.138 qpair failed and we were unable to recover it. 00:30:45.138 [2024-06-07 16:39:11.788264] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.138 [2024-06-07 16:39:11.788275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.138 qpair failed and we were unable to recover it. 00:30:45.138 [2024-06-07 16:39:11.788666] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.138 [2024-06-07 16:39:11.788677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.138 qpair failed and we were unable to recover it. 00:30:45.138 [2024-06-07 16:39:11.789043] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.138 [2024-06-07 16:39:11.789054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.138 qpair failed and we were unable to recover it. 00:30:45.138 [2024-06-07 16:39:11.789419] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.139 [2024-06-07 16:39:11.789430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.139 qpair failed and we were unable to recover it. 
00:30:45.139 [2024-06-07 16:39:11.789812] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.139 [2024-06-07 16:39:11.789822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.139 qpair failed and we were unable to recover it. 00:30:45.139 [2024-06-07 16:39:11.790206] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.139 [2024-06-07 16:39:11.790218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.139 qpair failed and we were unable to recover it. 00:30:45.139 [2024-06-07 16:39:11.790488] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.139 [2024-06-07 16:39:11.790499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.139 qpair failed and we were unable to recover it. 00:30:45.139 [2024-06-07 16:39:11.790751] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.139 [2024-06-07 16:39:11.790761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.139 qpair failed and we were unable to recover it. 00:30:45.139 [2024-06-07 16:39:11.791159] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.139 [2024-06-07 16:39:11.791169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.139 qpair failed and we were unable to recover it. 
00:30:45.139 [2024-06-07 16:39:11.791377] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.139 [2024-06-07 16:39:11.791388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.139 qpair failed and we were unable to recover it. 00:30:45.139 [2024-06-07 16:39:11.791778] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.139 [2024-06-07 16:39:11.791789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.139 qpair failed and we were unable to recover it. 00:30:45.139 [2024-06-07 16:39:11.792027] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.139 [2024-06-07 16:39:11.792038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.139 qpair failed and we were unable to recover it. 00:30:45.139 [2024-06-07 16:39:11.792258] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.139 [2024-06-07 16:39:11.792268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.139 qpair failed and we were unable to recover it. 00:30:45.139 [2024-06-07 16:39:11.792652] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.139 [2024-06-07 16:39:11.792663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.139 qpair failed and we were unable to recover it. 
00:30:45.139 [2024-06-07 16:39:11.792965] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.139 [2024-06-07 16:39:11.792976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.139 qpair failed and we were unable to recover it. 00:30:45.139 [2024-06-07 16:39:11.793347] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.139 [2024-06-07 16:39:11.793358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.139 qpair failed and we were unable to recover it. 00:30:45.139 [2024-06-07 16:39:11.793645] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.139 [2024-06-07 16:39:11.793659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.139 qpair failed and we were unable to recover it. 00:30:45.139 [2024-06-07 16:39:11.794040] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.139 [2024-06-07 16:39:11.794050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.139 qpair failed and we were unable to recover it. 00:30:45.139 [2024-06-07 16:39:11.794421] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.139 [2024-06-07 16:39:11.794433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.139 qpair failed and we were unable to recover it. 
00:30:45.139 [2024-06-07 16:39:11.794817] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.139 [2024-06-07 16:39:11.794828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.139 qpair failed and we were unable to recover it. 00:30:45.139 [2024-06-07 16:39:11.795233] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.139 [2024-06-07 16:39:11.795244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.139 qpair failed and we were unable to recover it. 00:30:45.139 [2024-06-07 16:39:11.795613] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.139 [2024-06-07 16:39:11.795624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.139 qpair failed and we were unable to recover it. 00:30:45.139 [2024-06-07 16:39:11.796074] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.139 [2024-06-07 16:39:11.796084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.139 qpair failed and we were unable to recover it. 00:30:45.139 [2024-06-07 16:39:11.796495] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.139 [2024-06-07 16:39:11.796506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.139 qpair failed and we were unable to recover it. 
00:30:45.139 [2024-06-07 16:39:11.796849] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.139 [2024-06-07 16:39:11.796861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.139 qpair failed and we were unable to recover it. 00:30:45.139 [2024-06-07 16:39:11.797231] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.139 [2024-06-07 16:39:11.797242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.139 qpair failed and we were unable to recover it. 00:30:45.139 [2024-06-07 16:39:11.797617] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.139 [2024-06-07 16:39:11.797628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.139 qpair failed and we were unable to recover it. 00:30:45.139 [2024-06-07 16:39:11.798002] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.139 [2024-06-07 16:39:11.798012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.139 qpair failed and we were unable to recover it. 00:30:45.139 [2024-06-07 16:39:11.798410] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.139 [2024-06-07 16:39:11.798421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.139 qpair failed and we were unable to recover it. 
00:30:45.139 [2024-06-07 16:39:11.798796] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.139 [2024-06-07 16:39:11.798808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.139 qpair failed and we were unable to recover it. 00:30:45.139 [2024-06-07 16:39:11.799159] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.139 [2024-06-07 16:39:11.799171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.139 qpair failed and we were unable to recover it. 00:30:45.139 [2024-06-07 16:39:11.799649] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.139 [2024-06-07 16:39:11.799688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.139 qpair failed and we were unable to recover it. 00:30:45.139 [2024-06-07 16:39:11.800082] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.139 [2024-06-07 16:39:11.800095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.139 qpair failed and we were unable to recover it. 00:30:45.139 [2024-06-07 16:39:11.800462] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.139 [2024-06-07 16:39:11.800475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.139 qpair failed and we were unable to recover it. 
00:30:45.139 [2024-06-07 16:39:11.800849] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.139 [2024-06-07 16:39:11.800861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.139 qpair failed and we were unable to recover it. 00:30:45.139 [2024-06-07 16:39:11.801235] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.139 [2024-06-07 16:39:11.801247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.139 qpair failed and we were unable to recover it. 00:30:45.139 [2024-06-07 16:39:11.801633] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.139 [2024-06-07 16:39:11.801644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.139 qpair failed and we were unable to recover it. 00:30:45.139 [2024-06-07 16:39:11.802014] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.140 [2024-06-07 16:39:11.802024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.140 qpair failed and we were unable to recover it. 00:30:45.140 [2024-06-07 16:39:11.802255] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.140 [2024-06-07 16:39:11.802266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.140 qpair failed and we were unable to recover it. 
00:30:45.140 [2024-06-07 16:39:11.802627] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.140 [2024-06-07 16:39:11.802638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.140 qpair failed and we were unable to recover it. 00:30:45.140 [2024-06-07 16:39:11.803027] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.140 [2024-06-07 16:39:11.803038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.140 qpair failed and we were unable to recover it. 00:30:45.140 [2024-06-07 16:39:11.803407] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.140 [2024-06-07 16:39:11.803418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.140 qpair failed and we were unable to recover it. 00:30:45.140 [2024-06-07 16:39:11.803759] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.140 [2024-06-07 16:39:11.803770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.140 qpair failed and we were unable to recover it. 00:30:45.140 [2024-06-07 16:39:11.804143] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.140 [2024-06-07 16:39:11.804155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.140 qpair failed and we were unable to recover it. 
00:30:45.140 [2024-06-07 16:39:11.804638] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.140 [2024-06-07 16:39:11.804676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.140 qpair failed and we were unable to recover it. 00:30:45.140 [2024-06-07 16:39:11.805060] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.140 [2024-06-07 16:39:11.805074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.140 qpair failed and we were unable to recover it. 00:30:45.140 [2024-06-07 16:39:11.805287] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.140 [2024-06-07 16:39:11.805301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.140 qpair failed and we were unable to recover it. 00:30:45.140 [2024-06-07 16:39:11.805481] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.140 [2024-06-07 16:39:11.805493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.140 qpair failed and we were unable to recover it. 00:30:45.140 [2024-06-07 16:39:11.805872] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.140 [2024-06-07 16:39:11.805884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.140 qpair failed and we were unable to recover it. 
00:30:45.140 [2024-06-07 16:39:11.806253] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.140 [2024-06-07 16:39:11.806265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.140 qpair failed and we were unable to recover it. 00:30:45.140 [2024-06-07 16:39:11.806656] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.140 [2024-06-07 16:39:11.806666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.140 qpair failed and we were unable to recover it. 00:30:45.140 [2024-06-07 16:39:11.807042] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.140 [2024-06-07 16:39:11.807053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.140 qpair failed and we were unable to recover it. 00:30:45.140 [2024-06-07 16:39:11.807444] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.140 [2024-06-07 16:39:11.807455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.140 qpair failed and we were unable to recover it. 00:30:45.140 [2024-06-07 16:39:11.807843] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.140 [2024-06-07 16:39:11.807854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.140 qpair failed and we were unable to recover it. 
00:30:45.140 [2024-06-07 16:39:11.808222] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.140 [2024-06-07 16:39:11.808233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.140 qpair failed and we were unable to recover it. 00:30:45.140 [2024-06-07 16:39:11.808602] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.140 [2024-06-07 16:39:11.808614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.140 qpair failed and we were unable to recover it. 00:30:45.140 [2024-06-07 16:39:11.808999] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.140 [2024-06-07 16:39:11.809014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.140 qpair failed and we were unable to recover it. 00:30:45.140 [2024-06-07 16:39:11.809416] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.140 [2024-06-07 16:39:11.809428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.140 qpair failed and we were unable to recover it. 00:30:45.140 [2024-06-07 16:39:11.809777] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.140 [2024-06-07 16:39:11.809788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.140 qpair failed and we were unable to recover it. 
00:30:45.140 [2024-06-07 16:39:11.810109] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.140 [2024-06-07 16:39:11.810121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.140 qpair failed and we were unable to recover it. 00:30:45.140 [2024-06-07 16:39:11.810354] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.140 [2024-06-07 16:39:11.810364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.140 qpair failed and we were unable to recover it. 00:30:45.140 [2024-06-07 16:39:11.810814] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.140 [2024-06-07 16:39:11.810825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.140 qpair failed and we were unable to recover it. 00:30:45.140 [2024-06-07 16:39:11.811190] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.140 [2024-06-07 16:39:11.811201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.140 qpair failed and we were unable to recover it. 00:30:45.140 [2024-06-07 16:39:11.811589] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.140 [2024-06-07 16:39:11.811600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.140 qpair failed and we were unable to recover it. 
00:30:45.140 [2024-06-07 16:39:11.811997] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.140 [2024-06-07 16:39:11.812007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.140 qpair failed and we were unable to recover it. 00:30:45.140 [2024-06-07 16:39:11.812376] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.140 [2024-06-07 16:39:11.812387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.140 qpair failed and we were unable to recover it. 00:30:45.140 [2024-06-07 16:39:11.812748] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.140 [2024-06-07 16:39:11.812760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.140 qpair failed and we were unable to recover it. 00:30:45.140 [2024-06-07 16:39:11.813133] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.140 [2024-06-07 16:39:11.813144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.140 qpair failed and we were unable to recover it. 00:30:45.140 [2024-06-07 16:39:11.813531] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.140 [2024-06-07 16:39:11.813543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.140 qpair failed and we were unable to recover it. 
00:30:45.140 [2024-06-07 16:39:11.813929] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:45.140 [2024-06-07 16:39:11.813940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420
00:30:45.140 qpair failed and we were unable to recover it.
[... identical entries repeat continuously from 16:39:11.814310 through 16:39:11.856118: every retry logs the same connect() failure (errno = 111) in posix.c:1046:posix_sock_create, the same sock connection error for tqpair=0x191f270 with addr=10.0.0.2, port=4420 in nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock, and "qpair failed and we were unable to recover it." ...]
00:30:45.144 [2024-06-07 16:39:11.856118] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:45.144 [2024-06-07 16:39:11.856129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420
00:30:45.144 qpair failed and we were unable to recover it.
00:30:45.144 [2024-06-07 16:39:11.856508] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.144 [2024-06-07 16:39:11.856520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.144 qpair failed and we were unable to recover it. 00:30:45.144 [2024-06-07 16:39:11.856896] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.144 [2024-06-07 16:39:11.856907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.144 qpair failed and we were unable to recover it. 00:30:45.144 [2024-06-07 16:39:11.857262] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.144 [2024-06-07 16:39:11.857274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.144 qpair failed and we were unable to recover it. 00:30:45.144 [2024-06-07 16:39:11.857659] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.144 [2024-06-07 16:39:11.857670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.144 qpair failed and we were unable to recover it. 00:30:45.144 [2024-06-07 16:39:11.857978] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.144 [2024-06-07 16:39:11.857989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.144 qpair failed and we were unable to recover it. 
00:30:45.144 [2024-06-07 16:39:11.858238] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.144 [2024-06-07 16:39:11.858249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.144 qpair failed and we were unable to recover it. 00:30:45.144 [2024-06-07 16:39:11.858581] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.144 [2024-06-07 16:39:11.858593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.144 qpair failed and we were unable to recover it. 00:30:45.144 [2024-06-07 16:39:11.858936] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.144 [2024-06-07 16:39:11.858947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.144 qpair failed and we were unable to recover it. 00:30:45.144 [2024-06-07 16:39:11.859339] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.144 [2024-06-07 16:39:11.859351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.144 qpair failed and we were unable to recover it. 00:30:45.144 [2024-06-07 16:39:11.859748] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.144 [2024-06-07 16:39:11.859760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.144 qpair failed and we were unable to recover it. 
00:30:45.144 [2024-06-07 16:39:11.860126] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.144 [2024-06-07 16:39:11.860136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.144 qpair failed and we were unable to recover it. 00:30:45.144 [2024-06-07 16:39:11.860367] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.144 [2024-06-07 16:39:11.860378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.144 qpair failed and we were unable to recover it. 00:30:45.144 [2024-06-07 16:39:11.860719] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.144 [2024-06-07 16:39:11.860730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.144 qpair failed and we were unable to recover it. 00:30:45.144 [2024-06-07 16:39:11.861097] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.144 [2024-06-07 16:39:11.861108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.144 qpair failed and we were unable to recover it. 00:30:45.144 [2024-06-07 16:39:11.861481] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.144 [2024-06-07 16:39:11.861494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.144 qpair failed and we were unable to recover it. 
00:30:45.144 [2024-06-07 16:39:11.861758] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.144 [2024-06-07 16:39:11.861768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.144 qpair failed and we were unable to recover it. 00:30:45.144 [2024-06-07 16:39:11.862124] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.144 [2024-06-07 16:39:11.862135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.144 qpair failed and we were unable to recover it. 00:30:45.144 [2024-06-07 16:39:11.862370] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.144 [2024-06-07 16:39:11.862380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.144 qpair failed and we were unable to recover it. 00:30:45.144 [2024-06-07 16:39:11.862779] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.144 [2024-06-07 16:39:11.862790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.144 qpair failed and we were unable to recover it. 00:30:45.144 [2024-06-07 16:39:11.863167] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.144 [2024-06-07 16:39:11.863178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.144 qpair failed and we were unable to recover it. 
00:30:45.144 [2024-06-07 16:39:11.863564] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.144 [2024-06-07 16:39:11.863575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.144 qpair failed and we were unable to recover it. 00:30:45.144 [2024-06-07 16:39:11.863819] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.144 [2024-06-07 16:39:11.863832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.144 qpair failed and we were unable to recover it. 00:30:45.144 [2024-06-07 16:39:11.864238] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.144 [2024-06-07 16:39:11.864249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.144 qpair failed and we were unable to recover it. 00:30:45.144 [2024-06-07 16:39:11.864618] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.144 [2024-06-07 16:39:11.864629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.144 qpair failed and we were unable to recover it. 00:30:45.144 [2024-06-07 16:39:11.865021] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.144 [2024-06-07 16:39:11.865032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.144 qpair failed and we were unable to recover it. 
00:30:45.144 [2024-06-07 16:39:11.865409] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.144 [2024-06-07 16:39:11.865420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.144 qpair failed and we were unable to recover it. 00:30:45.144 [2024-06-07 16:39:11.865764] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.144 [2024-06-07 16:39:11.865775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.145 qpair failed and we were unable to recover it. 00:30:45.145 [2024-06-07 16:39:11.866099] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.145 [2024-06-07 16:39:11.866109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.145 qpair failed and we were unable to recover it. 00:30:45.145 [2024-06-07 16:39:11.866502] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.145 [2024-06-07 16:39:11.866513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.145 qpair failed and we were unable to recover it. 00:30:45.145 [2024-06-07 16:39:11.866881] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.145 [2024-06-07 16:39:11.866891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.145 qpair failed and we were unable to recover it. 
00:30:45.145 [2024-06-07 16:39:11.867232] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.145 [2024-06-07 16:39:11.867242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.145 qpair failed and we were unable to recover it. 00:30:45.145 [2024-06-07 16:39:11.867610] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.145 [2024-06-07 16:39:11.867620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.145 qpair failed and we were unable to recover it. 00:30:45.145 [2024-06-07 16:39:11.868008] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.145 [2024-06-07 16:39:11.868019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.145 qpair failed and we were unable to recover it. 00:30:45.145 [2024-06-07 16:39:11.868387] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.145 [2024-06-07 16:39:11.868398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.145 qpair failed and we were unable to recover it. 00:30:45.145 [2024-06-07 16:39:11.868730] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.145 [2024-06-07 16:39:11.868740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.145 qpair failed and we were unable to recover it. 
00:30:45.145 [2024-06-07 16:39:11.869109] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.145 [2024-06-07 16:39:11.869119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.145 qpair failed and we were unable to recover it. 00:30:45.145 [2024-06-07 16:39:11.869319] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.145 [2024-06-07 16:39:11.869329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.145 qpair failed and we were unable to recover it. 00:30:45.145 [2024-06-07 16:39:11.869543] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.145 [2024-06-07 16:39:11.869554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.145 qpair failed and we were unable to recover it. 00:30:45.145 [2024-06-07 16:39:11.869848] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.145 [2024-06-07 16:39:11.869858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.145 qpair failed and we were unable to recover it. 00:30:45.145 [2024-06-07 16:39:11.870258] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.145 [2024-06-07 16:39:11.870269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.145 qpair failed and we were unable to recover it. 
00:30:45.145 [2024-06-07 16:39:11.870590] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.145 [2024-06-07 16:39:11.870601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.145 qpair failed and we were unable to recover it. 00:30:45.145 [2024-06-07 16:39:11.870992] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.145 [2024-06-07 16:39:11.871002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.145 qpair failed and we were unable to recover it. 00:30:45.145 [2024-06-07 16:39:11.871370] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.145 [2024-06-07 16:39:11.871381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.145 qpair failed and we were unable to recover it. 00:30:45.145 [2024-06-07 16:39:11.871751] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.145 [2024-06-07 16:39:11.871762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.145 qpair failed and we were unable to recover it. 00:30:45.145 [2024-06-07 16:39:11.872136] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.145 [2024-06-07 16:39:11.872148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.145 qpair failed and we were unable to recover it. 
00:30:45.145 [2024-06-07 16:39:11.872518] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.145 [2024-06-07 16:39:11.872529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.145 qpair failed and we were unable to recover it. 00:30:45.145 [2024-06-07 16:39:11.872882] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.145 [2024-06-07 16:39:11.872892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.145 qpair failed and we were unable to recover it. 00:30:45.145 [2024-06-07 16:39:11.873269] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.145 [2024-06-07 16:39:11.873279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.145 qpair failed and we were unable to recover it. 00:30:45.145 [2024-06-07 16:39:11.873645] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.145 [2024-06-07 16:39:11.873656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.145 qpair failed and we were unable to recover it. 00:30:45.145 [2024-06-07 16:39:11.874087] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.145 [2024-06-07 16:39:11.874098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.145 qpair failed and we were unable to recover it. 
00:30:45.145 [2024-06-07 16:39:11.874462] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.145 [2024-06-07 16:39:11.874473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.145 qpair failed and we were unable to recover it. 00:30:45.145 [2024-06-07 16:39:11.874855] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.145 [2024-06-07 16:39:11.874866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.145 qpair failed and we were unable to recover it. 00:30:45.145 [2024-06-07 16:39:11.875262] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.145 [2024-06-07 16:39:11.875273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.145 qpair failed and we were unable to recover it. 00:30:45.145 [2024-06-07 16:39:11.875645] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.145 [2024-06-07 16:39:11.875657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.145 qpair failed and we were unable to recover it. 00:30:45.145 [2024-06-07 16:39:11.875910] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.145 [2024-06-07 16:39:11.875921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.145 qpair failed and we were unable to recover it. 
00:30:45.145 [2024-06-07 16:39:11.876241] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.145 [2024-06-07 16:39:11.876251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.145 qpair failed and we were unable to recover it. 00:30:45.145 [2024-06-07 16:39:11.876680] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.145 [2024-06-07 16:39:11.876690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.145 qpair failed and we were unable to recover it. 00:30:45.145 [2024-06-07 16:39:11.877140] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.145 [2024-06-07 16:39:11.877151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.145 qpair failed and we were unable to recover it. 00:30:45.145 [2024-06-07 16:39:11.877616] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.145 [2024-06-07 16:39:11.877654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.145 qpair failed and we were unable to recover it. 00:30:45.145 [2024-06-07 16:39:11.878031] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.145 [2024-06-07 16:39:11.878044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.145 qpair failed and we were unable to recover it. 
00:30:45.145 [2024-06-07 16:39:11.878447] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.145 [2024-06-07 16:39:11.878458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.145 qpair failed and we were unable to recover it. 00:30:45.145 [2024-06-07 16:39:11.878897] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.145 [2024-06-07 16:39:11.878909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.145 qpair failed and we were unable to recover it. 00:30:45.145 [2024-06-07 16:39:11.879277] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.145 [2024-06-07 16:39:11.879288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.145 qpair failed and we were unable to recover it. 00:30:45.145 [2024-06-07 16:39:11.879654] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.146 [2024-06-07 16:39:11.879665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.146 qpair failed and we were unable to recover it. 00:30:45.146 [2024-06-07 16:39:11.880090] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.146 [2024-06-07 16:39:11.880100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.146 qpair failed and we were unable to recover it. 
00:30:45.146 [2024-06-07 16:39:11.880461] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.146 [2024-06-07 16:39:11.880472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.146 qpair failed and we were unable to recover it. 00:30:45.146 [2024-06-07 16:39:11.880747] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.146 [2024-06-07 16:39:11.880758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.146 qpair failed and we were unable to recover it. 00:30:45.146 [2024-06-07 16:39:11.881128] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.146 [2024-06-07 16:39:11.881139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.146 qpair failed and we were unable to recover it. 00:30:45.146 [2024-06-07 16:39:11.881536] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.146 [2024-06-07 16:39:11.881548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.146 qpair failed and we were unable to recover it. 00:30:45.146 [2024-06-07 16:39:11.881842] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.146 [2024-06-07 16:39:11.881852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.146 qpair failed and we were unable to recover it. 
00:30:45.146 [2024-06-07 16:39:11.882240] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.146 [2024-06-07 16:39:11.882250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.146 qpair failed and we were unable to recover it. 00:30:45.146 [2024-06-07 16:39:11.882624] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.146 [2024-06-07 16:39:11.882635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.146 qpair failed and we were unable to recover it. 00:30:45.146 [2024-06-07 16:39:11.882965] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.146 [2024-06-07 16:39:11.882975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.146 qpair failed and we were unable to recover it. 00:30:45.146 [2024-06-07 16:39:11.883348] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.146 [2024-06-07 16:39:11.883359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.146 qpair failed and we were unable to recover it. 00:30:45.146 [2024-06-07 16:39:11.883731] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.146 [2024-06-07 16:39:11.883743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.146 qpair failed and we were unable to recover it. 
00:30:45.146 [2024-06-07 16:39:11.884038] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.146 [2024-06-07 16:39:11.884049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.146 qpair failed and we were unable to recover it. 00:30:45.146 [2024-06-07 16:39:11.884394] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.146 [2024-06-07 16:39:11.884410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.146 qpair failed and we were unable to recover it. 00:30:45.146 [2024-06-07 16:39:11.884714] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.146 [2024-06-07 16:39:11.884726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.146 qpair failed and we were unable to recover it. 00:30:45.146 [2024-06-07 16:39:11.885139] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.146 [2024-06-07 16:39:11.885150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.146 qpair failed and we were unable to recover it. 00:30:45.146 [2024-06-07 16:39:11.885600] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.146 [2024-06-07 16:39:11.885611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.146 qpair failed and we were unable to recover it. 
00:30:45.149 [2024-06-07 16:39:11.927214] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.149 [2024-06-07 16:39:11.927225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.149 qpair failed and we were unable to recover it. 00:30:45.149 [2024-06-07 16:39:11.927495] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.149 [2024-06-07 16:39:11.927507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.149 qpair failed and we were unable to recover it. 00:30:45.149 [2024-06-07 16:39:11.927837] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.149 [2024-06-07 16:39:11.927847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.149 qpair failed and we were unable to recover it. 00:30:45.149 [2024-06-07 16:39:11.928179] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.149 [2024-06-07 16:39:11.928190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.149 qpair failed and we were unable to recover it. 00:30:45.149 [2024-06-07 16:39:11.928531] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.149 [2024-06-07 16:39:11.928542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.149 qpair failed and we were unable to recover it. 
00:30:45.149 [2024-06-07 16:39:11.928912] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.149 [2024-06-07 16:39:11.928922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.149 qpair failed and we were unable to recover it. 00:30:45.149 [2024-06-07 16:39:11.929189] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.149 [2024-06-07 16:39:11.929200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.149 qpair failed and we were unable to recover it. 00:30:45.149 [2024-06-07 16:39:11.929413] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.149 [2024-06-07 16:39:11.929425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.149 qpair failed and we were unable to recover it. 00:30:45.149 [2024-06-07 16:39:11.929801] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.149 [2024-06-07 16:39:11.929812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.149 qpair failed and we were unable to recover it. 00:30:45.149 [2024-06-07 16:39:11.930184] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.149 [2024-06-07 16:39:11.930196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.149 qpair failed and we were unable to recover it. 
00:30:45.149 [2024-06-07 16:39:11.930574] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.149 [2024-06-07 16:39:11.930585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.149 qpair failed and we were unable to recover it. 00:30:45.149 [2024-06-07 16:39:11.930973] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.149 [2024-06-07 16:39:11.930983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.149 qpair failed and we were unable to recover it. 00:30:45.149 [2024-06-07 16:39:11.931347] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.149 [2024-06-07 16:39:11.931357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.149 qpair failed and we were unable to recover it. 00:30:45.149 [2024-06-07 16:39:11.931702] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.149 [2024-06-07 16:39:11.931713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.149 qpair failed and we were unable to recover it. 00:30:45.149 [2024-06-07 16:39:11.932101] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.150 [2024-06-07 16:39:11.932112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.150 qpair failed and we were unable to recover it. 
00:30:45.150 [2024-06-07 16:39:11.932482] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.150 [2024-06-07 16:39:11.932493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.150 qpair failed and we were unable to recover it. 00:30:45.150 [2024-06-07 16:39:11.932812] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.150 [2024-06-07 16:39:11.932823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.150 qpair failed and we were unable to recover it. 00:30:45.150 [2024-06-07 16:39:11.933105] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.150 [2024-06-07 16:39:11.933116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.150 qpair failed and we were unable to recover it. 00:30:45.150 [2024-06-07 16:39:11.933486] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.150 [2024-06-07 16:39:11.933497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.150 qpair failed and we were unable to recover it. 00:30:45.150 [2024-06-07 16:39:11.933877] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.150 [2024-06-07 16:39:11.933889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.150 qpair failed and we were unable to recover it. 
00:30:45.150 [2024-06-07 16:39:11.934273] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.150 [2024-06-07 16:39:11.934284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.150 qpair failed and we were unable to recover it. 00:30:45.150 [2024-06-07 16:39:11.934672] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.150 [2024-06-07 16:39:11.934686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.150 qpair failed and we were unable to recover it. 00:30:45.150 [2024-06-07 16:39:11.935051] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.150 [2024-06-07 16:39:11.935062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.150 qpair failed and we were unable to recover it. 00:30:45.150 [2024-06-07 16:39:11.935422] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.150 [2024-06-07 16:39:11.935433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.150 qpair failed and we were unable to recover it. 00:30:45.150 [2024-06-07 16:39:11.935831] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.150 [2024-06-07 16:39:11.935842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.150 qpair failed and we were unable to recover it. 
00:30:45.150 [2024-06-07 16:39:11.936206] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.150 [2024-06-07 16:39:11.936217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.150 qpair failed and we were unable to recover it. 00:30:45.150 [2024-06-07 16:39:11.936594] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.150 [2024-06-07 16:39:11.936605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.150 qpair failed and we were unable to recover it. 00:30:45.150 [2024-06-07 16:39:11.936973] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.150 [2024-06-07 16:39:11.936985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.150 qpair failed and we were unable to recover it. 00:30:45.150 [2024-06-07 16:39:11.937349] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.150 [2024-06-07 16:39:11.937360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.150 qpair failed and we were unable to recover it. 00:30:45.150 [2024-06-07 16:39:11.937683] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.150 [2024-06-07 16:39:11.937693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.150 qpair failed and we were unable to recover it. 
00:30:45.150 [2024-06-07 16:39:11.938083] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.150 [2024-06-07 16:39:11.938094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.150 qpair failed and we were unable to recover it. 00:30:45.150 [2024-06-07 16:39:11.938461] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.150 [2024-06-07 16:39:11.938472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.150 qpair failed and we were unable to recover it. 00:30:45.150 [2024-06-07 16:39:11.938857] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.150 [2024-06-07 16:39:11.938868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.150 qpair failed and we were unable to recover it. 00:30:45.150 [2024-06-07 16:39:11.939266] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.150 [2024-06-07 16:39:11.939277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.150 qpair failed and we were unable to recover it. 00:30:45.150 [2024-06-07 16:39:11.939614] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.150 [2024-06-07 16:39:11.939625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.150 qpair failed and we were unable to recover it. 
00:30:45.150 [2024-06-07 16:39:11.939981] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.150 [2024-06-07 16:39:11.939992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.150 qpair failed and we were unable to recover it. 00:30:45.150 [2024-06-07 16:39:11.940348] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.150 [2024-06-07 16:39:11.940360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.150 qpair failed and we were unable to recover it. 00:30:45.150 [2024-06-07 16:39:11.940721] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.150 [2024-06-07 16:39:11.940731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.150 qpair failed and we were unable to recover it. 00:30:45.150 [2024-06-07 16:39:11.941115] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.150 [2024-06-07 16:39:11.941126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.150 qpair failed and we were unable to recover it. 00:30:45.150 [2024-06-07 16:39:11.941495] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.150 [2024-06-07 16:39:11.941506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.150 qpair failed and we were unable to recover it. 
00:30:45.150 [2024-06-07 16:39:11.941888] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.150 [2024-06-07 16:39:11.941898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.150 qpair failed and we were unable to recover it. 00:30:45.150 [2024-06-07 16:39:11.942278] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.150 [2024-06-07 16:39:11.942288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.150 qpair failed and we were unable to recover it. 00:30:45.150 [2024-06-07 16:39:11.942659] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.150 [2024-06-07 16:39:11.942671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.150 qpair failed and we were unable to recover it. 00:30:45.150 [2024-06-07 16:39:11.943037] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.150 [2024-06-07 16:39:11.943048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.150 qpair failed and we were unable to recover it. 00:30:45.150 [2024-06-07 16:39:11.943414] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.150 [2024-06-07 16:39:11.943424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.150 qpair failed and we were unable to recover it. 
00:30:45.150 [2024-06-07 16:39:11.943707] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.150 [2024-06-07 16:39:11.943718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.150 qpair failed and we were unable to recover it. 00:30:45.150 [2024-06-07 16:39:11.944093] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.150 [2024-06-07 16:39:11.944103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.150 qpair failed and we were unable to recover it. 00:30:45.150 [2024-06-07 16:39:11.944465] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.150 [2024-06-07 16:39:11.944475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.150 qpair failed and we were unable to recover it. 00:30:45.150 [2024-06-07 16:39:11.944864] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.150 [2024-06-07 16:39:11.944877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.150 qpair failed and we were unable to recover it. 00:30:45.150 [2024-06-07 16:39:11.945277] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.150 [2024-06-07 16:39:11.945287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.150 qpair failed and we were unable to recover it. 
00:30:45.150 [2024-06-07 16:39:11.945676] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.151 [2024-06-07 16:39:11.945687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.151 qpair failed and we were unable to recover it. 00:30:45.151 [2024-06-07 16:39:11.946045] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.151 [2024-06-07 16:39:11.946056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.151 qpair failed and we were unable to recover it. 00:30:45.151 [2024-06-07 16:39:11.946414] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.151 [2024-06-07 16:39:11.946425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.151 qpair failed and we were unable to recover it. 00:30:45.151 [2024-06-07 16:39:11.946810] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.151 [2024-06-07 16:39:11.946820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.151 qpair failed and we were unable to recover it. 00:30:45.151 [2024-06-07 16:39:11.947218] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.151 [2024-06-07 16:39:11.947229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.151 qpair failed and we were unable to recover it. 
00:30:45.151 [2024-06-07 16:39:11.947594] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.151 [2024-06-07 16:39:11.947605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.151 qpair failed and we were unable to recover it. 00:30:45.151 [2024-06-07 16:39:11.947977] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.151 [2024-06-07 16:39:11.947988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.151 qpair failed and we were unable to recover it. 00:30:45.151 [2024-06-07 16:39:11.948341] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.151 [2024-06-07 16:39:11.948351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.151 qpair failed and we were unable to recover it. 00:30:45.151 [2024-06-07 16:39:11.948715] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.151 [2024-06-07 16:39:11.948726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.151 qpair failed and we were unable to recover it. 00:30:45.151 [2024-06-07 16:39:11.949071] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.151 [2024-06-07 16:39:11.949082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.151 qpair failed and we were unable to recover it. 
00:30:45.151 [2024-06-07 16:39:11.949459] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.151 [2024-06-07 16:39:11.949469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.151 qpair failed and we were unable to recover it. 00:30:45.151 [2024-06-07 16:39:11.949840] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.151 [2024-06-07 16:39:11.949851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.151 qpair failed and we were unable to recover it. 00:30:45.151 [2024-06-07 16:39:11.950200] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.151 [2024-06-07 16:39:11.950211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.151 qpair failed and we were unable to recover it. 00:30:45.151 [2024-06-07 16:39:11.950558] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.151 [2024-06-07 16:39:11.950570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.151 qpair failed and we were unable to recover it. 00:30:45.151 [2024-06-07 16:39:11.950937] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.151 [2024-06-07 16:39:11.950948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.151 qpair failed and we were unable to recover it. 
00:30:45.151 [2024-06-07 16:39:11.951182] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.151 [2024-06-07 16:39:11.951194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.151 qpair failed and we were unable to recover it. 00:30:45.151 [2024-06-07 16:39:11.951431] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.151 [2024-06-07 16:39:11.951442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.151 qpair failed and we were unable to recover it. 00:30:45.151 [2024-06-07 16:39:11.951793] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.151 [2024-06-07 16:39:11.951804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.151 qpair failed and we were unable to recover it. 00:30:45.151 [2024-06-07 16:39:11.952061] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.151 [2024-06-07 16:39:11.952071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.151 qpair failed and we were unable to recover it. 00:30:45.151 [2024-06-07 16:39:11.952437] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.151 [2024-06-07 16:39:11.952448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.151 qpair failed and we were unable to recover it. 
00:30:45.151 [2024-06-07 16:39:11.952821] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.151 [2024-06-07 16:39:11.952832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.151 qpair failed and we were unable to recover it. 00:30:45.151 [2024-06-07 16:39:11.953197] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.151 [2024-06-07 16:39:11.953208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.151 qpair failed and we were unable to recover it. 00:30:45.151 [2024-06-07 16:39:11.953422] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.151 [2024-06-07 16:39:11.953435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.151 qpair failed and we were unable to recover it. 00:30:45.151 [2024-06-07 16:39:11.953797] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.151 [2024-06-07 16:39:11.953808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.151 qpair failed and we were unable to recover it. 00:30:45.151 [2024-06-07 16:39:11.954192] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.151 [2024-06-07 16:39:11.954203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.151 qpair failed and we were unable to recover it. 
00:30:45.151 [2024-06-07 16:39:11.954569] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.151 [2024-06-07 16:39:11.954580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.151 qpair failed and we were unable to recover it. 00:30:45.151 [2024-06-07 16:39:11.954785] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.151 [2024-06-07 16:39:11.954798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.151 qpair failed and we were unable to recover it. 00:30:45.151 [2024-06-07 16:39:11.955053] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.151 [2024-06-07 16:39:11.955063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.151 qpair failed and we were unable to recover it. 00:30:45.151 [2024-06-07 16:39:11.955428] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.151 [2024-06-07 16:39:11.955440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.151 qpair failed and we were unable to recover it. 00:30:45.151 [2024-06-07 16:39:11.955835] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.151 [2024-06-07 16:39:11.955846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.151 qpair failed and we were unable to recover it. 
[... the same three-line sequence (posix.c:1046 connect() failed, errno = 111 / nvme_tcp.c:2374 sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 / "qpair failed and we were unable to recover it.") repeats with advancing timestamps from 16:39:11.954 through 16:39:11.996 ...]
00:30:45.436 [2024-06-07 16:39:11.996514] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.436 [2024-06-07 16:39:11.996526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.436 qpair failed and we were unable to recover it. 00:30:45.436 [2024-06-07 16:39:11.996949] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.436 [2024-06-07 16:39:11.996960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.436 qpair failed and we were unable to recover it. 00:30:45.436 [2024-06-07 16:39:11.997323] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.436 [2024-06-07 16:39:11.997334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.436 qpair failed and we were unable to recover it. 00:30:45.436 [2024-06-07 16:39:11.997698] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.436 [2024-06-07 16:39:11.997709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.436 qpair failed and we were unable to recover it. 00:30:45.436 [2024-06-07 16:39:11.998129] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.436 [2024-06-07 16:39:11.998139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.436 qpair failed and we were unable to recover it. 
00:30:45.436 [2024-06-07 16:39:11.998424] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.436 [2024-06-07 16:39:11.998435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.436 qpair failed and we were unable to recover it. 00:30:45.436 [2024-06-07 16:39:11.998793] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.436 [2024-06-07 16:39:11.998804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.436 qpair failed and we were unable to recover it. 00:30:45.436 [2024-06-07 16:39:11.999170] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.436 [2024-06-07 16:39:11.999181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.436 qpair failed and we were unable to recover it. 00:30:45.436 [2024-06-07 16:39:11.999527] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.436 [2024-06-07 16:39:11.999538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.436 qpair failed and we were unable to recover it. 00:30:45.436 [2024-06-07 16:39:11.999875] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.436 [2024-06-07 16:39:11.999885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.436 qpair failed and we were unable to recover it. 
00:30:45.436 [2024-06-07 16:39:12.000252] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.436 [2024-06-07 16:39:12.000263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.436 qpair failed and we were unable to recover it. 00:30:45.436 [2024-06-07 16:39:12.000626] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.436 [2024-06-07 16:39:12.000637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.436 qpair failed and we were unable to recover it. 00:30:45.436 [2024-06-07 16:39:12.001021] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.436 [2024-06-07 16:39:12.001033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.436 qpair failed and we were unable to recover it. 00:30:45.436 [2024-06-07 16:39:12.001395] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.436 [2024-06-07 16:39:12.001416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.436 qpair failed and we were unable to recover it. 00:30:45.436 [2024-06-07 16:39:12.001762] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.436 [2024-06-07 16:39:12.001772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.436 qpair failed and we were unable to recover it. 
00:30:45.436 [2024-06-07 16:39:12.002177] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.436 [2024-06-07 16:39:12.002188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.436 qpair failed and we were unable to recover it. 00:30:45.436 [2024-06-07 16:39:12.002689] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.436 [2024-06-07 16:39:12.002728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.436 qpair failed and we were unable to recover it. 00:30:45.436 [2024-06-07 16:39:12.003106] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.436 [2024-06-07 16:39:12.003119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.436 qpair failed and we were unable to recover it. 00:30:45.436 [2024-06-07 16:39:12.003490] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.436 [2024-06-07 16:39:12.003502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.436 qpair failed and we were unable to recover it. 00:30:45.436 [2024-06-07 16:39:12.003892] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.436 [2024-06-07 16:39:12.003903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.436 qpair failed and we were unable to recover it. 
00:30:45.436 [2024-06-07 16:39:12.004294] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.436 [2024-06-07 16:39:12.004304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.436 qpair failed and we were unable to recover it. 00:30:45.436 [2024-06-07 16:39:12.004686] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.436 [2024-06-07 16:39:12.004698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.436 qpair failed and we were unable to recover it. 00:30:45.436 [2024-06-07 16:39:12.005065] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.436 [2024-06-07 16:39:12.005075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.436 qpair failed and we were unable to recover it. 00:30:45.436 [2024-06-07 16:39:12.005423] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.436 [2024-06-07 16:39:12.005433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.436 qpair failed and we were unable to recover it. 00:30:45.436 [2024-06-07 16:39:12.005670] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.436 [2024-06-07 16:39:12.005681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.436 qpair failed and we were unable to recover it. 
00:30:45.436 [2024-06-07 16:39:12.006096] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.436 [2024-06-07 16:39:12.006106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.436 qpair failed and we were unable to recover it. 00:30:45.436 [2024-06-07 16:39:12.006409] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.436 [2024-06-07 16:39:12.006421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.436 qpair failed and we were unable to recover it. 00:30:45.436 [2024-06-07 16:39:12.006811] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.436 [2024-06-07 16:39:12.006822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.436 qpair failed and we were unable to recover it. 00:30:45.436 [2024-06-07 16:39:12.007210] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.436 [2024-06-07 16:39:12.007221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.436 qpair failed and we were unable to recover it. 00:30:45.436 [2024-06-07 16:39:12.007582] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.436 [2024-06-07 16:39:12.007593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.436 qpair failed and we were unable to recover it. 
00:30:45.436 [2024-06-07 16:39:12.007941] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.436 [2024-06-07 16:39:12.007952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.436 qpair failed and we were unable to recover it. 00:30:45.436 [2024-06-07 16:39:12.008317] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.436 [2024-06-07 16:39:12.008327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.436 qpair failed and we were unable to recover it. 00:30:45.436 [2024-06-07 16:39:12.008702] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.436 [2024-06-07 16:39:12.008714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.436 qpair failed and we were unable to recover it. 00:30:45.436 [2024-06-07 16:39:12.009078] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.436 [2024-06-07 16:39:12.009091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.436 qpair failed and we were unable to recover it. 00:30:45.436 [2024-06-07 16:39:12.009458] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.436 [2024-06-07 16:39:12.009470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.436 qpair failed and we were unable to recover it. 
00:30:45.436 [2024-06-07 16:39:12.009858] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.436 [2024-06-07 16:39:12.009868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.436 qpair failed and we were unable to recover it. 00:30:45.436 [2024-06-07 16:39:12.010237] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.436 [2024-06-07 16:39:12.010247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.436 qpair failed and we were unable to recover it. 00:30:45.436 [2024-06-07 16:39:12.010615] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.437 [2024-06-07 16:39:12.010626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.437 qpair failed and we were unable to recover it. 00:30:45.437 [2024-06-07 16:39:12.011017] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.437 [2024-06-07 16:39:12.011027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.437 qpair failed and we were unable to recover it. 00:30:45.437 [2024-06-07 16:39:12.011464] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.437 [2024-06-07 16:39:12.011475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.437 qpair failed and we were unable to recover it. 
00:30:45.437 [2024-06-07 16:39:12.011818] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.437 [2024-06-07 16:39:12.011828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.437 qpair failed and we were unable to recover it. 00:30:45.437 [2024-06-07 16:39:12.012198] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.437 [2024-06-07 16:39:12.012208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.437 qpair failed and we were unable to recover it. 00:30:45.437 [2024-06-07 16:39:12.012578] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.437 [2024-06-07 16:39:12.012589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.437 qpair failed and we were unable to recover it. 00:30:45.437 [2024-06-07 16:39:12.012954] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.437 [2024-06-07 16:39:12.012965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.437 qpair failed and we were unable to recover it. 00:30:45.437 [2024-06-07 16:39:12.013332] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.437 [2024-06-07 16:39:12.013343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.437 qpair failed and we were unable to recover it. 
00:30:45.437 [2024-06-07 16:39:12.013713] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.437 [2024-06-07 16:39:12.013724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.437 qpair failed and we were unable to recover it. 00:30:45.437 [2024-06-07 16:39:12.014170] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.437 [2024-06-07 16:39:12.014180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.437 qpair failed and we were unable to recover it. 00:30:45.437 [2024-06-07 16:39:12.014458] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.437 [2024-06-07 16:39:12.014469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.437 qpair failed and we were unable to recover it. 00:30:45.437 [2024-06-07 16:39:12.014804] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.437 [2024-06-07 16:39:12.014816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.437 qpair failed and we were unable to recover it. 00:30:45.437 [2024-06-07 16:39:12.015187] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.437 [2024-06-07 16:39:12.015197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.437 qpair failed and we were unable to recover it. 
00:30:45.437 [2024-06-07 16:39:12.015573] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.437 [2024-06-07 16:39:12.015584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.437 qpair failed and we were unable to recover it. 00:30:45.437 [2024-06-07 16:39:12.015956] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.437 [2024-06-07 16:39:12.015966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.437 qpair failed and we were unable to recover it. 00:30:45.437 [2024-06-07 16:39:12.016302] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.437 [2024-06-07 16:39:12.016312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.437 qpair failed and we were unable to recover it. 00:30:45.437 [2024-06-07 16:39:12.016577] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.437 [2024-06-07 16:39:12.016589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.437 qpair failed and we were unable to recover it. 00:30:45.437 [2024-06-07 16:39:12.016882] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.437 [2024-06-07 16:39:12.016892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.437 qpair failed and we were unable to recover it. 
00:30:45.437 [2024-06-07 16:39:12.017260] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.437 [2024-06-07 16:39:12.017270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.437 qpair failed and we were unable to recover it. 00:30:45.437 [2024-06-07 16:39:12.017634] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.437 [2024-06-07 16:39:12.017644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.437 qpair failed and we were unable to recover it. 00:30:45.437 [2024-06-07 16:39:12.018010] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.437 [2024-06-07 16:39:12.018021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.437 qpair failed and we were unable to recover it. 00:30:45.437 [2024-06-07 16:39:12.018379] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.437 [2024-06-07 16:39:12.018390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.437 qpair failed and we were unable to recover it. 00:30:45.437 [2024-06-07 16:39:12.018664] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.437 [2024-06-07 16:39:12.018675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.437 qpair failed and we were unable to recover it. 
00:30:45.437 [2024-06-07 16:39:12.019062] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.437 [2024-06-07 16:39:12.019072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.437 qpair failed and we were unable to recover it. 00:30:45.437 [2024-06-07 16:39:12.019432] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.437 [2024-06-07 16:39:12.019442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.437 qpair failed and we were unable to recover it. 00:30:45.437 [2024-06-07 16:39:12.019786] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.437 [2024-06-07 16:39:12.019797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.437 qpair failed and we were unable to recover it. 00:30:45.437 [2024-06-07 16:39:12.020163] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.437 [2024-06-07 16:39:12.020173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.437 qpair failed and we were unable to recover it. 00:30:45.437 [2024-06-07 16:39:12.020566] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.437 [2024-06-07 16:39:12.020578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.437 qpair failed and we were unable to recover it. 
00:30:45.437 [2024-06-07 16:39:12.020943] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.437 [2024-06-07 16:39:12.020955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.437 qpair failed and we were unable to recover it. 00:30:45.437 [2024-06-07 16:39:12.021310] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.437 [2024-06-07 16:39:12.021320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.437 qpair failed and we were unable to recover it. 00:30:45.437 [2024-06-07 16:39:12.021677] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.437 [2024-06-07 16:39:12.021688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.437 qpair failed and we were unable to recover it. 00:30:45.437 [2024-06-07 16:39:12.022071] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.437 [2024-06-07 16:39:12.022081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.437 qpair failed and we were unable to recover it. 00:30:45.437 [2024-06-07 16:39:12.022338] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.437 [2024-06-07 16:39:12.022349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.437 qpair failed and we were unable to recover it. 
00:30:45.437 [2024-06-07 16:39:12.022721] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.437 [2024-06-07 16:39:12.022731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.437 qpair failed and we were unable to recover it. 00:30:45.437 [2024-06-07 16:39:12.023102] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.437 [2024-06-07 16:39:12.023112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.437 qpair failed and we were unable to recover it. 00:30:45.437 [2024-06-07 16:39:12.023497] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.437 [2024-06-07 16:39:12.023508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.437 qpair failed and we were unable to recover it. 00:30:45.437 [2024-06-07 16:39:12.023873] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.437 [2024-06-07 16:39:12.023883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.437 qpair failed and we were unable to recover it. 00:30:45.438 [2024-06-07 16:39:12.024248] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.438 [2024-06-07 16:39:12.024258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.438 qpair failed and we were unable to recover it. 
00:30:45.438 [2024-06-07 16:39:12.024634] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:45.438 [2024-06-07 16:39:12.024647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420
00:30:45.438 qpair failed and we were unable to recover it.
[... identical connect() failed (errno = 111) / qpair connection error records for tqpair=0x191f270 (10.0.0.2:4420) repeat from 16:39:12.025037 through 16:39:12.066655; duplicates omitted ...]
00:30:45.441 [2024-06-07 16:39:12.067019] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:45.441 [2024-06-07 16:39:12.067030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420
00:30:45.441 qpair failed and we were unable to recover it.
00:30:45.441 [2024-06-07 16:39:12.067399] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.441 [2024-06-07 16:39:12.067414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.441 qpair failed and we were unable to recover it. 00:30:45.441 [2024-06-07 16:39:12.067706] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.441 [2024-06-07 16:39:12.067716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.441 qpair failed and we were unable to recover it. 00:30:45.441 [2024-06-07 16:39:12.068095] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.441 [2024-06-07 16:39:12.068105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.441 qpair failed and we were unable to recover it. 00:30:45.441 [2024-06-07 16:39:12.068475] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.441 [2024-06-07 16:39:12.068486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.441 qpair failed and we were unable to recover it. 00:30:45.441 [2024-06-07 16:39:12.068831] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.441 [2024-06-07 16:39:12.068842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.441 qpair failed and we were unable to recover it. 
00:30:45.441 [2024-06-07 16:39:12.069107] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.441 [2024-06-07 16:39:12.069118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.441 qpair failed and we were unable to recover it. 00:30:45.441 [2024-06-07 16:39:12.069505] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.441 [2024-06-07 16:39:12.069516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.441 qpair failed and we were unable to recover it. 00:30:45.441 [2024-06-07 16:39:12.069791] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.441 [2024-06-07 16:39:12.069803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.441 qpair failed and we were unable to recover it. 00:30:45.441 [2024-06-07 16:39:12.070245] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.441 [2024-06-07 16:39:12.070256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.441 qpair failed and we were unable to recover it. 00:30:45.441 [2024-06-07 16:39:12.070613] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.441 [2024-06-07 16:39:12.070624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.441 qpair failed and we were unable to recover it. 
00:30:45.441 [2024-06-07 16:39:12.071014] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.441 [2024-06-07 16:39:12.071024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.441 qpair failed and we were unable to recover it. 00:30:45.441 [2024-06-07 16:39:12.071390] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.441 [2024-06-07 16:39:12.071404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.441 qpair failed and we were unable to recover it. 00:30:45.441 [2024-06-07 16:39:12.071640] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.441 [2024-06-07 16:39:12.071650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.441 qpair failed and we were unable to recover it. 00:30:45.441 [2024-06-07 16:39:12.071998] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.441 [2024-06-07 16:39:12.072008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.441 qpair failed and we were unable to recover it. 00:30:45.441 [2024-06-07 16:39:12.072398] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.441 [2024-06-07 16:39:12.072413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.441 qpair failed and we were unable to recover it. 
00:30:45.441 [2024-06-07 16:39:12.072766] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.441 [2024-06-07 16:39:12.072777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.441 qpair failed and we were unable to recover it. 00:30:45.441 [2024-06-07 16:39:12.073141] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.441 [2024-06-07 16:39:12.073152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.441 qpair failed and we were unable to recover it. 00:30:45.441 [2024-06-07 16:39:12.073519] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.441 [2024-06-07 16:39:12.073530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.441 qpair failed and we were unable to recover it. 00:30:45.441 [2024-06-07 16:39:12.073875] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.441 [2024-06-07 16:39:12.073885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.441 qpair failed and we were unable to recover it. 00:30:45.441 [2024-06-07 16:39:12.074186] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.441 [2024-06-07 16:39:12.074197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.441 qpair failed and we were unable to recover it. 
00:30:45.441 [2024-06-07 16:39:12.074570] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.441 [2024-06-07 16:39:12.074581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.441 qpair failed and we were unable to recover it. 00:30:45.441 [2024-06-07 16:39:12.074898] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.441 [2024-06-07 16:39:12.074909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.441 qpair failed and we were unable to recover it. 00:30:45.441 [2024-06-07 16:39:12.075323] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.442 [2024-06-07 16:39:12.075334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.442 qpair failed and we were unable to recover it. 00:30:45.442 [2024-06-07 16:39:12.075709] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.442 [2024-06-07 16:39:12.075720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.442 qpair failed and we were unable to recover it. 00:30:45.442 [2024-06-07 16:39:12.076090] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.442 [2024-06-07 16:39:12.076100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.442 qpair failed and we were unable to recover it. 
00:30:45.442 [2024-06-07 16:39:12.076466] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.442 [2024-06-07 16:39:12.076476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.442 qpair failed and we were unable to recover it. 00:30:45.442 [2024-06-07 16:39:12.076832] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.442 [2024-06-07 16:39:12.076843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.442 qpair failed and we were unable to recover it. 00:30:45.442 [2024-06-07 16:39:12.077211] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.442 [2024-06-07 16:39:12.077221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.442 qpair failed and we were unable to recover it. 00:30:45.442 [2024-06-07 16:39:12.077601] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.442 [2024-06-07 16:39:12.077612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.442 qpair failed and we were unable to recover it. 00:30:45.442 [2024-06-07 16:39:12.077983] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.442 [2024-06-07 16:39:12.077994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.442 qpair failed and we were unable to recover it. 
00:30:45.442 [2024-06-07 16:39:12.078413] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.442 [2024-06-07 16:39:12.078424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.442 qpair failed and we were unable to recover it. 00:30:45.442 [2024-06-07 16:39:12.078778] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.442 [2024-06-07 16:39:12.078790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.442 qpair failed and we were unable to recover it. 00:30:45.442 [2024-06-07 16:39:12.079163] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.442 [2024-06-07 16:39:12.079174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.442 qpair failed and we were unable to recover it. 00:30:45.442 [2024-06-07 16:39:12.079541] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.442 [2024-06-07 16:39:12.079553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.442 qpair failed and we were unable to recover it. 00:30:45.442 [2024-06-07 16:39:12.079850] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.442 [2024-06-07 16:39:12.079860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.442 qpair failed and we were unable to recover it. 
00:30:45.442 [2024-06-07 16:39:12.080235] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.442 [2024-06-07 16:39:12.080245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.442 qpair failed and we were unable to recover it. 00:30:45.442 [2024-06-07 16:39:12.080477] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.442 [2024-06-07 16:39:12.080487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.442 qpair failed and we were unable to recover it. 00:30:45.442 [2024-06-07 16:39:12.080816] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.442 [2024-06-07 16:39:12.080826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.442 qpair failed and we were unable to recover it. 00:30:45.442 [2024-06-07 16:39:12.081257] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.442 [2024-06-07 16:39:12.081267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.442 qpair failed and we were unable to recover it. 00:30:45.442 [2024-06-07 16:39:12.081611] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.442 [2024-06-07 16:39:12.081622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.442 qpair failed and we were unable to recover it. 
00:30:45.442 [2024-06-07 16:39:12.081988] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.442 [2024-06-07 16:39:12.081999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.442 qpair failed and we were unable to recover it. 00:30:45.442 [2024-06-07 16:39:12.082250] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.442 [2024-06-07 16:39:12.082260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.442 qpair failed and we were unable to recover it. 00:30:45.442 [2024-06-07 16:39:12.082652] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.442 [2024-06-07 16:39:12.082663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.442 qpair failed and we were unable to recover it. 00:30:45.442 [2024-06-07 16:39:12.082990] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.442 [2024-06-07 16:39:12.083000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.442 qpair failed and we were unable to recover it. 00:30:45.442 [2024-06-07 16:39:12.083383] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.442 [2024-06-07 16:39:12.083393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.442 qpair failed and we were unable to recover it. 
00:30:45.442 [2024-06-07 16:39:12.083748] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.442 [2024-06-07 16:39:12.083759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.442 qpair failed and we were unable to recover it. 00:30:45.442 [2024-06-07 16:39:12.084145] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.442 [2024-06-07 16:39:12.084155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.442 qpair failed and we were unable to recover it. 00:30:45.442 [2024-06-07 16:39:12.084361] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.442 [2024-06-07 16:39:12.084373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.442 qpair failed and we were unable to recover it. 00:30:45.442 [2024-06-07 16:39:12.084652] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.442 [2024-06-07 16:39:12.084665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.442 qpair failed and we were unable to recover it. 00:30:45.442 [2024-06-07 16:39:12.085034] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.442 [2024-06-07 16:39:12.085044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.442 qpair failed and we were unable to recover it. 
00:30:45.442 [2024-06-07 16:39:12.085430] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.442 [2024-06-07 16:39:12.085440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.442 qpair failed and we were unable to recover it. 00:30:45.442 [2024-06-07 16:39:12.085841] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.442 [2024-06-07 16:39:12.085851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.442 qpair failed and we were unable to recover it. 00:30:45.442 [2024-06-07 16:39:12.086220] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.442 [2024-06-07 16:39:12.086230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.442 qpair failed and we were unable to recover it. 00:30:45.442 [2024-06-07 16:39:12.086598] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.442 [2024-06-07 16:39:12.086609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.442 qpair failed and we were unable to recover it. 00:30:45.442 [2024-06-07 16:39:12.086992] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.442 [2024-06-07 16:39:12.087002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.442 qpair failed and we were unable to recover it. 
00:30:45.442 [2024-06-07 16:39:12.087306] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.442 [2024-06-07 16:39:12.087316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.442 qpair failed and we were unable to recover it. 00:30:45.442 [2024-06-07 16:39:12.087695] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.442 [2024-06-07 16:39:12.087706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.442 qpair failed and we were unable to recover it. 00:30:45.442 [2024-06-07 16:39:12.088072] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.442 [2024-06-07 16:39:12.088084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.442 qpair failed and we were unable to recover it. 00:30:45.442 [2024-06-07 16:39:12.088479] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.442 [2024-06-07 16:39:12.088490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.442 qpair failed and we were unable to recover it. 00:30:45.443 [2024-06-07 16:39:12.088874] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.443 [2024-06-07 16:39:12.088885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.443 qpair failed and we were unable to recover it. 
00:30:45.443 [2024-06-07 16:39:12.089249] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.443 [2024-06-07 16:39:12.089259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.443 qpair failed and we were unable to recover it. 00:30:45.443 [2024-06-07 16:39:12.089629] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.443 [2024-06-07 16:39:12.089640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.443 qpair failed and we were unable to recover it. 00:30:45.443 [2024-06-07 16:39:12.090027] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.443 [2024-06-07 16:39:12.090038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.443 qpair failed and we were unable to recover it. 00:30:45.443 [2024-06-07 16:39:12.090408] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.443 [2024-06-07 16:39:12.090419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.443 qpair failed and we were unable to recover it. 00:30:45.443 [2024-06-07 16:39:12.090763] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.443 [2024-06-07 16:39:12.090774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.443 qpair failed and we were unable to recover it. 
00:30:45.443 [2024-06-07 16:39:12.091145] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.443 [2024-06-07 16:39:12.091155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.443 qpair failed and we were unable to recover it. 00:30:45.443 [2024-06-07 16:39:12.091659] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.443 [2024-06-07 16:39:12.091697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.443 qpair failed and we were unable to recover it. 00:30:45.443 [2024-06-07 16:39:12.092071] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.443 [2024-06-07 16:39:12.092085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.443 qpair failed and we were unable to recover it. 00:30:45.443 [2024-06-07 16:39:12.092425] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.443 [2024-06-07 16:39:12.092437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.443 qpair failed and we were unable to recover it. 00:30:45.443 [2024-06-07 16:39:12.092864] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.443 [2024-06-07 16:39:12.092875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.443 qpair failed and we were unable to recover it. 
00:30:45.443 [2024-06-07 16:39:12.093133] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.443 [2024-06-07 16:39:12.093143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.443 qpair failed and we were unable to recover it. 00:30:45.443 [2024-06-07 16:39:12.093512] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.443 [2024-06-07 16:39:12.093523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.443 qpair failed and we were unable to recover it. 00:30:45.443 [2024-06-07 16:39:12.093899] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.443 [2024-06-07 16:39:12.093910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.443 qpair failed and we were unable to recover it. 00:30:45.443 [2024-06-07 16:39:12.094278] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.443 [2024-06-07 16:39:12.094289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.443 qpair failed and we were unable to recover it. 00:30:45.443 [2024-06-07 16:39:12.094669] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.443 [2024-06-07 16:39:12.094680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.443 qpair failed and we were unable to recover it. 
00:30:45.443 [2024-06-07 16:39:12.095048] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.443 [2024-06-07 16:39:12.095062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.443 qpair failed and we were unable to recover it.
00:30:45.446 [2024-06-07 16:39:12.136702] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.446 [2024-06-07 16:39:12.136713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.446 qpair failed and we were unable to recover it. 00:30:45.446 [2024-06-07 16:39:12.137078] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.446 [2024-06-07 16:39:12.137088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.446 qpair failed and we were unable to recover it. 00:30:45.446 [2024-06-07 16:39:12.137457] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.446 [2024-06-07 16:39:12.137468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.446 qpair failed and we were unable to recover it. 00:30:45.446 [2024-06-07 16:39:12.137834] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.446 [2024-06-07 16:39:12.137846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.446 qpair failed and we were unable to recover it. 00:30:45.446 [2024-06-07 16:39:12.138214] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.446 [2024-06-07 16:39:12.138225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.446 qpair failed and we were unable to recover it. 
00:30:45.446 [2024-06-07 16:39:12.138592] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.446 [2024-06-07 16:39:12.138602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.446 qpair failed and we were unable to recover it. 00:30:45.446 [2024-06-07 16:39:12.138972] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.446 [2024-06-07 16:39:12.138983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.446 qpair failed and we were unable to recover it. 00:30:45.446 [2024-06-07 16:39:12.139325] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.446 [2024-06-07 16:39:12.139337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.446 qpair failed and we were unable to recover it. 00:30:45.446 [2024-06-07 16:39:12.139708] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.446 [2024-06-07 16:39:12.139719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.446 qpair failed and we were unable to recover it. 00:30:45.446 [2024-06-07 16:39:12.140078] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.446 [2024-06-07 16:39:12.140088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.446 qpair failed and we were unable to recover it. 
00:30:45.446 [2024-06-07 16:39:12.140452] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.446 [2024-06-07 16:39:12.140463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.446 qpair failed and we were unable to recover it. 00:30:45.446 [2024-06-07 16:39:12.140836] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.447 [2024-06-07 16:39:12.140847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.447 qpair failed and we were unable to recover it. 00:30:45.447 [2024-06-07 16:39:12.141235] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.447 [2024-06-07 16:39:12.141245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.447 qpair failed and we were unable to recover it. 00:30:45.447 [2024-06-07 16:39:12.141611] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.447 [2024-06-07 16:39:12.141622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.447 qpair failed and we were unable to recover it. 00:30:45.447 [2024-06-07 16:39:12.141990] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.447 [2024-06-07 16:39:12.142001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.447 qpair failed and we were unable to recover it. 
00:30:45.447 [2024-06-07 16:39:12.142388] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.447 [2024-06-07 16:39:12.142398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.447 qpair failed and we were unable to recover it. 00:30:45.447 [2024-06-07 16:39:12.142779] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.447 [2024-06-07 16:39:12.142789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.447 qpair failed and we were unable to recover it. 00:30:45.447 [2024-06-07 16:39:12.143154] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.447 [2024-06-07 16:39:12.143164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.447 qpair failed and we were unable to recover it. 00:30:45.447 [2024-06-07 16:39:12.143527] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.447 [2024-06-07 16:39:12.143538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.447 qpair failed and we were unable to recover it. 00:30:45.447 [2024-06-07 16:39:12.143903] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.447 [2024-06-07 16:39:12.143914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.447 qpair failed and we were unable to recover it. 
00:30:45.447 [2024-06-07 16:39:12.144303] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.447 [2024-06-07 16:39:12.144314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.447 qpair failed and we were unable to recover it. 00:30:45.447 [2024-06-07 16:39:12.144687] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.447 [2024-06-07 16:39:12.144698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.447 qpair failed and we were unable to recover it. 00:30:45.447 [2024-06-07 16:39:12.145069] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.447 [2024-06-07 16:39:12.145079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.447 qpair failed and we were unable to recover it. 00:30:45.447 [2024-06-07 16:39:12.145444] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.447 [2024-06-07 16:39:12.145455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.447 qpair failed and we were unable to recover it. 00:30:45.447 [2024-06-07 16:39:12.145813] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.447 [2024-06-07 16:39:12.145824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.447 qpair failed and we were unable to recover it. 
00:30:45.447 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 3300132 Killed "${NVMF_APP[@]}" "$@"
00:30:45.447 16:39:12 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2
00:30:45.447 16:39:12 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:30:45.447 16:39:12 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:30:45.447 16:39:12 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@723 -- # xtrace_disable
00:30:45.447 16:39:12 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
[... interleaved connect() failed / qpair failed retry errors trimmed ...]
[... log trimmed: the connect() failed / qpair failed retry triple continues repeating with advancing timestamps ...]
00:30:45.448 16:39:12 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=3301626
00:30:45.448 16:39:12 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 3301626
00:30:45.448 16:39:12 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:30:45.448 16:39:12 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@830 -- # '[' -z 3301626 ']'
00:30:45.448 16:39:12 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock
00:30:45.448 16:39:12 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # local max_retries=100
00:30:45.448 16:39:12 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:30:45.448 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:30:45.448 16:39:12 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # xtrace_disable
00:30:45.448 16:39:12 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
[... interleaved connect() failed / qpair failed retry errors trimmed ...]
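The trace shows `waitforlisten 3301626` polling (with `max_retries=100`) until the relaunched `nvmf_tgt` is listening on `/var/tmp/spdk.sock`. As a hedged sketch only, assuming the helper simply polls for the UNIX domain socket to appear (the real SPDK helper does more, e.g. verifying the pid is still alive), such a wait loop could look like:

```shell
# Hypothetical wait_for_sock: poll until a UNIX domain socket exists,
# giving up after max_retries attempts. NOT the real SPDK waitforlisten;
# an illustration of the wait-until-listening pattern seen in the trace.
wait_for_sock() {
    local sock_path=$1 max_retries=${2:-100} i
    for ((i = 0; i < max_retries; i++)); do
        # -S is true when the path exists and is a socket
        [[ -S $sock_path ]] && return 0
        sleep 0.1
    done
    return 1
}
```

Used as `wait_for_sock /var/tmp/spdk.sock 100 || echo "target never came up"`, this fails fast when the app crashed instead of hanging the test forever.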
[... log trimmed: the connect() failed / qpair failed retry triple continues repeating, ending with ...]
00:30:45.449 [2024-06-07 16:39:12.169796] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:45.449 [2024-06-07 16:39:12.169807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420
00:30:45.449 qpair failed and we were unable to recover it.
00:30:45.449 [2024-06-07 16:39:12.170170] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.449 [2024-06-07 16:39:12.170181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.449 qpair failed and we were unable to recover it. 00:30:45.449 [2024-06-07 16:39:12.170475] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.449 [2024-06-07 16:39:12.170486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.449 qpair failed and we were unable to recover it. 00:30:45.449 [2024-06-07 16:39:12.170960] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.449 [2024-06-07 16:39:12.170971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.449 qpair failed and we were unable to recover it. 00:30:45.449 [2024-06-07 16:39:12.171340] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.449 [2024-06-07 16:39:12.171352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.449 qpair failed and we were unable to recover it. 00:30:45.449 [2024-06-07 16:39:12.171739] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.449 [2024-06-07 16:39:12.171752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.449 qpair failed and we were unable to recover it. 
00:30:45.449 [2024-06-07 16:39:12.172121] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.449 [2024-06-07 16:39:12.172133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.449 qpair failed and we were unable to recover it. 00:30:45.449 [2024-06-07 16:39:12.172529] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.449 [2024-06-07 16:39:12.172541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.449 qpair failed and we were unable to recover it. 00:30:45.449 [2024-06-07 16:39:12.172819] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.449 [2024-06-07 16:39:12.172830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.449 qpair failed and we were unable to recover it. 00:30:45.449 [2024-06-07 16:39:12.173196] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.449 [2024-06-07 16:39:12.173207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.449 qpair failed and we were unable to recover it. 00:30:45.449 [2024-06-07 16:39:12.173575] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.449 [2024-06-07 16:39:12.173585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.449 qpair failed and we were unable to recover it. 
00:30:45.449 [2024-06-07 16:39:12.173923] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.449 [2024-06-07 16:39:12.173934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.449 qpair failed and we were unable to recover it. 00:30:45.449 [2024-06-07 16:39:12.174206] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.449 [2024-06-07 16:39:12.174218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.449 qpair failed and we were unable to recover it. 00:30:45.449 [2024-06-07 16:39:12.174500] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.449 [2024-06-07 16:39:12.174511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.449 qpair failed and we were unable to recover it. 00:30:45.449 [2024-06-07 16:39:12.174727] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.449 [2024-06-07 16:39:12.174737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.449 qpair failed and we were unable to recover it. 00:30:45.449 [2024-06-07 16:39:12.175133] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.449 [2024-06-07 16:39:12.175143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.449 qpair failed and we were unable to recover it. 
00:30:45.449 [2024-06-07 16:39:12.175519] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.449 [2024-06-07 16:39:12.175530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.449 qpair failed and we were unable to recover it. 00:30:45.449 [2024-06-07 16:39:12.175817] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.449 [2024-06-07 16:39:12.175827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.449 qpair failed and we were unable to recover it. 00:30:45.449 [2024-06-07 16:39:12.176191] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.449 [2024-06-07 16:39:12.176202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.449 qpair failed and we were unable to recover it. 00:30:45.449 [2024-06-07 16:39:12.176594] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.449 [2024-06-07 16:39:12.176605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.449 qpair failed and we were unable to recover it. 00:30:45.449 [2024-06-07 16:39:12.176863] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.449 [2024-06-07 16:39:12.176873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.449 qpair failed and we were unable to recover it. 
00:30:45.449 [2024-06-07 16:39:12.177251] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.449 [2024-06-07 16:39:12.177262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.450 qpair failed and we were unable to recover it. 00:30:45.450 [2024-06-07 16:39:12.177629] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.450 [2024-06-07 16:39:12.177640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.450 qpair failed and we were unable to recover it. 00:30:45.450 [2024-06-07 16:39:12.177729] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.450 [2024-06-07 16:39:12.177740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420 00:30:45.450 qpair failed and we were unable to recover it. 00:30:45.450 [2024-06-07 16:39:12.178090] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.450 [2024-06-07 16:39:12.178120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.450 qpair failed and we were unable to recover it. 00:30:45.450 [2024-06-07 16:39:12.178676] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.450 [2024-06-07 16:39:12.178705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.450 qpair failed and we were unable to recover it. 
00:30:45.450 [2024-06-07 16:39:12.178972] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.450 [2024-06-07 16:39:12.178981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.450 qpair failed and we were unable to recover it. 00:30:45.450 [2024-06-07 16:39:12.179263] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.450 [2024-06-07 16:39:12.179270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.450 qpair failed and we were unable to recover it. 00:30:45.450 [2024-06-07 16:39:12.179647] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.450 [2024-06-07 16:39:12.179676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.450 qpair failed and we were unable to recover it. 00:30:45.450 [2024-06-07 16:39:12.180058] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.450 [2024-06-07 16:39:12.180068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.450 qpair failed and we were unable to recover it. 00:30:45.450 [2024-06-07 16:39:12.180294] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.450 [2024-06-07 16:39:12.180302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.450 qpair failed and we were unable to recover it. 
00:30:45.450 [2024-06-07 16:39:12.180661] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.450 [2024-06-07 16:39:12.180669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.450 qpair failed and we were unable to recover it. 00:30:45.450 [2024-06-07 16:39:12.181002] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.450 [2024-06-07 16:39:12.181013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.450 qpair failed and we were unable to recover it. 00:30:45.450 [2024-06-07 16:39:12.181390] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.450 [2024-06-07 16:39:12.181398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.450 qpair failed and we were unable to recover it. 00:30:45.450 [2024-06-07 16:39:12.181650] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.450 [2024-06-07 16:39:12.181658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.450 qpair failed and we were unable to recover it. 00:30:45.450 [2024-06-07 16:39:12.182045] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.450 [2024-06-07 16:39:12.182054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.450 qpair failed and we were unable to recover it. 
00:30:45.450 [2024-06-07 16:39:12.182435] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.450 [2024-06-07 16:39:12.182443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.450 qpair failed and we were unable to recover it. 00:30:45.450 [2024-06-07 16:39:12.182823] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.450 [2024-06-07 16:39:12.182830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.450 qpair failed and we were unable to recover it. 00:30:45.450 [2024-06-07 16:39:12.183240] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.450 [2024-06-07 16:39:12.183248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.450 qpair failed and we were unable to recover it. 00:30:45.450 [2024-06-07 16:39:12.183640] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.450 [2024-06-07 16:39:12.183648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.450 qpair failed and we were unable to recover it. 00:30:45.450 [2024-06-07 16:39:12.184011] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.450 [2024-06-07 16:39:12.184019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.450 qpair failed and we were unable to recover it. 
00:30:45.450 [2024-06-07 16:39:12.184232] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.450 [2024-06-07 16:39:12.184241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.450 qpair failed and we were unable to recover it. 00:30:45.450 [2024-06-07 16:39:12.184586] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.450 [2024-06-07 16:39:12.184595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.450 qpair failed and we were unable to recover it. 00:30:45.450 [2024-06-07 16:39:12.184992] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.450 [2024-06-07 16:39:12.185000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.450 qpair failed and we were unable to recover it. 00:30:45.450 [2024-06-07 16:39:12.185386] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.450 [2024-06-07 16:39:12.185394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.450 qpair failed and we were unable to recover it. 00:30:45.450 [2024-06-07 16:39:12.185774] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.450 [2024-06-07 16:39:12.185782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.450 qpair failed and we were unable to recover it. 
00:30:45.450 [2024-06-07 16:39:12.186061] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.450 [2024-06-07 16:39:12.186069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.450 qpair failed and we were unable to recover it. 00:30:45.450 [2024-06-07 16:39:12.186458] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.450 [2024-06-07 16:39:12.186466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.450 qpair failed and we were unable to recover it. 00:30:45.450 [2024-06-07 16:39:12.186901] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.450 [2024-06-07 16:39:12.186909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.450 qpair failed and we were unable to recover it. 00:30:45.450 [2024-06-07 16:39:12.187237] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.450 [2024-06-07 16:39:12.187245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.450 qpair failed and we were unable to recover it. 00:30:45.450 [2024-06-07 16:39:12.187483] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.450 [2024-06-07 16:39:12.187491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.450 qpair failed and we were unable to recover it. 
00:30:45.450 [2024-06-07 16:39:12.187775] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.450 [2024-06-07 16:39:12.187782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.450 qpair failed and we were unable to recover it. 00:30:45.450 [2024-06-07 16:39:12.188003] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.450 [2024-06-07 16:39:12.188011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.450 qpair failed and we were unable to recover it. 00:30:45.450 [2024-06-07 16:39:12.188261] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.450 [2024-06-07 16:39:12.188269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.450 qpair failed and we were unable to recover it. 00:30:45.450 [2024-06-07 16:39:12.188548] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.450 [2024-06-07 16:39:12.188558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.450 qpair failed and we were unable to recover it. 00:30:45.451 [2024-06-07 16:39:12.188975] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.451 [2024-06-07 16:39:12.188983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.451 qpair failed and we were unable to recover it. 
00:30:45.451 [2024-06-07 16:39:12.189170] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.451 [2024-06-07 16:39:12.189178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.451 qpair failed and we were unable to recover it. 00:30:45.451 [2024-06-07 16:39:12.189534] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.451 [2024-06-07 16:39:12.189542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.451 qpair failed and we were unable to recover it. 00:30:45.451 [2024-06-07 16:39:12.189922] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.451 [2024-06-07 16:39:12.189930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.451 qpair failed and we were unable to recover it. 00:30:45.451 [2024-06-07 16:39:12.190274] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.451 [2024-06-07 16:39:12.190282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.451 qpair failed and we were unable to recover it. 00:30:45.451 [2024-06-07 16:39:12.190577] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.451 [2024-06-07 16:39:12.190586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.451 qpair failed and we were unable to recover it. 
00:30:45.451 [2024-06-07 16:39:12.191029] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.451 [2024-06-07 16:39:12.191036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.451 qpair failed and we were unable to recover it. 00:30:45.451 [2024-06-07 16:39:12.191386] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.451 [2024-06-07 16:39:12.191394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.451 qpair failed and we were unable to recover it. 00:30:45.451 [2024-06-07 16:39:12.191772] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.451 [2024-06-07 16:39:12.191780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.451 qpair failed and we were unable to recover it. 00:30:45.451 [2024-06-07 16:39:12.192154] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.451 [2024-06-07 16:39:12.192162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.451 qpair failed and we were unable to recover it. 00:30:45.451 [2024-06-07 16:39:12.192656] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.451 [2024-06-07 16:39:12.192684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.451 qpair failed and we were unable to recover it. 
00:30:45.451 [2024-06-07 16:39:12.193076] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.451 [2024-06-07 16:39:12.193085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.451 qpair failed and we were unable to recover it. 00:30:45.451 [2024-06-07 16:39:12.193493] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.451 [2024-06-07 16:39:12.193501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.451 qpair failed and we were unable to recover it. 00:30:45.451 [2024-06-07 16:39:12.193769] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.451 [2024-06-07 16:39:12.193779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.451 qpair failed and we were unable to recover it. 00:30:45.451 [2024-06-07 16:39:12.194179] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.451 [2024-06-07 16:39:12.194187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.451 qpair failed and we were unable to recover it. 00:30:45.451 [2024-06-07 16:39:12.194469] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.451 [2024-06-07 16:39:12.194477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.451 qpair failed and we were unable to recover it. 
00:30:45.451 [2024-06-07 16:39:12.194830] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.451 [2024-06-07 16:39:12.194838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.451 qpair failed and we were unable to recover it. 00:30:45.451 [2024-06-07 16:39:12.195219] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.451 [2024-06-07 16:39:12.195231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.451 qpair failed and we were unable to recover it. 00:30:45.451 [2024-06-07 16:39:12.195463] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.451 [2024-06-07 16:39:12.195471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.451 qpair failed and we were unable to recover it. 00:30:45.451 [2024-06-07 16:39:12.195712] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.451 [2024-06-07 16:39:12.195719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.451 qpair failed and we were unable to recover it. 00:30:45.451 [2024-06-07 16:39:12.196122] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.451 [2024-06-07 16:39:12.196130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.451 qpair failed and we were unable to recover it. 
00:30:45.451 [2024-06-07 16:39:12.196357] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.451 [2024-06-07 16:39:12.196364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.451 qpair failed and we were unable to recover it. 00:30:45.451 [2024-06-07 16:39:12.196738] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.451 [2024-06-07 16:39:12.196746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.451 qpair failed and we were unable to recover it. 00:30:45.451 [2024-06-07 16:39:12.197119] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.451 [2024-06-07 16:39:12.197126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.451 qpair failed and we were unable to recover it. 00:30:45.451 [2024-06-07 16:39:12.197519] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.451 [2024-06-07 16:39:12.197526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.451 qpair failed and we were unable to recover it. 00:30:45.451 [2024-06-07 16:39:12.197900] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.451 [2024-06-07 16:39:12.197908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.451 qpair failed and we were unable to recover it. 
00:30:45.451 [2024-06-07 16:39:12.198325] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.451 [2024-06-07 16:39:12.198333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.451 qpair failed and we were unable to recover it. 00:30:45.451 [2024-06-07 16:39:12.198590] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.451 [2024-06-07 16:39:12.198598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.451 qpair failed and we were unable to recover it. 00:30:45.451 [2024-06-07 16:39:12.198871] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.451 [2024-06-07 16:39:12.198879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.451 qpair failed and we were unable to recover it. 00:30:45.451 [2024-06-07 16:39:12.198960] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.451 [2024-06-07 16:39:12.198968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.451 qpair failed and we were unable to recover it. 00:30:45.451 [2024-06-07 16:39:12.199384] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.451 [2024-06-07 16:39:12.199392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.451 qpair failed and we were unable to recover it. 
00:30:45.451 [2024-06-07 16:39:12.199627] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.451 [2024-06-07 16:39:12.199635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.451 qpair failed and we were unable to recover it. 00:30:45.451 [2024-06-07 16:39:12.200011] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.451 [2024-06-07 16:39:12.200019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.451 qpair failed and we were unable to recover it. 00:30:45.451 [2024-06-07 16:39:12.200419] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.451 [2024-06-07 16:39:12.200427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.451 qpair failed and we were unable to recover it. 00:30:45.451 [2024-06-07 16:39:12.200781] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.451 [2024-06-07 16:39:12.200788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.451 qpair failed and we were unable to recover it. 00:30:45.451 [2024-06-07 16:39:12.201164] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.451 [2024-06-07 16:39:12.201172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.451 qpair failed and we were unable to recover it. 
00:30:45.451 [2024-06-07 16:39:12.201552] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.451 [2024-06-07 16:39:12.201560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.451 qpair failed and we were unable to recover it. 00:30:45.451 [2024-06-07 16:39:12.201927] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.452 [2024-06-07 16:39:12.201935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.452 qpair failed and we were unable to recover it. 00:30:45.452 [2024-06-07 16:39:12.202312] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.452 [2024-06-07 16:39:12.202319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.452 qpair failed and we were unable to recover it. 00:30:45.452 [2024-06-07 16:39:12.202564] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.452 [2024-06-07 16:39:12.202571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.452 qpair failed and we were unable to recover it. 00:30:45.452 [2024-06-07 16:39:12.202969] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.452 [2024-06-07 16:39:12.202976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.452 qpair failed and we were unable to recover it. 
00:30:45.452 [2024-06-07 16:39:12.203381] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.452 [2024-06-07 16:39:12.203389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.452 qpair failed and we were unable to recover it. 00:30:45.452 [2024-06-07 16:39:12.203758] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.452 [2024-06-07 16:39:12.203766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.452 qpair failed and we were unable to recover it. 00:30:45.452 [2024-06-07 16:39:12.204139] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.452 [2024-06-07 16:39:12.204146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.452 qpair failed and we were unable to recover it. 00:30:45.452 [2024-06-07 16:39:12.204523] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.452 [2024-06-07 16:39:12.204531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.452 qpair failed and we were unable to recover it. 00:30:45.452 [2024-06-07 16:39:12.204871] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.452 [2024-06-07 16:39:12.204879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.452 qpair failed and we were unable to recover it. 
00:30:45.452 [2024-06-07 16:39:12.205081] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.452 [2024-06-07 16:39:12.205089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.452 qpair failed and we were unable to recover it. 00:30:45.452 [2024-06-07 16:39:12.205280] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.452 [2024-06-07 16:39:12.205289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.452 qpair failed and we were unable to recover it. 00:30:45.452 [2024-06-07 16:39:12.205647] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.452 [2024-06-07 16:39:12.205655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.452 qpair failed and we were unable to recover it. 00:30:45.452 [2024-06-07 16:39:12.205895] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.452 [2024-06-07 16:39:12.205903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.452 qpair failed and we were unable to recover it. 00:30:45.452 [2024-06-07 16:39:12.206278] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.452 [2024-06-07 16:39:12.206285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.452 qpair failed and we were unable to recover it. 
00:30:45.452 [2024-06-07 16:39:12.206686] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.452 [2024-06-07 16:39:12.206694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.452 qpair failed and we were unable to recover it. 00:30:45.452 [2024-06-07 16:39:12.207029] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.452 [2024-06-07 16:39:12.207036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.452 qpair failed and we were unable to recover it. 00:30:45.452 [2024-06-07 16:39:12.207136] Starting SPDK v24.09-pre git sha1 5a57befde / DPDK 24.03.0 initialization... 00:30:45.452 [2024-06-07 16:39:12.207179] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:45.452 [2024-06-07 16:39:12.207361] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.452 [2024-06-07 16:39:12.207369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.452 qpair failed and we were unable to recover it. 00:30:45.452 [2024-06-07 16:39:12.207654] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.452 [2024-06-07 16:39:12.207661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.452 qpair failed and we were unable to recover it. 
00:30:45.452 [2024-06-07 16:39:12.208051] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.452 [2024-06-07 16:39:12.208059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.452 qpair failed and we were unable to recover it. 00:30:45.452 [2024-06-07 16:39:12.208433] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.452 [2024-06-07 16:39:12.208441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.452 qpair failed and we were unable to recover it. 00:30:45.452 [2024-06-07 16:39:12.208703] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.452 [2024-06-07 16:39:12.208711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.452 qpair failed and we were unable to recover it. 00:30:45.452 [2024-06-07 16:39:12.209089] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.452 [2024-06-07 16:39:12.209097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.452 qpair failed and we were unable to recover it. 00:30:45.452 [2024-06-07 16:39:12.209418] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.452 [2024-06-07 16:39:12.209427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.452 qpair failed and we were unable to recover it. 
00:30:45.452 [2024-06-07 16:39:12.209774] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.452 [2024-06-07 16:39:12.209783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.452 qpair failed and we were unable to recover it. 00:30:45.452 [2024-06-07 16:39:12.210181] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.452 [2024-06-07 16:39:12.210189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.452 qpair failed and we were unable to recover it. 00:30:45.452 [2024-06-07 16:39:12.210567] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.452 [2024-06-07 16:39:12.210575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.452 qpair failed and we were unable to recover it. 00:30:45.452 [2024-06-07 16:39:12.210963] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.452 [2024-06-07 16:39:12.210971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.452 qpair failed and we were unable to recover it. 00:30:45.452 [2024-06-07 16:39:12.211209] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.452 [2024-06-07 16:39:12.211217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.452 qpair failed and we were unable to recover it. 
00:30:45.452 [2024-06-07 16:39:12.211637] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.452 [2024-06-07 16:39:12.211646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.452 qpair failed and we were unable to recover it. 00:30:45.452 [2024-06-07 16:39:12.212021] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.452 [2024-06-07 16:39:12.212030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.452 qpair failed and we were unable to recover it. 00:30:45.452 [2024-06-07 16:39:12.212409] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.452 [2024-06-07 16:39:12.212418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.452 qpair failed and we were unable to recover it. 00:30:45.452 [2024-06-07 16:39:12.212785] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.452 [2024-06-07 16:39:12.212794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.452 qpair failed and we were unable to recover it. 00:30:45.452 [2024-06-07 16:39:12.213121] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.452 [2024-06-07 16:39:12.213129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.452 qpair failed and we were unable to recover it. 
00:30:45.453 [2024-06-07 16:39:12.213462] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.453 [2024-06-07 16:39:12.213471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.453 qpair failed and we were unable to recover it. 00:30:45.453 [2024-06-07 16:39:12.213848] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.453 [2024-06-07 16:39:12.213856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.453 qpair failed and we were unable to recover it. 00:30:45.453 [2024-06-07 16:39:12.214275] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.453 [2024-06-07 16:39:12.214284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.453 qpair failed and we were unable to recover it. 00:30:45.453 [2024-06-07 16:39:12.214693] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.453 [2024-06-07 16:39:12.214701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.453 qpair failed and we were unable to recover it. 00:30:45.453 [2024-06-07 16:39:12.215072] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.453 [2024-06-07 16:39:12.215080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.453 qpair failed and we were unable to recover it. 
00:30:45.453 [2024-06-07 16:39:12.215454] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.453 [2024-06-07 16:39:12.215463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.453 qpair failed and we were unable to recover it. 00:30:45.453 [2024-06-07 16:39:12.215741] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.453 [2024-06-07 16:39:12.215749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.453 qpair failed and we were unable to recover it. 00:30:45.453 [2024-06-07 16:39:12.216146] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.453 [2024-06-07 16:39:12.216155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.453 qpair failed and we were unable to recover it. 00:30:45.453 [2024-06-07 16:39:12.216508] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.453 [2024-06-07 16:39:12.216517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.453 qpair failed and we were unable to recover it. 00:30:45.453 [2024-06-07 16:39:12.216945] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.453 [2024-06-07 16:39:12.216953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.453 qpair failed and we were unable to recover it. 
00:30:45.453 [2024-06-07 16:39:12.217182] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.453 [2024-06-07 16:39:12.217191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.453 qpair failed and we were unable to recover it. 00:30:45.453 [2024-06-07 16:39:12.217427] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.453 [2024-06-07 16:39:12.217436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.453 qpair failed and we were unable to recover it. 00:30:45.453 [2024-06-07 16:39:12.217818] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.453 [2024-06-07 16:39:12.217826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.453 qpair failed and we were unable to recover it. 00:30:45.453 [2024-06-07 16:39:12.218202] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.453 [2024-06-07 16:39:12.218211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.453 qpair failed and we were unable to recover it. 00:30:45.453 [2024-06-07 16:39:12.218587] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.453 [2024-06-07 16:39:12.218595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.453 qpair failed and we were unable to recover it. 
00:30:45.453 [2024-06-07 16:39:12.218995] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.453 [2024-06-07 16:39:12.219003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.453 qpair failed and we were unable to recover it. 00:30:45.453 [2024-06-07 16:39:12.219380] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.453 [2024-06-07 16:39:12.219389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.453 qpair failed and we were unable to recover it. 00:30:45.453 [2024-06-07 16:39:12.219758] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.453 [2024-06-07 16:39:12.219767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.453 qpair failed and we were unable to recover it. 00:30:45.453 [2024-06-07 16:39:12.220139] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.453 [2024-06-07 16:39:12.220148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.453 qpair failed and we were unable to recover it. 00:30:45.453 [2024-06-07 16:39:12.220434] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.453 [2024-06-07 16:39:12.220443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.453 qpair failed and we were unable to recover it. 
00:30:45.453 [2024-06-07 16:39:12.220813] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.453 [2024-06-07 16:39:12.220821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.453 qpair failed and we were unable to recover it. 00:30:45.453 [2024-06-07 16:39:12.221223] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.453 [2024-06-07 16:39:12.221232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.453 qpair failed and we were unable to recover it. 00:30:45.453 [2024-06-07 16:39:12.221605] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.453 [2024-06-07 16:39:12.221614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.453 qpair failed and we were unable to recover it. 00:30:45.453 [2024-06-07 16:39:12.222012] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.453 [2024-06-07 16:39:12.222020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.453 qpair failed and we were unable to recover it. 00:30:45.453 [2024-06-07 16:39:12.222394] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.453 [2024-06-07 16:39:12.222405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.453 qpair failed and we were unable to recover it. 
00:30:45.453 [2024-06-07 16:39:12.222791] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.453 [2024-06-07 16:39:12.222799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.453 qpair failed and we were unable to recover it. 00:30:45.453 [2024-06-07 16:39:12.223174] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.453 [2024-06-07 16:39:12.223184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.453 qpair failed and we were unable to recover it. 00:30:45.453 [2024-06-07 16:39:12.223630] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.453 [2024-06-07 16:39:12.223659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.453 qpair failed and we were unable to recover it. 00:30:45.453 [2024-06-07 16:39:12.223859] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.453 [2024-06-07 16:39:12.223869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.453 qpair failed and we were unable to recover it. 00:30:45.453 [2024-06-07 16:39:12.224223] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.453 [2024-06-07 16:39:12.224231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.453 qpair failed and we were unable to recover it. 
00:30:45.453 [2024-06-07 16:39:12.224428] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.453 [2024-06-07 16:39:12.224437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.453 qpair failed and we were unable to recover it. 00:30:45.453 [2024-06-07 16:39:12.224881] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.453 [2024-06-07 16:39:12.224889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.453 qpair failed and we were unable to recover it. 00:30:45.453 [2024-06-07 16:39:12.225263] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.454 [2024-06-07 16:39:12.225270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.454 qpair failed and we were unable to recover it. 00:30:45.454 [2024-06-07 16:39:12.225637] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.454 [2024-06-07 16:39:12.225645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.454 qpair failed and we were unable to recover it. 00:30:45.454 [2024-06-07 16:39:12.226022] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.454 [2024-06-07 16:39:12.226030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.454 qpair failed and we were unable to recover it. 
00:30:45.454 [2024-06-07 16:39:12.226361] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.454 [2024-06-07 16:39:12.226369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.454 qpair failed and we were unable to recover it. 00:30:45.454 [2024-06-07 16:39:12.226757] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.454 [2024-06-07 16:39:12.226765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.454 qpair failed and we were unable to recover it. 00:30:45.454 [2024-06-07 16:39:12.227140] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.454 [2024-06-07 16:39:12.227148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.454 qpair failed and we were unable to recover it. 00:30:45.454 [2024-06-07 16:39:12.227524] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.454 [2024-06-07 16:39:12.227531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.454 qpair failed and we were unable to recover it. 00:30:45.454 [2024-06-07 16:39:12.227943] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.454 [2024-06-07 16:39:12.227953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.454 qpair failed and we were unable to recover it. 
00:30:45.454 [2024-06-07 16:39:12.228163] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.454 [2024-06-07 16:39:12.228172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.454 qpair failed and we were unable to recover it. 00:30:45.454 [2024-06-07 16:39:12.228418] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.454 [2024-06-07 16:39:12.228426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.454 qpair failed and we were unable to recover it. 00:30:45.454 [2024-06-07 16:39:12.228812] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.454 [2024-06-07 16:39:12.228820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.454 qpair failed and we were unable to recover it. 00:30:45.454 [2024-06-07 16:39:12.229179] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.454 [2024-06-07 16:39:12.229188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.454 qpair failed and we were unable to recover it. 00:30:45.454 [2024-06-07 16:39:12.229558] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.454 [2024-06-07 16:39:12.229566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.454 qpair failed and we were unable to recover it. 
00:30:45.454 [2024-06-07 16:39:12.229857] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.454 [2024-06-07 16:39:12.229865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.454 qpair failed and we were unable to recover it. 00:30:45.454 [2024-06-07 16:39:12.230238] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.454 [2024-06-07 16:39:12.230246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.454 qpair failed and we were unable to recover it. 00:30:45.454 [2024-06-07 16:39:12.230608] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.454 [2024-06-07 16:39:12.230616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.454 qpair failed and we were unable to recover it. 00:30:45.454 [2024-06-07 16:39:12.230831] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.454 [2024-06-07 16:39:12.230839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.454 qpair failed and we were unable to recover it. 00:30:45.454 [2024-06-07 16:39:12.231066] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.454 [2024-06-07 16:39:12.231073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.454 qpair failed and we were unable to recover it. 
00:30:45.454 [2024-06-07 16:39:12.231444] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.454 [2024-06-07 16:39:12.231452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.454 qpair failed and we were unable to recover it. 00:30:45.454 [2024-06-07 16:39:12.231794] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.454 [2024-06-07 16:39:12.231802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.454 qpair failed and we were unable to recover it. 00:30:45.454 [2024-06-07 16:39:12.232169] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.454 [2024-06-07 16:39:12.232177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.454 qpair failed and we were unable to recover it. 00:30:45.454 [2024-06-07 16:39:12.232531] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.454 [2024-06-07 16:39:12.232539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.454 qpair failed and we were unable to recover it. 00:30:45.454 [2024-06-07 16:39:12.232783] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.454 [2024-06-07 16:39:12.232791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.454 qpair failed and we were unable to recover it. 
00:30:45.454 [2024-06-07 16:39:12.233181] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:45.454 [2024-06-07 16:39:12.233188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:45.454 qpair failed and we were unable to recover it.
00:30:45.454 [2024-06-07 16:39:12.233585] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:45.454 [2024-06-07 16:39:12.233593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:45.454 qpair failed and we were unable to recover it.
00:30:45.454 [2024-06-07 16:39:12.233972] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:45.454 [2024-06-07 16:39:12.233980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:45.454 qpair failed and we were unable to recover it.
00:30:45.454 [2024-06-07 16:39:12.234350] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:45.454 [2024-06-07 16:39:12.234358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:45.454 qpair failed and we were unable to recover it.
00:30:45.454 [2024-06-07 16:39:12.234699] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:45.454 [2024-06-07 16:39:12.234708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:45.454 qpair failed and we were unable to recover it.
00:30:45.454 [2024-06-07 16:39:12.235084] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:45.454 [2024-06-07 16:39:12.235092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:45.454 qpair failed and we were unable to recover it.
00:30:45.454 [2024-06-07 16:39:12.235462] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:45.454 [2024-06-07 16:39:12.235471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:45.454 qpair failed and we were unable to recover it.
00:30:45.454 [2024-06-07 16:39:12.235675] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:45.454 [2024-06-07 16:39:12.235683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:45.454 qpair failed and we were unable to recover it.
00:30:45.454 [2024-06-07 16:39:12.236022] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:45.454 [2024-06-07 16:39:12.236029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:45.454 qpair failed and we were unable to recover it.
00:30:45.454 [2024-06-07 16:39:12.236225] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:45.454 [2024-06-07 16:39:12.236233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:45.454 qpair failed and we were unable to recover it.
00:30:45.454 [2024-06-07 16:39:12.236487] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:45.454 [2024-06-07 16:39:12.236495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:45.454 qpair failed and we were unable to recover it.
00:30:45.454 [2024-06-07 16:39:12.236850] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:45.454 [2024-06-07 16:39:12.236859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:45.454 qpair failed and we were unable to recover it.
00:30:45.454 [2024-06-07 16:39:12.237229] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:45.454 [2024-06-07 16:39:12.237237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:45.454 qpair failed and we were unable to recover it.
00:30:45.454 [2024-06-07 16:39:12.237305] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:45.454 [2024-06-07 16:39:12.237311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:45.454 qpair failed and we were unable to recover it.
00:30:45.454 [2024-06-07 16:39:12.237659] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:45.455 [2024-06-07 16:39:12.237667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:45.455 qpair failed and we were unable to recover it.
00:30:45.455 [2024-06-07 16:39:12.238035] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:45.455 [2024-06-07 16:39:12.238043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:45.455 qpair failed and we were unable to recover it.
00:30:45.455 [2024-06-07 16:39:12.238299] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:45.455 [2024-06-07 16:39:12.238306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:45.455 qpair failed and we were unable to recover it.
00:30:45.455 [2024-06-07 16:39:12.238674] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:45.455 EAL: No free 2048 kB hugepages reported on node 1
00:30:45.455 [2024-06-07 16:39:12.238682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:45.455 qpair failed and we were unable to recover it.
00:30:45.455 [2024-06-07 16:39:12.239056] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:45.455 [2024-06-07 16:39:12.239065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:45.455 qpair failed and we were unable to recover it.
00:30:45.455 [2024-06-07 16:39:12.239436] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:45.455 [2024-06-07 16:39:12.239443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:45.455 qpair failed and we were unable to recover it.
00:30:45.455 [2024-06-07 16:39:12.239786] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:45.455 [2024-06-07 16:39:12.239793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:45.455 qpair failed and we were unable to recover it.
00:30:45.455 [2024-06-07 16:39:12.240184] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:45.455 [2024-06-07 16:39:12.240191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:45.455 qpair failed and we were unable to recover it.
00:30:45.455 [2024-06-07 16:39:12.240573] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:45.455 [2024-06-07 16:39:12.240581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:45.455 qpair failed and we were unable to recover it.
00:30:45.455 [2024-06-07 16:39:12.240949] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:45.455 [2024-06-07 16:39:12.240957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:45.455 qpair failed and we were unable to recover it.
00:30:45.455 [2024-06-07 16:39:12.241291] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:45.455 [2024-06-07 16:39:12.241300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:45.455 qpair failed and we were unable to recover it.
00:30:45.455 [2024-06-07 16:39:12.241720] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:45.455 [2024-06-07 16:39:12.241728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:45.455 qpair failed and we were unable to recover it.
00:30:45.455 [2024-06-07 16:39:12.241921] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:45.455 [2024-06-07 16:39:12.241929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:45.455 qpair failed and we were unable to recover it.
00:30:45.455 [2024-06-07 16:39:12.242273] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:45.455 [2024-06-07 16:39:12.242281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:45.455 qpair failed and we were unable to recover it.
00:30:45.455 [2024-06-07 16:39:12.242679] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:45.455 [2024-06-07 16:39:12.242687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:45.455 qpair failed and we were unable to recover it.
00:30:45.455 [2024-06-07 16:39:12.243085] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:45.455 [2024-06-07 16:39:12.243093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:45.455 qpair failed and we were unable to recover it.
00:30:45.455 [2024-06-07 16:39:12.243467] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:45.455 [2024-06-07 16:39:12.243475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:45.455 qpair failed and we were unable to recover it.
00:30:45.455 [2024-06-07 16:39:12.243758] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:45.455 [2024-06-07 16:39:12.243765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:45.455 qpair failed and we were unable to recover it.
00:30:45.455 [2024-06-07 16:39:12.244134] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:45.455 [2024-06-07 16:39:12.244142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:45.455 qpair failed and we were unable to recover it.
00:30:45.455 [2024-06-07 16:39:12.244499] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:45.455 [2024-06-07 16:39:12.244507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:45.455 qpair failed and we were unable to recover it.
00:30:45.455 [2024-06-07 16:39:12.244851] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:45.455 [2024-06-07 16:39:12.244859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:45.455 qpair failed and we were unable to recover it.
00:30:45.455 [2024-06-07 16:39:12.245225] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:45.455 [2024-06-07 16:39:12.245233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:45.455 qpair failed and we were unable to recover it.
00:30:45.455 [2024-06-07 16:39:12.245428] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:45.455 [2024-06-07 16:39:12.245437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:45.455 qpair failed and we were unable to recover it.
00:30:45.455 [2024-06-07 16:39:12.245613] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:45.455 [2024-06-07 16:39:12.245622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:45.455 qpair failed and we were unable to recover it.
00:30:45.455 [2024-06-07 16:39:12.245937] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:45.455 [2024-06-07 16:39:12.245946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:45.455 qpair failed and we were unable to recover it.
00:30:45.455 [2024-06-07 16:39:12.246312] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:45.455 [2024-06-07 16:39:12.246319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:45.455 qpair failed and we were unable to recover it.
00:30:45.455 [2024-06-07 16:39:12.246490] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:45.455 [2024-06-07 16:39:12.246499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:45.455 qpair failed and we were unable to recover it.
00:30:45.455 [2024-06-07 16:39:12.246855] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:45.455 [2024-06-07 16:39:12.246863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:45.455 qpair failed and we were unable to recover it.
00:30:45.455 [2024-06-07 16:39:12.247230] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:45.455 [2024-06-07 16:39:12.247238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:45.455 qpair failed and we were unable to recover it.
00:30:45.455 [2024-06-07 16:39:12.247621] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:45.455 [2024-06-07 16:39:12.247629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:45.455 qpair failed and we were unable to recover it.
00:30:45.455 [2024-06-07 16:39:12.247996] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:45.455 [2024-06-07 16:39:12.248004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:45.455 qpair failed and we were unable to recover it.
00:30:45.455 [2024-06-07 16:39:12.248390] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:45.455 [2024-06-07 16:39:12.248398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:45.455 qpair failed and we were unable to recover it.
00:30:45.455 [2024-06-07 16:39:12.248558] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:45.455 [2024-06-07 16:39:12.248567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:45.455 qpair failed and we were unable to recover it.
00:30:45.455 [2024-06-07 16:39:12.248883] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:45.455 [2024-06-07 16:39:12.248890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:45.455 qpair failed and we were unable to recover it.
00:30:45.455 [2024-06-07 16:39:12.249257] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:45.455 [2024-06-07 16:39:12.249265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:45.455 qpair failed and we were unable to recover it.
00:30:45.455 [2024-06-07 16:39:12.249632] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:45.455 [2024-06-07 16:39:12.249640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:45.455 qpair failed and we were unable to recover it.
00:30:45.456 [2024-06-07 16:39:12.250035] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:45.456 [2024-06-07 16:39:12.250043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:45.456 qpair failed and we were unable to recover it.
00:30:45.456 [2024-06-07 16:39:12.250410] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:45.456 [2024-06-07 16:39:12.250418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:45.456 qpair failed and we were unable to recover it.
00:30:45.456 [2024-06-07 16:39:12.250795] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:45.456 [2024-06-07 16:39:12.250804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:45.456 qpair failed and we were unable to recover it.
00:30:45.456 [2024-06-07 16:39:12.251193] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:45.456 [2024-06-07 16:39:12.251201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:45.456 qpair failed and we were unable to recover it.
00:30:45.456 [2024-06-07 16:39:12.251567] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:45.456 [2024-06-07 16:39:12.251575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:45.456 qpair failed and we were unable to recover it.
00:30:45.456 [2024-06-07 16:39:12.251937] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:45.456 [2024-06-07 16:39:12.251945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:45.456 qpair failed and we were unable to recover it.
00:30:45.456 [2024-06-07 16:39:12.252301] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:45.456 [2024-06-07 16:39:12.252309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:45.456 qpair failed and we were unable to recover it.
00:30:45.456 [2024-06-07 16:39:12.252681] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:45.456 [2024-06-07 16:39:12.252689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:45.456 qpair failed and we were unable to recover it.
00:30:45.456 [2024-06-07 16:39:12.253060] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:45.456 [2024-06-07 16:39:12.253068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:45.456 qpair failed and we were unable to recover it.
00:30:45.456 [2024-06-07 16:39:12.253444] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:45.456 [2024-06-07 16:39:12.253452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:45.456 qpair failed and we were unable to recover it.
00:30:45.456 [2024-06-07 16:39:12.253727] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:45.456 [2024-06-07 16:39:12.253735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:45.456 qpair failed and we were unable to recover it.
00:30:45.456 [2024-06-07 16:39:12.254153] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:45.456 [2024-06-07 16:39:12.254161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:45.456 qpair failed and we were unable to recover it.
00:30:45.456 [2024-06-07 16:39:12.254534] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:45.456 [2024-06-07 16:39:12.254542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:45.456 qpair failed and we were unable to recover it.
00:30:45.456 [2024-06-07 16:39:12.254923] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:45.456 [2024-06-07 16:39:12.254931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:45.456 qpair failed and we were unable to recover it.
00:30:45.456 [2024-06-07 16:39:12.255208] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:45.456 [2024-06-07 16:39:12.255217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:45.456 qpair failed and we were unable to recover it.
00:30:45.456 [2024-06-07 16:39:12.255592] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:45.456 [2024-06-07 16:39:12.255600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:45.456 qpair failed and we were unable to recover it.
00:30:45.456 [2024-06-07 16:39:12.255979] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:45.456 [2024-06-07 16:39:12.255987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:45.456 qpair failed and we were unable to recover it.
00:30:45.456 [2024-06-07 16:39:12.256359] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:45.456 [2024-06-07 16:39:12.256367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:45.456 qpair failed and we were unable to recover it.
00:30:45.456 [2024-06-07 16:39:12.256740] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:45.456 [2024-06-07 16:39:12.256748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:45.456 qpair failed and we were unable to recover it.
00:30:45.456 [2024-06-07 16:39:12.256952] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:45.456 [2024-06-07 16:39:12.256961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:45.456 qpair failed and we were unable to recover it.
00:30:45.456 [2024-06-07 16:39:12.257142] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:45.456 [2024-06-07 16:39:12.257150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:45.456 qpair failed and we were unable to recover it.
00:30:45.456 [2024-06-07 16:39:12.257536] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:45.456 [2024-06-07 16:39:12.257544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:45.456 qpair failed and we were unable to recover it.
00:30:45.456 [2024-06-07 16:39:12.257917] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:45.456 [2024-06-07 16:39:12.257925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:45.456 qpair failed and we were unable to recover it.
00:30:45.456 [2024-06-07 16:39:12.258319] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:45.456 [2024-06-07 16:39:12.258326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:45.456 qpair failed and we were unable to recover it.
00:30:45.456 [2024-06-07 16:39:12.258686] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:45.456 [2024-06-07 16:39:12.258694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:45.456 qpair failed and we were unable to recover it.
00:30:45.456 [2024-06-07 16:39:12.259064] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:45.456 [2024-06-07 16:39:12.259073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:45.456 qpair failed and we were unable to recover it.
00:30:45.456 [2024-06-07 16:39:12.259445] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:45.456 [2024-06-07 16:39:12.259453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:45.456 qpair failed and we were unable to recover it.
00:30:45.456 [2024-06-07 16:39:12.259850] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:45.456 [2024-06-07 16:39:12.259858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:45.456 qpair failed and we were unable to recover it.
00:30:45.456 [2024-06-07 16:39:12.260238] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:45.456 [2024-06-07 16:39:12.260246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:45.456 qpair failed and we were unable to recover it.
00:30:45.456 [2024-06-07 16:39:12.260619] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:45.456 [2024-06-07 16:39:12.260627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:45.456 qpair failed and we were unable to recover it.
00:30:45.456 [2024-06-07 16:39:12.260863] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:45.456 [2024-06-07 16:39:12.260870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:45.456 qpair failed and we were unable to recover it.
00:30:45.456 [2024-06-07 16:39:12.261262] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:45.456 [2024-06-07 16:39:12.261269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:45.456 qpair failed and we were unable to recover it.
00:30:45.456 [2024-06-07 16:39:12.261680] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:45.456 [2024-06-07 16:39:12.261688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:45.456 qpair failed and we were unable to recover it.
00:30:45.456 [2024-06-07 16:39:12.261995] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:45.456 [2024-06-07 16:39:12.262003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:45.456 qpair failed and we were unable to recover it.
00:30:45.456 [2024-06-07 16:39:12.262375] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:45.456 [2024-06-07 16:39:12.262383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:45.456 qpair failed and we were unable to recover it.
00:30:45.456 [2024-06-07 16:39:12.262580] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:45.456 [2024-06-07 16:39:12.262589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:45.456 qpair failed and we were unable to recover it.
00:30:45.456 [2024-06-07 16:39:12.262870] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:45.456 [2024-06-07 16:39:12.262877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:45.456 qpair failed and we were unable to recover it.
00:30:45.456 [2024-06-07 16:39:12.263255] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:45.456 [2024-06-07 16:39:12.263262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:45.456 qpair failed and we were unable to recover it.
00:30:45.457 [2024-06-07 16:39:12.263549] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:45.457 [2024-06-07 16:39:12.263557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:45.457 qpair failed and we were unable to recover it.
00:30:45.457 [2024-06-07 16:39:12.263956] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:45.457 [2024-06-07 16:39:12.263964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:45.457 qpair failed and we were unable to recover it.
00:30:45.457 [2024-06-07 16:39:12.264281] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:45.457 [2024-06-07 16:39:12.264288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:45.457 qpair failed and we were unable to recover it.
00:30:45.457 [2024-06-07 16:39:12.264562] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:45.457 [2024-06-07 16:39:12.264570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:45.457 qpair failed and we were unable to recover it.
00:30:45.457 [2024-06-07 16:39:12.264956] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:45.457 [2024-06-07 16:39:12.264964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:45.457 qpair failed and we were unable to recover it.
00:30:45.457 [2024-06-07 16:39:12.265224] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:45.457 [2024-06-07 16:39:12.265231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:45.457 qpair failed and we were unable to recover it.
00:30:45.457 [2024-06-07 16:39:12.265604] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:45.457 [2024-06-07 16:39:12.265612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:45.457 qpair failed and we were unable to recover it.
00:30:45.457 [2024-06-07 16:39:12.265827] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.457 [2024-06-07 16:39:12.265834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.457 qpair failed and we were unable to recover it. 00:30:45.457 [2024-06-07 16:39:12.266196] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.457 [2024-06-07 16:39:12.266205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.457 qpair failed and we were unable to recover it. 00:30:45.457 [2024-06-07 16:39:12.266580] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.457 [2024-06-07 16:39:12.266587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.457 qpair failed and we were unable to recover it. 00:30:45.457 [2024-06-07 16:39:12.266814] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.457 [2024-06-07 16:39:12.266822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.457 qpair failed and we were unable to recover it. 00:30:45.457 [2024-06-07 16:39:12.267172] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.457 [2024-06-07 16:39:12.267180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.457 qpair failed and we were unable to recover it. 
00:30:45.457 [2024-06-07 16:39:12.267553] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.457 [2024-06-07 16:39:12.267561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.457 qpair failed and we were unable to recover it. 00:30:45.457 [2024-06-07 16:39:12.267921] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.457 [2024-06-07 16:39:12.267928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.457 qpair failed and we were unable to recover it. 00:30:45.457 [2024-06-07 16:39:12.268375] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.457 [2024-06-07 16:39:12.268383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.457 qpair failed and we were unable to recover it. 00:30:45.457 [2024-06-07 16:39:12.268758] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.457 [2024-06-07 16:39:12.268766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.457 qpair failed and we were unable to recover it. 00:30:45.457 [2024-06-07 16:39:12.269136] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.457 [2024-06-07 16:39:12.269145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.457 qpair failed and we were unable to recover it. 
00:30:45.457 [2024-06-07 16:39:12.269539] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.457 [2024-06-07 16:39:12.269547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.457 qpair failed and we were unable to recover it. 00:30:45.457 [2024-06-07 16:39:12.269706] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.457 [2024-06-07 16:39:12.269715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.457 qpair failed and we were unable to recover it. 00:30:45.457 [2024-06-07 16:39:12.270019] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.457 [2024-06-07 16:39:12.270027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.457 qpair failed and we were unable to recover it. 00:30:45.457 [2024-06-07 16:39:12.270233] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.457 [2024-06-07 16:39:12.270241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.457 qpair failed and we were unable to recover it. 00:30:45.457 [2024-06-07 16:39:12.270622] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.457 [2024-06-07 16:39:12.270630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.457 qpair failed and we were unable to recover it. 
00:30:45.457 [2024-06-07 16:39:12.270783] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.457 [2024-06-07 16:39:12.270791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.457 qpair failed and we were unable to recover it. 00:30:45.457 [2024-06-07 16:39:12.270996] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.457 [2024-06-07 16:39:12.271003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.457 qpair failed and we were unable to recover it. 00:30:45.457 [2024-06-07 16:39:12.271330] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.457 [2024-06-07 16:39:12.271337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.457 qpair failed and we were unable to recover it. 00:30:45.457 [2024-06-07 16:39:12.271710] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.457 [2024-06-07 16:39:12.271719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.457 qpair failed and we were unable to recover it. 00:30:45.457 [2024-06-07 16:39:12.272104] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.457 [2024-06-07 16:39:12.272113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.457 qpair failed and we were unable to recover it. 
00:30:45.457 [2024-06-07 16:39:12.272480] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.457 [2024-06-07 16:39:12.272488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.457 qpair failed and we were unable to recover it. 00:30:45.457 [2024-06-07 16:39:12.272863] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.457 [2024-06-07 16:39:12.272871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.457 qpair failed and we were unable to recover it. 00:30:45.457 [2024-06-07 16:39:12.273257] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.457 [2024-06-07 16:39:12.273265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.458 qpair failed and we were unable to recover it. 00:30:45.458 [2024-06-07 16:39:12.273487] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.458 [2024-06-07 16:39:12.273495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.458 qpair failed and we were unable to recover it. 00:30:45.458 [2024-06-07 16:39:12.273833] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.458 [2024-06-07 16:39:12.273840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.458 qpair failed and we were unable to recover it. 
00:30:45.458 [2024-06-07 16:39:12.274115] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.458 [2024-06-07 16:39:12.274123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.458 qpair failed and we were unable to recover it. 00:30:45.458 [2024-06-07 16:39:12.274503] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.458 [2024-06-07 16:39:12.274511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.458 qpair failed and we were unable to recover it. 00:30:45.458 [2024-06-07 16:39:12.274878] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.458 [2024-06-07 16:39:12.274886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.458 qpair failed and we were unable to recover it. 00:30:45.730 [2024-06-07 16:39:12.275253] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.730 [2024-06-07 16:39:12.275262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.730 qpair failed and we were unable to recover it. 00:30:45.730 [2024-06-07 16:39:12.275628] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.730 [2024-06-07 16:39:12.275636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.730 qpair failed and we were unable to recover it. 
00:30:45.730 [2024-06-07 16:39:12.276024] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.731 [2024-06-07 16:39:12.276034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.731 qpair failed and we were unable to recover it. 00:30:45.731 [2024-06-07 16:39:12.276405] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.731 [2024-06-07 16:39:12.276414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.731 qpair failed and we were unable to recover it. 00:30:45.731 [2024-06-07 16:39:12.276783] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.731 [2024-06-07 16:39:12.276792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.731 qpair failed and we were unable to recover it. 00:30:45.731 [2024-06-07 16:39:12.277242] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.731 [2024-06-07 16:39:12.277250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.731 qpair failed and we were unable to recover it. 00:30:45.731 [2024-06-07 16:39:12.277639] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.731 [2024-06-07 16:39:12.277648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.731 qpair failed and we were unable to recover it. 
00:30:45.731 [2024-06-07 16:39:12.278017] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.731 [2024-06-07 16:39:12.278025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.731 qpair failed and we were unable to recover it. 00:30:45.731 [2024-06-07 16:39:12.278395] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.731 [2024-06-07 16:39:12.278405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.731 qpair failed and we were unable to recover it. 00:30:45.731 [2024-06-07 16:39:12.278853] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.731 [2024-06-07 16:39:12.278881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.731 qpair failed and we were unable to recover it. 00:30:45.731 [2024-06-07 16:39:12.279273] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.731 [2024-06-07 16:39:12.279282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.731 qpair failed and we were unable to recover it. 00:30:45.731 [2024-06-07 16:39:12.279842] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.731 [2024-06-07 16:39:12.279871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.731 qpair failed and we were unable to recover it. 
00:30:45.731 [2024-06-07 16:39:12.280251] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.731 [2024-06-07 16:39:12.280261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.731 qpair failed and we were unable to recover it. 00:30:45.731 [2024-06-07 16:39:12.280737] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.731 [2024-06-07 16:39:12.280766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.731 qpair failed and we were unable to recover it. 00:30:45.731 [2024-06-07 16:39:12.281117] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.731 [2024-06-07 16:39:12.281127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.731 qpair failed and we were unable to recover it. 00:30:45.731 [2024-06-07 16:39:12.281591] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.731 [2024-06-07 16:39:12.281620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.731 qpair failed and we were unable to recover it. 00:30:45.731 [2024-06-07 16:39:12.281997] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.731 [2024-06-07 16:39:12.282007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.731 qpair failed and we were unable to recover it. 
00:30:45.731 [2024-06-07 16:39:12.282378] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.731 [2024-06-07 16:39:12.282386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.731 qpair failed and we were unable to recover it. 00:30:45.731 [2024-06-07 16:39:12.282725] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.731 [2024-06-07 16:39:12.282734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.731 qpair failed and we were unable to recover it. 00:30:45.731 [2024-06-07 16:39:12.283109] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.731 [2024-06-07 16:39:12.283117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.731 qpair failed and we were unable to recover it. 00:30:45.731 [2024-06-07 16:39:12.283619] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.731 [2024-06-07 16:39:12.283648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.731 qpair failed and we were unable to recover it. 00:30:45.731 [2024-06-07 16:39:12.284027] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.731 [2024-06-07 16:39:12.284040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.731 qpair failed and we were unable to recover it. 
00:30:45.731 [2024-06-07 16:39:12.284432] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.731 [2024-06-07 16:39:12.284441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.731 qpair failed and we were unable to recover it. 00:30:45.731 [2024-06-07 16:39:12.284832] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.731 [2024-06-07 16:39:12.284841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.731 qpair failed and we were unable to recover it. 00:30:45.731 [2024-06-07 16:39:12.285080] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.731 [2024-06-07 16:39:12.285087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.731 qpair failed and we were unable to recover it. 00:30:45.731 [2024-06-07 16:39:12.285465] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.731 [2024-06-07 16:39:12.285473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.731 qpair failed and we were unable to recover it. 00:30:45.731 [2024-06-07 16:39:12.285714] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.731 [2024-06-07 16:39:12.285722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.731 qpair failed and we were unable to recover it. 
00:30:45.731 [2024-06-07 16:39:12.286081] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.731 [2024-06-07 16:39:12.286089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.731 qpair failed and we were unable to recover it. 00:30:45.731 [2024-06-07 16:39:12.286456] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.731 [2024-06-07 16:39:12.286464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.731 qpair failed and we were unable to recover it. 00:30:45.731 [2024-06-07 16:39:12.286785] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.731 [2024-06-07 16:39:12.286793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.731 qpair failed and we were unable to recover it. 00:30:45.731 [2024-06-07 16:39:12.287173] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.731 [2024-06-07 16:39:12.287180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.731 qpair failed and we were unable to recover it. 00:30:45.731 [2024-06-07 16:39:12.287553] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.731 [2024-06-07 16:39:12.287560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.731 qpair failed and we were unable to recover it. 
00:30:45.731 [2024-06-07 16:39:12.287939] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.731 [2024-06-07 16:39:12.287947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.731 qpair failed and we were unable to recover it. 00:30:45.731 [2024-06-07 16:39:12.288303] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.731 [2024-06-07 16:39:12.288310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.731 qpair failed and we were unable to recover it. 00:30:45.731 [2024-06-07 16:39:12.288510] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.731 [2024-06-07 16:39:12.288518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.731 qpair failed and we were unable to recover it. 00:30:45.731 [2024-06-07 16:39:12.288914] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.731 [2024-06-07 16:39:12.288922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.731 qpair failed and we were unable to recover it. 00:30:45.731 [2024-06-07 16:39:12.289289] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.731 [2024-06-07 16:39:12.289297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.731 qpair failed and we were unable to recover it. 
00:30:45.731 [2024-06-07 16:39:12.289513] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.731 [2024-06-07 16:39:12.289521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.732 qpair failed and we were unable to recover it. 00:30:45.732 [2024-06-07 16:39:12.289762] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.732 [2024-06-07 16:39:12.289770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.732 qpair failed and we were unable to recover it. 00:30:45.732 [2024-06-07 16:39:12.290035] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.732 [2024-06-07 16:39:12.290042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.732 qpair failed and we were unable to recover it. 00:30:45.732 [2024-06-07 16:39:12.290425] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.732 [2024-06-07 16:39:12.290433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.732 qpair failed and we were unable to recover it. 00:30:45.732 [2024-06-07 16:39:12.290774] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.732 [2024-06-07 16:39:12.290782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.732 qpair failed and we were unable to recover it. 
00:30:45.732 [2024-06-07 16:39:12.290981] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.732 [2024-06-07 16:39:12.290989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.732 qpair failed and we were unable to recover it. 00:30:45.732 [2024-06-07 16:39:12.291327] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.732 [2024-06-07 16:39:12.291335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.732 qpair failed and we were unable to recover it. 00:30:45.732 [2024-06-07 16:39:12.291704] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.732 [2024-06-07 16:39:12.291713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.732 qpair failed and we were unable to recover it. 00:30:45.732 [2024-06-07 16:39:12.292080] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.732 [2024-06-07 16:39:12.292088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.732 qpair failed and we were unable to recover it. 00:30:45.732 [2024-06-07 16:39:12.292481] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.732 [2024-06-07 16:39:12.292489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.732 qpair failed and we were unable to recover it. 
00:30:45.732 [2024-06-07 16:39:12.292865] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.732 [2024-06-07 16:39:12.292872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.732 qpair failed and we were unable to recover it. 00:30:45.732 [2024-06-07 16:39:12.293244] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.732 [2024-06-07 16:39:12.293252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.732 qpair failed and we were unable to recover it. 00:30:45.732 [2024-06-07 16:39:12.293648] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.732 [2024-06-07 16:39:12.293656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.732 qpair failed and we were unable to recover it. 00:30:45.732 [2024-06-07 16:39:12.293694] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:45.732 [2024-06-07 16:39:12.294055] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.732 [2024-06-07 16:39:12.294063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.732 qpair failed and we were unable to recover it. 00:30:45.732 [2024-06-07 16:39:12.294435] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.732 [2024-06-07 16:39:12.294442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.732 qpair failed and we were unable to recover it. 
00:30:45.732 [2024-06-07 16:39:12.294795] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.732 [2024-06-07 16:39:12.294802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.732 qpair failed and we were unable to recover it. 00:30:45.732 [2024-06-07 16:39:12.295148] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.732 [2024-06-07 16:39:12.295157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.732 qpair failed and we were unable to recover it. 00:30:45.732 [2024-06-07 16:39:12.295548] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.732 [2024-06-07 16:39:12.295557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.732 qpair failed and we were unable to recover it. 00:30:45.732 [2024-06-07 16:39:12.295938] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.732 [2024-06-07 16:39:12.295946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.732 qpair failed and we were unable to recover it. 00:30:45.732 [2024-06-07 16:39:12.296350] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.732 [2024-06-07 16:39:12.296358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.732 qpair failed and we were unable to recover it. 
00:30:45.732 [2024-06-07 16:39:12.296717] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.732 [2024-06-07 16:39:12.296725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.732 qpair failed and we were unable to recover it. 00:30:45.732 [2024-06-07 16:39:12.297077] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.732 [2024-06-07 16:39:12.297085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.732 qpair failed and we were unable to recover it. 00:30:45.732 [2024-06-07 16:39:12.297448] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.732 [2024-06-07 16:39:12.297456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.732 qpair failed and we were unable to recover it. 00:30:45.732 [2024-06-07 16:39:12.297661] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.732 [2024-06-07 16:39:12.297670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.732 qpair failed and we were unable to recover it. 00:30:45.732 [2024-06-07 16:39:12.298033] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.732 [2024-06-07 16:39:12.298042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.732 qpair failed and we were unable to recover it. 
00:30:45.735 [2024-06-07 16:39:12.337226] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.735 [2024-06-07 16:39:12.337234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.735 qpair failed and we were unable to recover it. 00:30:45.735 [2024-06-07 16:39:12.337615] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.735 [2024-06-07 16:39:12.337623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.735 qpair failed and we were unable to recover it. 00:30:45.735 [2024-06-07 16:39:12.338011] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.735 [2024-06-07 16:39:12.338018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.735 qpair failed and we were unable to recover it. 00:30:45.735 [2024-06-07 16:39:12.338390] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.735 [2024-06-07 16:39:12.338399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.735 qpair failed and we were unable to recover it. 00:30:45.735 [2024-06-07 16:39:12.338848] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.735 [2024-06-07 16:39:12.338856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.735 qpair failed and we were unable to recover it. 
00:30:45.735 [2024-06-07 16:39:12.339221] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.735 [2024-06-07 16:39:12.339229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.735 qpair failed and we were unable to recover it. 00:30:45.735 [2024-06-07 16:39:12.339709] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.735 [2024-06-07 16:39:12.339737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.735 qpair failed and we were unable to recover it. 00:30:45.735 [2024-06-07 16:39:12.340093] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.735 [2024-06-07 16:39:12.340103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.735 qpair failed and we were unable to recover it. 00:30:45.735 [2024-06-07 16:39:12.340328] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.735 [2024-06-07 16:39:12.340336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.735 qpair failed and we were unable to recover it. 00:30:45.735 [2024-06-07 16:39:12.340524] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.736 [2024-06-07 16:39:12.340535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.736 qpair failed and we were unable to recover it. 
00:30:45.736 [2024-06-07 16:39:12.340896] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.736 [2024-06-07 16:39:12.340904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.736 qpair failed and we were unable to recover it. 00:30:45.736 [2024-06-07 16:39:12.341280] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.736 [2024-06-07 16:39:12.341288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.736 qpair failed and we were unable to recover it. 00:30:45.736 [2024-06-07 16:39:12.341660] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.736 [2024-06-07 16:39:12.341668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.736 qpair failed and we were unable to recover it. 00:30:45.736 [2024-06-07 16:39:12.342031] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.736 [2024-06-07 16:39:12.342038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.736 qpair failed and we were unable to recover it. 00:30:45.736 [2024-06-07 16:39:12.342386] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.736 [2024-06-07 16:39:12.342394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.736 qpair failed and we were unable to recover it. 
00:30:45.736 [2024-06-07 16:39:12.342764] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.736 [2024-06-07 16:39:12.342772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.736 qpair failed and we were unable to recover it. 00:30:45.736 [2024-06-07 16:39:12.343136] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.736 [2024-06-07 16:39:12.343143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.736 qpair failed and we were unable to recover it. 00:30:45.736 [2024-06-07 16:39:12.343512] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.736 [2024-06-07 16:39:12.343520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.736 qpair failed and we were unable to recover it. 00:30:45.736 [2024-06-07 16:39:12.343903] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.736 [2024-06-07 16:39:12.343911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.736 qpair failed and we were unable to recover it. 00:30:45.736 [2024-06-07 16:39:12.344270] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.736 [2024-06-07 16:39:12.344279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.736 qpair failed and we were unable to recover it. 
00:30:45.736 [2024-06-07 16:39:12.344660] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.736 [2024-06-07 16:39:12.344667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.736 qpair failed and we were unable to recover it. 00:30:45.736 [2024-06-07 16:39:12.345028] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.736 [2024-06-07 16:39:12.345035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.736 qpair failed and we were unable to recover it. 00:30:45.736 [2024-06-07 16:39:12.345294] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.736 [2024-06-07 16:39:12.345302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.736 qpair failed and we were unable to recover it. 00:30:45.736 [2024-06-07 16:39:12.345506] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.736 [2024-06-07 16:39:12.345517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.736 qpair failed and we were unable to recover it. 00:30:45.736 [2024-06-07 16:39:12.345889] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.736 [2024-06-07 16:39:12.345897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.736 qpair failed and we were unable to recover it. 
00:30:45.736 [2024-06-07 16:39:12.346261] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.736 [2024-06-07 16:39:12.346269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.736 qpair failed and we were unable to recover it. 00:30:45.736 [2024-06-07 16:39:12.346627] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.736 [2024-06-07 16:39:12.346635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.736 qpair failed and we were unable to recover it. 00:30:45.736 [2024-06-07 16:39:12.346879] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.736 [2024-06-07 16:39:12.346887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.736 qpair failed and we were unable to recover it. 00:30:45.736 [2024-06-07 16:39:12.347165] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.736 [2024-06-07 16:39:12.347173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.736 qpair failed and we were unable to recover it. 00:30:45.736 [2024-06-07 16:39:12.347530] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.736 [2024-06-07 16:39:12.347538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.736 qpair failed and we were unable to recover it. 
00:30:45.736 [2024-06-07 16:39:12.347913] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.736 [2024-06-07 16:39:12.347921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.736 qpair failed and we were unable to recover it. 00:30:45.736 [2024-06-07 16:39:12.348288] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.736 [2024-06-07 16:39:12.348295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.736 qpair failed and we were unable to recover it. 00:30:45.736 [2024-06-07 16:39:12.348529] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.736 [2024-06-07 16:39:12.348536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.736 qpair failed and we were unable to recover it. 00:30:45.736 [2024-06-07 16:39:12.348924] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.736 [2024-06-07 16:39:12.348932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.736 qpair failed and we were unable to recover it. 00:30:45.736 [2024-06-07 16:39:12.349351] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.736 [2024-06-07 16:39:12.349359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.736 qpair failed and we were unable to recover it. 
00:30:45.736 [2024-06-07 16:39:12.349723] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.736 [2024-06-07 16:39:12.349731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.736 qpair failed and we were unable to recover it. 00:30:45.736 [2024-06-07 16:39:12.350096] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.736 [2024-06-07 16:39:12.350105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.736 qpair failed and we were unable to recover it. 00:30:45.736 [2024-06-07 16:39:12.350469] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.736 [2024-06-07 16:39:12.350477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.736 qpair failed and we were unable to recover it. 00:30:45.736 [2024-06-07 16:39:12.350831] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.736 [2024-06-07 16:39:12.350839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.736 qpair failed and we were unable to recover it. 00:30:45.736 [2024-06-07 16:39:12.351205] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.736 [2024-06-07 16:39:12.351212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.736 qpair failed and we were unable to recover it. 
00:30:45.736 [2024-06-07 16:39:12.351576] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.736 [2024-06-07 16:39:12.351584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.736 qpair failed and we were unable to recover it. 00:30:45.736 [2024-06-07 16:39:12.352028] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.736 [2024-06-07 16:39:12.352035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.736 qpair failed and we were unable to recover it. 00:30:45.736 [2024-06-07 16:39:12.352379] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.736 [2024-06-07 16:39:12.352387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.736 qpair failed and we were unable to recover it. 00:30:45.736 [2024-06-07 16:39:12.352779] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.736 [2024-06-07 16:39:12.352787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.736 qpair failed and we were unable to recover it. 00:30:45.736 [2024-06-07 16:39:12.353149] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.736 [2024-06-07 16:39:12.353157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.736 qpair failed and we were unable to recover it. 
00:30:45.736 [2024-06-07 16:39:12.353521] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.736 [2024-06-07 16:39:12.353530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.736 qpair failed and we were unable to recover it. 00:30:45.736 [2024-06-07 16:39:12.353913] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.736 [2024-06-07 16:39:12.353922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.736 qpair failed and we were unable to recover it. 00:30:45.736 [2024-06-07 16:39:12.354291] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.736 [2024-06-07 16:39:12.354299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.737 qpair failed and we were unable to recover it. 00:30:45.737 [2024-06-07 16:39:12.354689] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.737 [2024-06-07 16:39:12.354697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.737 qpair failed and we were unable to recover it. 00:30:45.737 [2024-06-07 16:39:12.355066] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.737 [2024-06-07 16:39:12.355075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.737 qpair failed and we were unable to recover it. 
00:30:45.737 [2024-06-07 16:39:12.355418] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.737 [2024-06-07 16:39:12.355426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.737 qpair failed and we were unable to recover it. 00:30:45.737 [2024-06-07 16:39:12.355801] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.737 [2024-06-07 16:39:12.355808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.737 qpair failed and we were unable to recover it. 00:30:45.737 [2024-06-07 16:39:12.356178] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.737 [2024-06-07 16:39:12.356186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.737 qpair failed and we were unable to recover it. 00:30:45.737 [2024-06-07 16:39:12.356575] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.737 [2024-06-07 16:39:12.356582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.737 qpair failed and we were unable to recover it. 00:30:45.737 [2024-06-07 16:39:12.356985] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.737 [2024-06-07 16:39:12.356993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.737 qpair failed and we were unable to recover it. 
00:30:45.737 [2024-06-07 16:39:12.357353] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.737 [2024-06-07 16:39:12.357361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.737 qpair failed and we were unable to recover it. 00:30:45.737 [2024-06-07 16:39:12.357716] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.737 [2024-06-07 16:39:12.357724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.737 qpair failed and we were unable to recover it. 00:30:45.737 [2024-06-07 16:39:12.358079] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.737 [2024-06-07 16:39:12.358087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.737 qpair failed and we were unable to recover it. 00:30:45.737 [2024-06-07 16:39:12.358357] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.737 [2024-06-07 16:39:12.358365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.737 qpair failed and we were unable to recover it. 00:30:45.737 [2024-06-07 16:39:12.358605] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.737 [2024-06-07 16:39:12.358613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.737 qpair failed and we were unable to recover it. 
00:30:45.737 [2024-06-07 16:39:12.358983] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:45.737 [2024-06-07 16:39:12.358991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:45.737 qpair failed and we were unable to recover it.
00:30:45.737 [2024-06-07 16:39:12.359358] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:45.737 [2024-06-07 16:39:12.359365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:45.737 qpair failed and we were unable to recover it.
00:30:45.737 [2024-06-07 16:39:12.359458] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:30:45.737 [2024-06-07 16:39:12.359483] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:30:45.737 [2024-06-07 16:39:12.359494] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:30:45.737 [2024-06-07 16:39:12.359500] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running.
00:30:45.737 [2024-06-07 16:39:12.359506] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:30:45.737 [2024-06-07 16:39:12.359650] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 5
00:30:45.737 [2024-06-07 16:39:12.359755] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:45.737 [2024-06-07 16:39:12.359770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:45.737 qpair failed and we were unable to recover it.
00:30:45.737 [2024-06-07 16:39:12.359801] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 6
00:30:45.737 [2024-06-07 16:39:12.359943] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 4
00:30:45.737 [2024-06-07 16:39:12.359944] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 7
00:30:45.737 [2024-06-07 16:39:12.360226] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:45.737 [2024-06-07 16:39:12.360234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:45.737 qpair failed and we were unable to recover it.
00:30:45.737 [2024-06-07 16:39:12.360604] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:45.737 [2024-06-07 16:39:12.360612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:45.737 qpair failed and we were unable to recover it.
00:30:45.737 [2024-06-07 16:39:12.360802] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:45.737 [2024-06-07 16:39:12.360811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:45.737 qpair failed and we were unable to recover it.
00:30:45.737 [2024-06-07 16:39:12.361176] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:45.737 [2024-06-07 16:39:12.361184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:45.737 qpair failed and we were unable to recover it.
00:30:45.737 [2024-06-07 16:39:12.361541] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.737 [2024-06-07 16:39:12.361549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.737 qpair failed and we were unable to recover it. 00:30:45.737 [2024-06-07 16:39:12.361921] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.737 [2024-06-07 16:39:12.361929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.737 qpair failed and we were unable to recover it. 00:30:45.737 [2024-06-07 16:39:12.362297] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.737 [2024-06-07 16:39:12.362305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.737 qpair failed and we were unable to recover it. 00:30:45.737 [2024-06-07 16:39:12.362669] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.737 [2024-06-07 16:39:12.362677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.737 qpair failed and we were unable to recover it. 00:30:45.737 [2024-06-07 16:39:12.363049] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.737 [2024-06-07 16:39:12.363056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.737 qpair failed and we were unable to recover it. 
00:30:45.737 [2024-06-07 16:39:12.363496] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.737 [2024-06-07 16:39:12.363504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.737 qpair failed and we were unable to recover it. 00:30:45.737 [2024-06-07 16:39:12.363879] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.737 [2024-06-07 16:39:12.363887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.737 qpair failed and we were unable to recover it. 00:30:45.737 [2024-06-07 16:39:12.364263] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.737 [2024-06-07 16:39:12.364272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.737 qpair failed and we were unable to recover it. 00:30:45.737 [2024-06-07 16:39:12.364539] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.737 [2024-06-07 16:39:12.364547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.737 qpair failed and we were unable to recover it. 00:30:45.737 [2024-06-07 16:39:12.364917] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.737 [2024-06-07 16:39:12.364925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.737 qpair failed and we were unable to recover it. 
00:30:45.740 [2024-06-07 16:39:12.402131] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.740 [2024-06-07 16:39:12.402139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.740 qpair failed and we were unable to recover it. 00:30:45.740 [2024-06-07 16:39:12.402507] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.740 [2024-06-07 16:39:12.402516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.740 qpair failed and we were unable to recover it. 00:30:45.740 [2024-06-07 16:39:12.402966] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.740 [2024-06-07 16:39:12.402974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.740 qpair failed and we were unable to recover it. 00:30:45.740 [2024-06-07 16:39:12.403238] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.741 [2024-06-07 16:39:12.403245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.741 qpair failed and we were unable to recover it. 00:30:45.741 [2024-06-07 16:39:12.403475] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.741 [2024-06-07 16:39:12.403483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.741 qpair failed and we were unable to recover it. 
00:30:45.741 [2024-06-07 16:39:12.403857] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.741 [2024-06-07 16:39:12.403864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.741 qpair failed and we were unable to recover it. 00:30:45.741 [2024-06-07 16:39:12.404101] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.741 [2024-06-07 16:39:12.404109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.741 qpair failed and we were unable to recover it. 00:30:45.741 [2024-06-07 16:39:12.404476] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.741 [2024-06-07 16:39:12.404484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.741 qpair failed and we were unable to recover it. 00:30:45.741 [2024-06-07 16:39:12.404865] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.741 [2024-06-07 16:39:12.404872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.741 qpair failed and we were unable to recover it. 00:30:45.741 [2024-06-07 16:39:12.405243] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.741 [2024-06-07 16:39:12.405251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.741 qpair failed and we were unable to recover it. 
00:30:45.741 [2024-06-07 16:39:12.405629] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.741 [2024-06-07 16:39:12.405637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.741 qpair failed and we were unable to recover it. 00:30:45.741 [2024-06-07 16:39:12.406016] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.741 [2024-06-07 16:39:12.406023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.741 qpair failed and we were unable to recover it. 00:30:45.741 [2024-06-07 16:39:12.406413] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.741 [2024-06-07 16:39:12.406421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.741 qpair failed and we were unable to recover it. 00:30:45.741 [2024-06-07 16:39:12.406688] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.741 [2024-06-07 16:39:12.406696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.741 qpair failed and we were unable to recover it. 00:30:45.741 [2024-06-07 16:39:12.406924] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.741 [2024-06-07 16:39:12.406932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.741 qpair failed and we were unable to recover it. 
00:30:45.741 [2024-06-07 16:39:12.407160] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.741 [2024-06-07 16:39:12.407168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.741 qpair failed and we were unable to recover it. 00:30:45.741 [2024-06-07 16:39:12.407559] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.741 [2024-06-07 16:39:12.407567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.741 qpair failed and we were unable to recover it. 00:30:45.741 [2024-06-07 16:39:12.407978] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.741 [2024-06-07 16:39:12.407986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.741 qpair failed and we were unable to recover it. 00:30:45.741 [2024-06-07 16:39:12.408357] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.741 [2024-06-07 16:39:12.408365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.741 qpair failed and we were unable to recover it. 00:30:45.741 [2024-06-07 16:39:12.408557] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.741 [2024-06-07 16:39:12.408567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.741 qpair failed and we were unable to recover it. 
00:30:45.741 [2024-06-07 16:39:12.408917] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.741 [2024-06-07 16:39:12.408925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.741 qpair failed and we were unable to recover it. 00:30:45.741 [2024-06-07 16:39:12.409288] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.741 [2024-06-07 16:39:12.409296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.741 qpair failed and we were unable to recover it. 00:30:45.741 [2024-06-07 16:39:12.409569] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.741 [2024-06-07 16:39:12.409577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.741 qpair failed and we were unable to recover it. 00:30:45.741 [2024-06-07 16:39:12.409945] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.741 [2024-06-07 16:39:12.409952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.741 qpair failed and we were unable to recover it. 00:30:45.741 [2024-06-07 16:39:12.410113] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.741 [2024-06-07 16:39:12.410121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.741 qpair failed and we were unable to recover it. 
00:30:45.741 [2024-06-07 16:39:12.410505] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.741 [2024-06-07 16:39:12.410513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.741 qpair failed and we were unable to recover it. 00:30:45.741 [2024-06-07 16:39:12.410878] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.741 [2024-06-07 16:39:12.410886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.741 qpair failed and we were unable to recover it. 00:30:45.741 [2024-06-07 16:39:12.411253] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.741 [2024-06-07 16:39:12.411261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.741 qpair failed and we were unable to recover it. 00:30:45.741 [2024-06-07 16:39:12.411649] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.741 [2024-06-07 16:39:12.411657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.741 qpair failed and we were unable to recover it. 00:30:45.741 [2024-06-07 16:39:12.412033] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.741 [2024-06-07 16:39:12.412041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.741 qpair failed and we were unable to recover it. 
00:30:45.741 [2024-06-07 16:39:12.412281] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.741 [2024-06-07 16:39:12.412290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.741 qpair failed and we were unable to recover it. 00:30:45.741 [2024-06-07 16:39:12.412643] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.741 [2024-06-07 16:39:12.412651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.741 qpair failed and we were unable to recover it. 00:30:45.741 [2024-06-07 16:39:12.413035] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.741 [2024-06-07 16:39:12.413043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.741 qpair failed and we were unable to recover it. 00:30:45.741 [2024-06-07 16:39:12.413428] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.741 [2024-06-07 16:39:12.413436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.741 qpair failed and we were unable to recover it. 00:30:45.741 [2024-06-07 16:39:12.413815] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.741 [2024-06-07 16:39:12.413823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.741 qpair failed and we were unable to recover it. 
00:30:45.741 [2024-06-07 16:39:12.414033] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.742 [2024-06-07 16:39:12.414041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.742 qpair failed and we were unable to recover it. 00:30:45.742 [2024-06-07 16:39:12.414316] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.742 [2024-06-07 16:39:12.414323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.742 qpair failed and we were unable to recover it. 00:30:45.742 [2024-06-07 16:39:12.414541] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.742 [2024-06-07 16:39:12.414550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.742 qpair failed and we were unable to recover it. 00:30:45.742 [2024-06-07 16:39:12.414917] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.742 [2024-06-07 16:39:12.414924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.742 qpair failed and we were unable to recover it. 00:30:45.742 [2024-06-07 16:39:12.415092] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.742 [2024-06-07 16:39:12.415100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.742 qpair failed and we were unable to recover it. 
00:30:45.742 [2024-06-07 16:39:12.415492] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.742 [2024-06-07 16:39:12.415500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.742 qpair failed and we were unable to recover it. 00:30:45.742 [2024-06-07 16:39:12.415874] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.742 [2024-06-07 16:39:12.415881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.742 qpair failed and we were unable to recover it. 00:30:45.742 [2024-06-07 16:39:12.416257] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.742 [2024-06-07 16:39:12.416264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.742 qpair failed and we were unable to recover it. 00:30:45.742 [2024-06-07 16:39:12.416628] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.742 [2024-06-07 16:39:12.416636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.742 qpair failed and we were unable to recover it. 00:30:45.742 [2024-06-07 16:39:12.417034] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.742 [2024-06-07 16:39:12.417042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.742 qpair failed and we were unable to recover it. 
00:30:45.742 [2024-06-07 16:39:12.417356] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.742 [2024-06-07 16:39:12.417364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.742 qpair failed and we were unable to recover it. 00:30:45.742 [2024-06-07 16:39:12.417427] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.742 [2024-06-07 16:39:12.417433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.742 qpair failed and we were unable to recover it. 00:30:45.742 [2024-06-07 16:39:12.417737] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.742 [2024-06-07 16:39:12.417745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.742 qpair failed and we were unable to recover it. 00:30:45.742 [2024-06-07 16:39:12.418118] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.742 [2024-06-07 16:39:12.418126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.742 qpair failed and we were unable to recover it. 00:30:45.742 [2024-06-07 16:39:12.418358] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.742 [2024-06-07 16:39:12.418367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.742 qpair failed and we were unable to recover it. 
00:30:45.742 [2024-06-07 16:39:12.418728] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.742 [2024-06-07 16:39:12.418735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.742 qpair failed and we were unable to recover it. 00:30:45.742 [2024-06-07 16:39:12.419097] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.742 [2024-06-07 16:39:12.419105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.742 qpair failed and we were unable to recover it. 00:30:45.742 [2024-06-07 16:39:12.419476] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.742 [2024-06-07 16:39:12.419483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.742 qpair failed and we were unable to recover it. 00:30:45.742 [2024-06-07 16:39:12.419880] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.742 [2024-06-07 16:39:12.419888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.742 qpair failed and we were unable to recover it. 00:30:45.742 [2024-06-07 16:39:12.420262] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.742 [2024-06-07 16:39:12.420270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.742 qpair failed and we were unable to recover it. 
00:30:45.742 [2024-06-07 16:39:12.420642] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.742 [2024-06-07 16:39:12.420650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.742 qpair failed and we were unable to recover it. 00:30:45.742 [2024-06-07 16:39:12.420860] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.742 [2024-06-07 16:39:12.420867] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.742 qpair failed and we were unable to recover it. 00:30:45.742 [2024-06-07 16:39:12.421082] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.742 [2024-06-07 16:39:12.421090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.742 qpair failed and we were unable to recover it. 00:30:45.742 [2024-06-07 16:39:12.421464] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.742 [2024-06-07 16:39:12.421472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.742 qpair failed and we were unable to recover it. 00:30:45.742 [2024-06-07 16:39:12.421859] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.742 [2024-06-07 16:39:12.421869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.742 qpair failed and we were unable to recover it. 
00:30:45.742 [2024-06-07 16:39:12.422234] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.742 [2024-06-07 16:39:12.422242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.742 qpair failed and we were unable to recover it. 00:30:45.742 [2024-06-07 16:39:12.422606] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.742 [2024-06-07 16:39:12.422614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.742 qpair failed and we were unable to recover it. 00:30:45.742 [2024-06-07 16:39:12.422975] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.742 [2024-06-07 16:39:12.422983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.742 qpair failed and we were unable to recover it. 00:30:45.742 [2024-06-07 16:39:12.423353] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.742 [2024-06-07 16:39:12.423360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.742 qpair failed and we were unable to recover it. 00:30:45.742 [2024-06-07 16:39:12.423738] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.742 [2024-06-07 16:39:12.423746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.742 qpair failed and we were unable to recover it. 
00:30:45.742 [2024-06-07 16:39:12.424135] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.742 [2024-06-07 16:39:12.424142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.742 qpair failed and we were unable to recover it. 00:30:45.742 [2024-06-07 16:39:12.424511] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.742 [2024-06-07 16:39:12.424520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.742 qpair failed and we were unable to recover it. 00:30:45.742 [2024-06-07 16:39:12.424908] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.742 [2024-06-07 16:39:12.424916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.742 qpair failed and we were unable to recover it. 00:30:45.742 [2024-06-07 16:39:12.425176] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.742 [2024-06-07 16:39:12.425184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.742 qpair failed and we were unable to recover it. 00:30:45.742 [2024-06-07 16:39:12.425328] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.742 [2024-06-07 16:39:12.425336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.742 qpair failed and we were unable to recover it. 
00:30:45.742 [2024-06-07 16:39:12.425657] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.743 [2024-06-07 16:39:12.425665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.743 qpair failed and we were unable to recover it. 00:30:45.743 [2024-06-07 16:39:12.426075] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.743 [2024-06-07 16:39:12.426083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.743 qpair failed and we were unable to recover it. 00:30:45.743 [2024-06-07 16:39:12.426453] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.743 [2024-06-07 16:39:12.426461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.743 qpair failed and we were unable to recover it. 00:30:45.743 [2024-06-07 16:39:12.426830] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.743 [2024-06-07 16:39:12.426837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.743 qpair failed and we were unable to recover it. 00:30:45.743 [2024-06-07 16:39:12.427198] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.743 [2024-06-07 16:39:12.427206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.743 qpair failed and we were unable to recover it. 
00:30:45.743 [2024-06-07 16:39:12.427449] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.743 [2024-06-07 16:39:12.427457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.743 qpair failed and we were unable to recover it. 00:30:45.743 [2024-06-07 16:39:12.427829] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.743 [2024-06-07 16:39:12.427836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.743 qpair failed and we were unable to recover it. 00:30:45.743 [2024-06-07 16:39:12.428230] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.743 [2024-06-07 16:39:12.428239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.743 qpair failed and we were unable to recover it. 00:30:45.743 [2024-06-07 16:39:12.428741] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.743 [2024-06-07 16:39:12.428772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.743 qpair failed and we were unable to recover it. 00:30:45.743 [2024-06-07 16:39:12.429150] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.743 [2024-06-07 16:39:12.429160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.743 qpair failed and we were unable to recover it. 
00:30:45.746 [2024-06-07 16:39:12.467066] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.746 [2024-06-07 16:39:12.467074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.746 qpair failed and we were unable to recover it. 00:30:45.746 [2024-06-07 16:39:12.467296] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.746 [2024-06-07 16:39:12.467304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.746 qpair failed and we were unable to recover it. 00:30:45.746 [2024-06-07 16:39:12.467677] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.746 [2024-06-07 16:39:12.467686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.746 qpair failed and we were unable to recover it. 00:30:45.746 [2024-06-07 16:39:12.468057] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.746 [2024-06-07 16:39:12.468064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.746 qpair failed and we were unable to recover it. 00:30:45.746 [2024-06-07 16:39:12.468274] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.746 [2024-06-07 16:39:12.468282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.746 qpair failed and we were unable to recover it. 
00:30:45.746 [2024-06-07 16:39:12.468661] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.746 [2024-06-07 16:39:12.468669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.746 qpair failed and we were unable to recover it. 00:30:45.746 [2024-06-07 16:39:12.469058] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.746 [2024-06-07 16:39:12.469066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.746 qpair failed and we were unable to recover it. 00:30:45.746 [2024-06-07 16:39:12.469434] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.746 [2024-06-07 16:39:12.469442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.747 qpair failed and we were unable to recover it. 00:30:45.747 [2024-06-07 16:39:12.469692] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.747 [2024-06-07 16:39:12.469700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.747 qpair failed and we were unable to recover it. 00:30:45.747 [2024-06-07 16:39:12.470066] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.747 [2024-06-07 16:39:12.470074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.747 qpair failed and we were unable to recover it. 
00:30:45.747 [2024-06-07 16:39:12.470426] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.747 [2024-06-07 16:39:12.470434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.747 qpair failed and we were unable to recover it. 00:30:45.747 [2024-06-07 16:39:12.470846] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.747 [2024-06-07 16:39:12.470854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.747 qpair failed and we were unable to recover it. 00:30:45.747 [2024-06-07 16:39:12.471224] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.747 [2024-06-07 16:39:12.471232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.747 qpair failed and we were unable to recover it. 00:30:45.747 [2024-06-07 16:39:12.471602] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.747 [2024-06-07 16:39:12.471610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.747 qpair failed and we were unable to recover it. 00:30:45.747 [2024-06-07 16:39:12.471964] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.747 [2024-06-07 16:39:12.471971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.747 qpair failed and we were unable to recover it. 
00:30:45.747 [2024-06-07 16:39:12.472343] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.747 [2024-06-07 16:39:12.472350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.747 qpair failed and we were unable to recover it. 00:30:45.747 [2024-06-07 16:39:12.472706] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.747 [2024-06-07 16:39:12.472715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.747 qpair failed and we were unable to recover it. 00:30:45.747 [2024-06-07 16:39:12.473081] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.747 [2024-06-07 16:39:12.473089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.747 qpair failed and we were unable to recover it. 00:30:45.747 [2024-06-07 16:39:12.473486] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.747 [2024-06-07 16:39:12.473495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.747 qpair failed and we were unable to recover it. 00:30:45.747 [2024-06-07 16:39:12.473862] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.747 [2024-06-07 16:39:12.473870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.747 qpair failed and we were unable to recover it. 
00:30:45.747 [2024-06-07 16:39:12.474064] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.747 [2024-06-07 16:39:12.474073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.747 qpair failed and we were unable to recover it. 00:30:45.747 [2024-06-07 16:39:12.474448] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.747 [2024-06-07 16:39:12.474456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.747 qpair failed and we were unable to recover it. 00:30:45.747 [2024-06-07 16:39:12.474696] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.747 [2024-06-07 16:39:12.474705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.747 qpair failed and we were unable to recover it. 00:30:45.747 [2024-06-07 16:39:12.475073] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.747 [2024-06-07 16:39:12.475081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.747 qpair failed and we were unable to recover it. 00:30:45.747 [2024-06-07 16:39:12.475449] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.747 [2024-06-07 16:39:12.475457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.747 qpair failed and we were unable to recover it. 
00:30:45.747 [2024-06-07 16:39:12.475692] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.747 [2024-06-07 16:39:12.475700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.747 qpair failed and we were unable to recover it. 00:30:45.747 [2024-06-07 16:39:12.475872] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.747 [2024-06-07 16:39:12.475881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.747 qpair failed and we were unable to recover it. 00:30:45.747 [2024-06-07 16:39:12.476243] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.747 [2024-06-07 16:39:12.476251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.747 qpair failed and we were unable to recover it. 00:30:45.747 [2024-06-07 16:39:12.476611] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.747 [2024-06-07 16:39:12.476619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.747 qpair failed and we were unable to recover it. 00:30:45.747 [2024-06-07 16:39:12.477059] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.747 [2024-06-07 16:39:12.477068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.747 qpair failed and we were unable to recover it. 
00:30:45.747 [2024-06-07 16:39:12.477417] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.747 [2024-06-07 16:39:12.477425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.747 qpair failed and we were unable to recover it. 00:30:45.747 [2024-06-07 16:39:12.477765] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.747 [2024-06-07 16:39:12.477772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.747 qpair failed and we were unable to recover it. 00:30:45.747 [2024-06-07 16:39:12.478142] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.747 [2024-06-07 16:39:12.478151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.747 qpair failed and we were unable to recover it. 00:30:45.747 [2024-06-07 16:39:12.478373] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.747 [2024-06-07 16:39:12.478381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.747 qpair failed and we were unable to recover it. 00:30:45.747 [2024-06-07 16:39:12.478451] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.747 [2024-06-07 16:39:12.478460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.747 qpair failed and we were unable to recover it. 
00:30:45.747 [2024-06-07 16:39:12.478811] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.747 [2024-06-07 16:39:12.478818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.747 qpair failed and we were unable to recover it. 00:30:45.747 [2024-06-07 16:39:12.479244] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.747 [2024-06-07 16:39:12.479252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.747 qpair failed and we were unable to recover it. 00:30:45.747 [2024-06-07 16:39:12.479615] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.747 [2024-06-07 16:39:12.479623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.747 qpair failed and we were unable to recover it. 00:30:45.747 [2024-06-07 16:39:12.479994] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.747 [2024-06-07 16:39:12.480002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.747 qpair failed and we were unable to recover it. 00:30:45.747 [2024-06-07 16:39:12.480348] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.747 [2024-06-07 16:39:12.480355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.747 qpair failed and we were unable to recover it. 
00:30:45.747 [2024-06-07 16:39:12.480800] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.747 [2024-06-07 16:39:12.480808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.747 qpair failed and we were unable to recover it. 00:30:45.747 [2024-06-07 16:39:12.481170] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.747 [2024-06-07 16:39:12.481178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.747 qpair failed and we were unable to recover it. 00:30:45.747 [2024-06-07 16:39:12.481448] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.747 [2024-06-07 16:39:12.481456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.747 qpair failed and we were unable to recover it. 00:30:45.747 [2024-06-07 16:39:12.481820] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.747 [2024-06-07 16:39:12.481828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.747 qpair failed and we were unable to recover it. 00:30:45.747 [2024-06-07 16:39:12.482024] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.747 [2024-06-07 16:39:12.482033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.748 qpair failed and we were unable to recover it. 
00:30:45.748 [2024-06-07 16:39:12.482363] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.748 [2024-06-07 16:39:12.482370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.748 qpair failed and we were unable to recover it. 00:30:45.748 [2024-06-07 16:39:12.482716] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.748 [2024-06-07 16:39:12.482724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.748 qpair failed and we were unable to recover it. 00:30:45.748 [2024-06-07 16:39:12.483102] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.748 [2024-06-07 16:39:12.483110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.748 qpair failed and we were unable to recover it. 00:30:45.748 [2024-06-07 16:39:12.483429] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.748 [2024-06-07 16:39:12.483436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.748 qpair failed and we were unable to recover it. 00:30:45.748 [2024-06-07 16:39:12.483778] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.748 [2024-06-07 16:39:12.483786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.748 qpair failed and we were unable to recover it. 
00:30:45.748 [2024-06-07 16:39:12.484196] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.748 [2024-06-07 16:39:12.484204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.748 qpair failed and we were unable to recover it. 00:30:45.748 [2024-06-07 16:39:12.484598] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.748 [2024-06-07 16:39:12.484606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.748 qpair failed and we were unable to recover it. 00:30:45.748 [2024-06-07 16:39:12.484963] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.748 [2024-06-07 16:39:12.484970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.748 qpair failed and we were unable to recover it. 00:30:45.748 [2024-06-07 16:39:12.485167] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.748 [2024-06-07 16:39:12.485175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.748 qpair failed and we were unable to recover it. 00:30:45.748 [2024-06-07 16:39:12.485496] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.748 [2024-06-07 16:39:12.485504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.748 qpair failed and we were unable to recover it. 
00:30:45.748 [2024-06-07 16:39:12.485904] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.748 [2024-06-07 16:39:12.485912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.748 qpair failed and we were unable to recover it. 00:30:45.748 [2024-06-07 16:39:12.486118] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.748 [2024-06-07 16:39:12.486125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.748 qpair failed and we were unable to recover it. 00:30:45.748 [2024-06-07 16:39:12.486327] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.748 [2024-06-07 16:39:12.486334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.748 qpair failed and we were unable to recover it. 00:30:45.748 [2024-06-07 16:39:12.486677] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.748 [2024-06-07 16:39:12.486685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.748 qpair failed and we were unable to recover it. 00:30:45.748 [2024-06-07 16:39:12.487071] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.748 [2024-06-07 16:39:12.487078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.748 qpair failed and we were unable to recover it. 
00:30:45.748 [2024-06-07 16:39:12.487442] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.748 [2024-06-07 16:39:12.487450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.748 qpair failed and we were unable to recover it. 00:30:45.748 [2024-06-07 16:39:12.487807] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.748 [2024-06-07 16:39:12.487815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.748 qpair failed and we were unable to recover it. 00:30:45.748 [2024-06-07 16:39:12.488187] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.748 [2024-06-07 16:39:12.488195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.748 qpair failed and we were unable to recover it. 00:30:45.748 [2024-06-07 16:39:12.488586] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.748 [2024-06-07 16:39:12.488594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.748 qpair failed and we were unable to recover it. 00:30:45.748 [2024-06-07 16:39:12.488964] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.748 [2024-06-07 16:39:12.488972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.748 qpair failed and we were unable to recover it. 
00:30:45.748 [2024-06-07 16:39:12.489342] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.748 [2024-06-07 16:39:12.489350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.748 qpair failed and we were unable to recover it. 00:30:45.748 [2024-06-07 16:39:12.489722] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.748 [2024-06-07 16:39:12.489730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.748 qpair failed and we were unable to recover it. 00:30:45.748 [2024-06-07 16:39:12.490121] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.748 [2024-06-07 16:39:12.490129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.748 qpair failed and we were unable to recover it. 00:30:45.748 [2024-06-07 16:39:12.490498] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.748 [2024-06-07 16:39:12.490506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.748 qpair failed and we were unable to recover it. 00:30:45.748 [2024-06-07 16:39:12.490872] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.748 [2024-06-07 16:39:12.490881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.748 qpair failed and we were unable to recover it. 
00:30:45.748 [2024-06-07 16:39:12.491245] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.748 [2024-06-07 16:39:12.491253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.748 qpair failed and we were unable to recover it. 00:30:45.748 [2024-06-07 16:39:12.491450] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.748 [2024-06-07 16:39:12.491458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.748 qpair failed and we were unable to recover it. 00:30:45.748 [2024-06-07 16:39:12.491847] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.748 [2024-06-07 16:39:12.491855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.748 qpair failed and we were unable to recover it. 00:30:45.748 [2024-06-07 16:39:12.492049] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.748 [2024-06-07 16:39:12.492057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.749 qpair failed and we were unable to recover it. 00:30:45.749 [2024-06-07 16:39:12.492390] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.749 [2024-06-07 16:39:12.492397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.749 qpair failed and we were unable to recover it. 
00:30:45.749 [2024-06-07 16:39:12.492789] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.749 [2024-06-07 16:39:12.492797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.749 qpair failed and we were unable to recover it. 00:30:45.749 [2024-06-07 16:39:12.493172] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.749 [2024-06-07 16:39:12.493180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.749 qpair failed and we were unable to recover it. 00:30:45.749 [2024-06-07 16:39:12.493595] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.749 [2024-06-07 16:39:12.493603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.749 qpair failed and we were unable to recover it. 00:30:45.749 [2024-06-07 16:39:12.494047] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.749 [2024-06-07 16:39:12.494055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.749 qpair failed and we were unable to recover it. 00:30:45.749 [2024-06-07 16:39:12.494407] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.749 [2024-06-07 16:39:12.494415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.749 qpair failed and we were unable to recover it. 
00:30:45.752 [2024-06-07 16:39:12.531883] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.752 [2024-06-07 16:39:12.531890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.752 qpair failed and we were unable to recover it. 00:30:45.752 [2024-06-07 16:39:12.532162] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.752 [2024-06-07 16:39:12.532170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.752 qpair failed and we were unable to recover it. 00:30:45.752 [2024-06-07 16:39:12.532539] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.752 [2024-06-07 16:39:12.532547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.752 qpair failed and we were unable to recover it. 00:30:45.752 [2024-06-07 16:39:12.532911] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.752 [2024-06-07 16:39:12.532919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.752 qpair failed and we were unable to recover it. 00:30:45.752 [2024-06-07 16:39:12.533272] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.752 [2024-06-07 16:39:12.533279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.752 qpair failed and we were unable to recover it. 
00:30:45.752 [2024-06-07 16:39:12.533511] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.752 [2024-06-07 16:39:12.533519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.752 qpair failed and we were unable to recover it. 00:30:45.752 [2024-06-07 16:39:12.533753] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.752 [2024-06-07 16:39:12.533761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.752 qpair failed and we were unable to recover it. 00:30:45.752 [2024-06-07 16:39:12.534152] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.752 [2024-06-07 16:39:12.534159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.752 qpair failed and we were unable to recover it. 00:30:45.752 [2024-06-07 16:39:12.534344] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.752 [2024-06-07 16:39:12.534352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.752 qpair failed and we were unable to recover it. 00:30:45.752 [2024-06-07 16:39:12.534737] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.752 [2024-06-07 16:39:12.534744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.752 qpair failed and we were unable to recover it. 
00:30:45.752 [2024-06-07 16:39:12.535111] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.752 [2024-06-07 16:39:12.535119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.752 qpair failed and we were unable to recover it. 00:30:45.752 [2024-06-07 16:39:12.535486] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.752 [2024-06-07 16:39:12.535493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.752 qpair failed and we were unable to recover it. 00:30:45.752 [2024-06-07 16:39:12.535898] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.752 [2024-06-07 16:39:12.535906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.752 qpair failed and we were unable to recover it. 00:30:45.752 [2024-06-07 16:39:12.536225] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.752 [2024-06-07 16:39:12.536233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.752 qpair failed and we were unable to recover it. 00:30:45.752 [2024-06-07 16:39:12.536419] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.752 [2024-06-07 16:39:12.536428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.752 qpair failed and we were unable to recover it. 
00:30:45.752 [2024-06-07 16:39:12.536658] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.752 [2024-06-07 16:39:12.536665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.752 qpair failed and we were unable to recover it. 00:30:45.752 [2024-06-07 16:39:12.537080] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.752 [2024-06-07 16:39:12.537087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.752 qpair failed and we were unable to recover it. 00:30:45.752 [2024-06-07 16:39:12.537319] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.752 [2024-06-07 16:39:12.537328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.752 qpair failed and we were unable to recover it. 00:30:45.752 [2024-06-07 16:39:12.537526] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.752 [2024-06-07 16:39:12.537534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.752 qpair failed and we were unable to recover it. 00:30:45.752 [2024-06-07 16:39:12.537868] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.752 [2024-06-07 16:39:12.537876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.752 qpair failed and we were unable to recover it. 
00:30:45.752 [2024-06-07 16:39:12.538264] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.752 [2024-06-07 16:39:12.538272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.752 qpair failed and we were unable to recover it. 00:30:45.752 [2024-06-07 16:39:12.538628] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.752 [2024-06-07 16:39:12.538636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.752 qpair failed and we were unable to recover it. 00:30:45.752 [2024-06-07 16:39:12.539002] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.752 [2024-06-07 16:39:12.539010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.752 qpair failed and we were unable to recover it. 00:30:45.752 [2024-06-07 16:39:12.539377] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.752 [2024-06-07 16:39:12.539385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.752 qpair failed and we were unable to recover it. 00:30:45.753 [2024-06-07 16:39:12.539752] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.753 [2024-06-07 16:39:12.539760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.753 qpair failed and we were unable to recover it. 
00:30:45.753 [2024-06-07 16:39:12.540132] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.753 [2024-06-07 16:39:12.540140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.753 qpair failed and we were unable to recover it. 00:30:45.753 [2024-06-07 16:39:12.540507] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.753 [2024-06-07 16:39:12.540516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.753 qpair failed and we were unable to recover it. 00:30:45.753 [2024-06-07 16:39:12.540878] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.753 [2024-06-07 16:39:12.540886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.753 qpair failed and we were unable to recover it. 00:30:45.753 [2024-06-07 16:39:12.541280] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.753 [2024-06-07 16:39:12.541288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.753 qpair failed and we were unable to recover it. 00:30:45.753 [2024-06-07 16:39:12.541675] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.753 [2024-06-07 16:39:12.541684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.753 qpair failed and we were unable to recover it. 
00:30:45.753 [2024-06-07 16:39:12.542043] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.753 [2024-06-07 16:39:12.542051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.753 qpair failed and we were unable to recover it. 00:30:45.753 [2024-06-07 16:39:12.542421] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.753 [2024-06-07 16:39:12.542429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.753 qpair failed and we were unable to recover it. 00:30:45.753 [2024-06-07 16:39:12.542668] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.753 [2024-06-07 16:39:12.542676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.753 qpair failed and we were unable to recover it. 00:30:45.753 [2024-06-07 16:39:12.542908] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.753 [2024-06-07 16:39:12.542915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.753 qpair failed and we were unable to recover it. 00:30:45.753 [2024-06-07 16:39:12.543293] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.753 [2024-06-07 16:39:12.543300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.753 qpair failed and we were unable to recover it. 
00:30:45.753 [2024-06-07 16:39:12.543728] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.753 [2024-06-07 16:39:12.543735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.753 qpair failed and we were unable to recover it. 00:30:45.753 [2024-06-07 16:39:12.544085] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.753 [2024-06-07 16:39:12.544092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.753 qpair failed and we were unable to recover it. 00:30:45.753 [2024-06-07 16:39:12.544462] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.753 [2024-06-07 16:39:12.544469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.753 qpair failed and we were unable to recover it. 00:30:45.753 [2024-06-07 16:39:12.544625] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.753 [2024-06-07 16:39:12.544634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.753 qpair failed and we were unable to recover it. 00:30:45.753 [2024-06-07 16:39:12.544985] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.753 [2024-06-07 16:39:12.544995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.753 qpair failed and we were unable to recover it. 
00:30:45.753 [2024-06-07 16:39:12.545419] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.753 [2024-06-07 16:39:12.545427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.753 qpair failed and we were unable to recover it. 00:30:45.753 [2024-06-07 16:39:12.545741] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.753 [2024-06-07 16:39:12.545749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.753 qpair failed and we were unable to recover it. 00:30:45.753 [2024-06-07 16:39:12.546116] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.753 [2024-06-07 16:39:12.546123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.753 qpair failed and we were unable to recover it. 00:30:45.753 [2024-06-07 16:39:12.546360] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.753 [2024-06-07 16:39:12.546368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.753 qpair failed and we were unable to recover it. 00:30:45.753 [2024-06-07 16:39:12.546729] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.753 [2024-06-07 16:39:12.546737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.753 qpair failed and we were unable to recover it. 
00:30:45.753 [2024-06-07 16:39:12.547105] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.753 [2024-06-07 16:39:12.547113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.753 qpair failed and we were unable to recover it. 00:30:45.753 [2024-06-07 16:39:12.547483] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.753 [2024-06-07 16:39:12.547490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.753 qpair failed and we were unable to recover it. 00:30:45.753 [2024-06-07 16:39:12.547859] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.753 [2024-06-07 16:39:12.547866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.753 qpair failed and we were unable to recover it. 00:30:45.753 [2024-06-07 16:39:12.548254] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.753 [2024-06-07 16:39:12.548262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.753 qpair failed and we were unable to recover it. 00:30:45.753 [2024-06-07 16:39:12.548628] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.753 [2024-06-07 16:39:12.548636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.753 qpair failed and we were unable to recover it. 
00:30:45.753 [2024-06-07 16:39:12.549003] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.753 [2024-06-07 16:39:12.549011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.753 qpair failed and we were unable to recover it. 00:30:45.753 [2024-06-07 16:39:12.549381] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.753 [2024-06-07 16:39:12.549388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.753 qpair failed and we were unable to recover it. 00:30:45.753 [2024-06-07 16:39:12.549745] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.753 [2024-06-07 16:39:12.549754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.753 qpair failed and we were unable to recover it. 00:30:45.753 [2024-06-07 16:39:12.550124] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.753 [2024-06-07 16:39:12.550132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.753 qpair failed and we were unable to recover it. 00:30:45.753 [2024-06-07 16:39:12.550497] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.753 [2024-06-07 16:39:12.550505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.753 qpair failed and we were unable to recover it. 
00:30:45.753 [2024-06-07 16:39:12.550887] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.753 [2024-06-07 16:39:12.550895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.753 qpair failed and we were unable to recover it. 00:30:45.753 [2024-06-07 16:39:12.551282] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.753 [2024-06-07 16:39:12.551290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.753 qpair failed and we were unable to recover it. 00:30:45.753 [2024-06-07 16:39:12.551633] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.753 [2024-06-07 16:39:12.551642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.753 qpair failed and we were unable to recover it. 00:30:45.753 [2024-06-07 16:39:12.551877] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.753 [2024-06-07 16:39:12.551885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.753 qpair failed and we were unable to recover it. 00:30:45.753 [2024-06-07 16:39:12.552080] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.753 [2024-06-07 16:39:12.552088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.753 qpair failed and we were unable to recover it. 
00:30:45.753 [2024-06-07 16:39:12.552432] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.753 [2024-06-07 16:39:12.552440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.753 qpair failed and we were unable to recover it. 00:30:45.753 [2024-06-07 16:39:12.552819] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.753 [2024-06-07 16:39:12.552826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.753 qpair failed and we were unable to recover it. 00:30:45.753 [2024-06-07 16:39:12.553036] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.754 [2024-06-07 16:39:12.553044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.754 qpair failed and we were unable to recover it. 00:30:45.754 [2024-06-07 16:39:12.553418] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.754 [2024-06-07 16:39:12.553426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.754 qpair failed and we were unable to recover it. 00:30:45.754 [2024-06-07 16:39:12.553760] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.754 [2024-06-07 16:39:12.553768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.754 qpair failed and we were unable to recover it. 
00:30:45.754 [2024-06-07 16:39:12.554135] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.754 [2024-06-07 16:39:12.554142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.754 qpair failed and we were unable to recover it. 00:30:45.754 [2024-06-07 16:39:12.554484] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.754 [2024-06-07 16:39:12.554493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.754 qpair failed and we were unable to recover it. 00:30:45.754 [2024-06-07 16:39:12.554861] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.754 [2024-06-07 16:39:12.554869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.754 qpair failed and we were unable to recover it. 00:30:45.754 [2024-06-07 16:39:12.555295] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.754 [2024-06-07 16:39:12.555302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.754 qpair failed and we were unable to recover it. 00:30:45.754 [2024-06-07 16:39:12.555653] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.754 [2024-06-07 16:39:12.555660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.754 qpair failed and we were unable to recover it. 
00:30:45.754 [2024-06-07 16:39:12.556031] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.754 [2024-06-07 16:39:12.556039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.754 qpair failed and we were unable to recover it. 00:30:45.754 [2024-06-07 16:39:12.556417] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.754 [2024-06-07 16:39:12.556425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.754 qpair failed and we were unable to recover it. 00:30:45.754 [2024-06-07 16:39:12.556797] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.754 [2024-06-07 16:39:12.556805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.754 qpair failed and we were unable to recover it. 00:30:45.754 [2024-06-07 16:39:12.557075] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.754 [2024-06-07 16:39:12.557083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.754 qpair failed and we were unable to recover it. 00:30:45.754 [2024-06-07 16:39:12.557280] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.754 [2024-06-07 16:39:12.557288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.754 qpair failed and we were unable to recover it. 
00:30:45.754 [2024-06-07 16:39:12.557540] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.754 [2024-06-07 16:39:12.557548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.754 qpair failed and we were unable to recover it. 00:30:45.754 [2024-06-07 16:39:12.557922] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.754 [2024-06-07 16:39:12.557930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.754 qpair failed and we were unable to recover it. 00:30:45.754 [2024-06-07 16:39:12.558297] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.754 [2024-06-07 16:39:12.558304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.754 qpair failed and we were unable to recover it. 00:30:45.754 [2024-06-07 16:39:12.558692] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.754 [2024-06-07 16:39:12.558700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.754 qpair failed and we were unable to recover it. 00:30:45.754 [2024-06-07 16:39:12.558895] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.754 [2024-06-07 16:39:12.558905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:45.754 qpair failed and we were unable to recover it. 
00:30:46.031 [2024-06-07 16:39:12.595900] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.031 [2024-06-07 16:39:12.595907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.031 qpair failed and we were unable to recover it. 00:30:46.031 [2024-06-07 16:39:12.596300] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.031 [2024-06-07 16:39:12.596308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.031 qpair failed and we were unable to recover it. 00:30:46.031 [2024-06-07 16:39:12.596699] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.031 [2024-06-07 16:39:12.596708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.031 qpair failed and we were unable to recover it. 00:30:46.031 [2024-06-07 16:39:12.597026] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.032 [2024-06-07 16:39:12.597034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.032 qpair failed and we were unable to recover it. 00:30:46.032 [2024-06-07 16:39:12.597266] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.032 [2024-06-07 16:39:12.597273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.032 qpair failed and we were unable to recover it. 
00:30:46.032 [2024-06-07 16:39:12.597706] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.032 [2024-06-07 16:39:12.597715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.032 qpair failed and we were unable to recover it. 00:30:46.032 [2024-06-07 16:39:12.598160] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.032 [2024-06-07 16:39:12.598168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.032 qpair failed and we were unable to recover it. 00:30:46.032 [2024-06-07 16:39:12.598530] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.032 [2024-06-07 16:39:12.598538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.032 qpair failed and we were unable to recover it. 00:30:46.032 [2024-06-07 16:39:12.598803] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.032 [2024-06-07 16:39:12.598812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.032 qpair failed and we were unable to recover it. 00:30:46.032 [2024-06-07 16:39:12.599085] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.032 [2024-06-07 16:39:12.599093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.032 qpair failed and we were unable to recover it. 
00:30:46.032 [2024-06-07 16:39:12.599463] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.032 [2024-06-07 16:39:12.599471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.032 qpair failed and we were unable to recover it. 00:30:46.032 [2024-06-07 16:39:12.599847] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.032 [2024-06-07 16:39:12.599855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.032 qpair failed and we were unable to recover it. 00:30:46.032 [2024-06-07 16:39:12.600323] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.032 [2024-06-07 16:39:12.600331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.032 qpair failed and we were unable to recover it. 00:30:46.032 [2024-06-07 16:39:12.600703] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.032 [2024-06-07 16:39:12.600710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.032 qpair failed and we were unable to recover it. 00:30:46.032 [2024-06-07 16:39:12.601083] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.032 [2024-06-07 16:39:12.601091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.032 qpair failed and we were unable to recover it. 
00:30:46.032 [2024-06-07 16:39:12.601463] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.032 [2024-06-07 16:39:12.601471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.032 qpair failed and we were unable to recover it. 00:30:46.032 [2024-06-07 16:39:12.601681] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.032 [2024-06-07 16:39:12.601690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.032 qpair failed and we were unable to recover it. 00:30:46.032 [2024-06-07 16:39:12.601862] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.032 [2024-06-07 16:39:12.601870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.032 qpair failed and we were unable to recover it. 00:30:46.032 [2024-06-07 16:39:12.602240] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.032 [2024-06-07 16:39:12.602248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.032 qpair failed and we were unable to recover it. 00:30:46.032 [2024-06-07 16:39:12.602413] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.032 [2024-06-07 16:39:12.602422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.032 qpair failed and we were unable to recover it. 
00:30:46.032 [2024-06-07 16:39:12.602803] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.032 [2024-06-07 16:39:12.602810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.032 qpair failed and we were unable to recover it. 00:30:46.032 [2024-06-07 16:39:12.603259] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.032 [2024-06-07 16:39:12.603267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.032 qpair failed and we were unable to recover it. 00:30:46.032 [2024-06-07 16:39:12.603638] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.032 [2024-06-07 16:39:12.603646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.032 qpair failed and we were unable to recover it. 00:30:46.032 [2024-06-07 16:39:12.604017] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.032 [2024-06-07 16:39:12.604025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.032 qpair failed and we were unable to recover it. 00:30:46.032 [2024-06-07 16:39:12.604410] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.032 [2024-06-07 16:39:12.604418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.032 qpair failed and we were unable to recover it. 
00:30:46.032 [2024-06-07 16:39:12.604775] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.032 [2024-06-07 16:39:12.604783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.032 qpair failed and we were unable to recover it. 00:30:46.032 [2024-06-07 16:39:12.605150] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.032 [2024-06-07 16:39:12.605158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.032 qpair failed and we were unable to recover it. 00:30:46.032 [2024-06-07 16:39:12.605517] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.032 [2024-06-07 16:39:12.605525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.032 qpair failed and we were unable to recover it. 00:30:46.032 [2024-06-07 16:39:12.605902] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.032 [2024-06-07 16:39:12.605909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.032 qpair failed and we were unable to recover it. 00:30:46.032 [2024-06-07 16:39:12.606260] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.032 [2024-06-07 16:39:12.606269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.032 qpair failed and we were unable to recover it. 
00:30:46.032 [2024-06-07 16:39:12.606482] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.032 [2024-06-07 16:39:12.606489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.032 qpair failed and we were unable to recover it. 00:30:46.032 [2024-06-07 16:39:12.606902] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.032 [2024-06-07 16:39:12.606909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.032 qpair failed and we were unable to recover it. 00:30:46.032 [2024-06-07 16:39:12.607119] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.032 [2024-06-07 16:39:12.607126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.032 qpair failed and we were unable to recover it. 00:30:46.032 [2024-06-07 16:39:12.607361] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.032 [2024-06-07 16:39:12.607369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.032 qpair failed and we were unable to recover it. 00:30:46.032 [2024-06-07 16:39:12.607603] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.032 [2024-06-07 16:39:12.607611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.032 qpair failed and we were unable to recover it. 
00:30:46.032 [2024-06-07 16:39:12.607971] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.032 [2024-06-07 16:39:12.607978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.032 qpair failed and we were unable to recover it. 00:30:46.032 [2024-06-07 16:39:12.608343] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.032 [2024-06-07 16:39:12.608352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.032 qpair failed and we were unable to recover it. 00:30:46.032 [2024-06-07 16:39:12.608744] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.032 [2024-06-07 16:39:12.608752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.032 qpair failed and we were unable to recover it. 00:30:46.032 [2024-06-07 16:39:12.609121] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.032 [2024-06-07 16:39:12.609129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.032 qpair failed and we were unable to recover it. 00:30:46.033 [2024-06-07 16:39:12.609333] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.033 [2024-06-07 16:39:12.609341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.033 qpair failed and we were unable to recover it. 
00:30:46.033 [2024-06-07 16:39:12.609728] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.033 [2024-06-07 16:39:12.609736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.033 qpair failed and we were unable to recover it. 00:30:46.033 [2024-06-07 16:39:12.610099] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.033 [2024-06-07 16:39:12.610106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.033 qpair failed and we were unable to recover it. 00:30:46.033 [2024-06-07 16:39:12.610430] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.033 [2024-06-07 16:39:12.610438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.033 qpair failed and we were unable to recover it. 00:30:46.033 [2024-06-07 16:39:12.610710] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.033 [2024-06-07 16:39:12.610718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.033 qpair failed and we were unable to recover it. 00:30:46.033 [2024-06-07 16:39:12.611096] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.033 [2024-06-07 16:39:12.611103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.033 qpair failed and we were unable to recover it. 
00:30:46.033 [2024-06-07 16:39:12.611492] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.033 [2024-06-07 16:39:12.611500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.033 qpair failed and we were unable to recover it. 00:30:46.033 [2024-06-07 16:39:12.611882] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.033 [2024-06-07 16:39:12.611890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.033 qpair failed and we were unable to recover it. 00:30:46.033 [2024-06-07 16:39:12.612260] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.033 [2024-06-07 16:39:12.612268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.033 qpair failed and we were unable to recover it. 00:30:46.033 [2024-06-07 16:39:12.612488] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.033 [2024-06-07 16:39:12.612495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.033 qpair failed and we were unable to recover it. 00:30:46.033 [2024-06-07 16:39:12.612902] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.033 [2024-06-07 16:39:12.612910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.033 qpair failed and we were unable to recover it. 
00:30:46.033 [2024-06-07 16:39:12.613310] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.033 [2024-06-07 16:39:12.613318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.033 qpair failed and we were unable to recover it. 00:30:46.033 [2024-06-07 16:39:12.613703] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.033 [2024-06-07 16:39:12.613711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.033 qpair failed and we were unable to recover it. 00:30:46.033 [2024-06-07 16:39:12.614077] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.033 [2024-06-07 16:39:12.614085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.033 qpair failed and we were unable to recover it. 00:30:46.033 [2024-06-07 16:39:12.614479] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.033 [2024-06-07 16:39:12.614487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.033 qpair failed and we were unable to recover it. 00:30:46.033 [2024-06-07 16:39:12.614693] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.033 [2024-06-07 16:39:12.614701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.033 qpair failed and we were unable to recover it. 
00:30:46.033 [2024-06-07 16:39:12.614904] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.033 [2024-06-07 16:39:12.614912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.033 qpair failed and we were unable to recover it. 00:30:46.033 [2024-06-07 16:39:12.615305] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.033 [2024-06-07 16:39:12.615312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.033 qpair failed and we were unable to recover it. 00:30:46.033 [2024-06-07 16:39:12.615677] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.033 [2024-06-07 16:39:12.615685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.033 qpair failed and we were unable to recover it. 00:30:46.033 [2024-06-07 16:39:12.616049] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.033 [2024-06-07 16:39:12.616057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.033 qpair failed and we were unable to recover it. 00:30:46.033 [2024-06-07 16:39:12.616429] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.033 [2024-06-07 16:39:12.616437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.033 qpair failed and we were unable to recover it. 
00:30:46.033 [2024-06-07 16:39:12.616793] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.033 [2024-06-07 16:39:12.616800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.033 qpair failed and we were unable to recover it. 00:30:46.033 [2024-06-07 16:39:12.617189] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.033 [2024-06-07 16:39:12.617196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.033 qpair failed and we were unable to recover it. 00:30:46.033 [2024-06-07 16:39:12.617564] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.033 [2024-06-07 16:39:12.617572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.033 qpair failed and we were unable to recover it. 00:30:46.033 [2024-06-07 16:39:12.617936] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.033 [2024-06-07 16:39:12.617944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.033 qpair failed and we were unable to recover it. 00:30:46.033 [2024-06-07 16:39:12.618140] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.033 [2024-06-07 16:39:12.618149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.033 qpair failed and we were unable to recover it. 
00:30:46.033 [2024-06-07 16:39:12.618328] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.033 [2024-06-07 16:39:12.618336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.033 qpair failed and we were unable to recover it. 00:30:46.033 [2024-06-07 16:39:12.618633] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.033 [2024-06-07 16:39:12.618642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.033 qpair failed and we were unable to recover it. 00:30:46.033 [2024-06-07 16:39:12.618876] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.033 [2024-06-07 16:39:12.618884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.033 qpair failed and we were unable to recover it. 00:30:46.033 [2024-06-07 16:39:12.619079] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.033 [2024-06-07 16:39:12.619087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.033 qpair failed and we were unable to recover it. 00:30:46.033 [2024-06-07 16:39:12.619338] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.033 [2024-06-07 16:39:12.619346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.033 qpair failed and we were unable to recover it. 
00:30:46.033 [2024-06-07 16:39:12.619710] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.033 [2024-06-07 16:39:12.619718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.033 qpair failed and we were unable to recover it. 00:30:46.033 [2024-06-07 16:39:12.619954] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.033 [2024-06-07 16:39:12.619962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.033 qpair failed and we were unable to recover it. 00:30:46.033 [2024-06-07 16:39:12.620320] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.033 [2024-06-07 16:39:12.620328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.033 qpair failed and we were unable to recover it. 00:30:46.033 [2024-06-07 16:39:12.620651] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.033 [2024-06-07 16:39:12.620659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.033 qpair failed and we were unable to recover it. 00:30:46.033 [2024-06-07 16:39:12.621047] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.033 [2024-06-07 16:39:12.621054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.033 qpair failed and we were unable to recover it. 
00:30:46.034 [2024-06-07 16:39:12.621422] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:46.034 [2024-06-07 16:39:12.621430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:46.034 qpair failed and we were unable to recover it.
[... identical three-line pattern (posix_sock_create errno = 111, nvme_tcp_qpair_connect_sock error for tqpair=0x7f61d8000b90 addr=10.0.0.2 port=4420, "qpair failed and we were unable to recover it.") repeated for every retry from 2024-06-07 16:39:12.621422 through 2024-06-07 16:39:12.661152 ...]
00:30:46.037 [2024-06-07 16:39:12.661545] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.037 [2024-06-07 16:39:12.661553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.037 qpair failed and we were unable to recover it. 00:30:46.037 [2024-06-07 16:39:12.661913] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.037 [2024-06-07 16:39:12.661921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.037 qpair failed and we were unable to recover it. 00:30:46.037 [2024-06-07 16:39:12.662285] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.037 [2024-06-07 16:39:12.662293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.037 qpair failed and we were unable to recover it. 00:30:46.037 [2024-06-07 16:39:12.662646] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.037 [2024-06-07 16:39:12.662653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.037 qpair failed and we were unable to recover it. 00:30:46.037 [2024-06-07 16:39:12.663039] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.037 [2024-06-07 16:39:12.663047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.038 qpair failed and we were unable to recover it. 
00:30:46.038 [2024-06-07 16:39:12.663276] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.038 [2024-06-07 16:39:12.663285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.038 qpair failed and we were unable to recover it. 00:30:46.038 [2024-06-07 16:39:12.663481] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.038 [2024-06-07 16:39:12.663491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.038 qpair failed and we were unable to recover it. 00:30:46.038 [2024-06-07 16:39:12.663713] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.038 [2024-06-07 16:39:12.663721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.038 qpair failed and we were unable to recover it. 00:30:46.038 [2024-06-07 16:39:12.664086] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.038 [2024-06-07 16:39:12.664094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.038 qpair failed and we were unable to recover it. 00:30:46.038 [2024-06-07 16:39:12.664463] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.038 [2024-06-07 16:39:12.664476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.038 qpair failed and we were unable to recover it. 
00:30:46.038 [2024-06-07 16:39:12.664534] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.038 [2024-06-07 16:39:12.664540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.038 qpair failed and we were unable to recover it. 00:30:46.038 [2024-06-07 16:39:12.664875] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.038 [2024-06-07 16:39:12.664883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.038 qpair failed and we were unable to recover it. 00:30:46.038 [2024-06-07 16:39:12.665252] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.038 [2024-06-07 16:39:12.665259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.038 qpair failed and we were unable to recover it. 00:30:46.038 [2024-06-07 16:39:12.665645] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.038 [2024-06-07 16:39:12.665653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.038 qpair failed and we were unable to recover it. 00:30:46.038 [2024-06-07 16:39:12.666012] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.038 [2024-06-07 16:39:12.666020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.038 qpair failed and we were unable to recover it. 
00:30:46.038 [2024-06-07 16:39:12.666393] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.038 [2024-06-07 16:39:12.666405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.038 qpair failed and we were unable to recover it. 00:30:46.038 [2024-06-07 16:39:12.666589] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.038 [2024-06-07 16:39:12.666598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.038 qpair failed and we were unable to recover it. 00:30:46.038 [2024-06-07 16:39:12.666950] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.038 [2024-06-07 16:39:12.666958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.038 qpair failed and we were unable to recover it. 00:30:46.038 [2024-06-07 16:39:12.667324] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.038 [2024-06-07 16:39:12.667333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.038 qpair failed and we were unable to recover it. 00:30:46.038 [2024-06-07 16:39:12.667700] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.038 [2024-06-07 16:39:12.667708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.038 qpair failed and we were unable to recover it. 
00:30:46.038 [2024-06-07 16:39:12.668078] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.038 [2024-06-07 16:39:12.668086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.038 qpair failed and we were unable to recover it. 00:30:46.038 [2024-06-07 16:39:12.668472] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.038 [2024-06-07 16:39:12.668480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.038 qpair failed and we were unable to recover it. 00:30:46.038 [2024-06-07 16:39:12.668869] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.038 [2024-06-07 16:39:12.668877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.038 qpair failed and we were unable to recover it. 00:30:46.038 [2024-06-07 16:39:12.669246] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.038 [2024-06-07 16:39:12.669253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.038 qpair failed and we were unable to recover it. 00:30:46.038 [2024-06-07 16:39:12.669615] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.038 [2024-06-07 16:39:12.669623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.038 qpair failed and we were unable to recover it. 
00:30:46.038 [2024-06-07 16:39:12.670014] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.038 [2024-06-07 16:39:12.670023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.038 qpair failed and we were unable to recover it. 00:30:46.038 [2024-06-07 16:39:12.670291] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.038 [2024-06-07 16:39:12.670299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.038 qpair failed and we were unable to recover it. 00:30:46.038 [2024-06-07 16:39:12.670687] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.038 [2024-06-07 16:39:12.670695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.038 qpair failed and we were unable to recover it. 00:30:46.038 [2024-06-07 16:39:12.671062] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.038 [2024-06-07 16:39:12.671070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.038 qpair failed and we were unable to recover it. 00:30:46.038 [2024-06-07 16:39:12.671459] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.038 [2024-06-07 16:39:12.671467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.038 qpair failed and we were unable to recover it. 
00:30:46.038 [2024-06-07 16:39:12.671844] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.038 [2024-06-07 16:39:12.671852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.038 qpair failed and we were unable to recover it. 00:30:46.038 [2024-06-07 16:39:12.672228] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.038 [2024-06-07 16:39:12.672235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.038 qpair failed and we were unable to recover it. 00:30:46.038 [2024-06-07 16:39:12.672474] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.038 [2024-06-07 16:39:12.672482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.038 qpair failed and we were unable to recover it. 00:30:46.038 [2024-06-07 16:39:12.672853] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.038 [2024-06-07 16:39:12.672861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.038 qpair failed and we were unable to recover it. 00:30:46.038 [2024-06-07 16:39:12.673229] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.038 [2024-06-07 16:39:12.673237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.038 qpair failed and we were unable to recover it. 
00:30:46.038 [2024-06-07 16:39:12.673335] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.038 [2024-06-07 16:39:12.673342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.038 qpair failed and we were unable to recover it. 00:30:46.038 [2024-06-07 16:39:12.673682] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.038 [2024-06-07 16:39:12.673690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.038 qpair failed and we were unable to recover it. 00:30:46.038 [2024-06-07 16:39:12.674077] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.038 [2024-06-07 16:39:12.674085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.038 qpair failed and we were unable to recover it. 00:30:46.038 [2024-06-07 16:39:12.674268] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.038 [2024-06-07 16:39:12.674276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.038 qpair failed and we were unable to recover it. 00:30:46.038 [2024-06-07 16:39:12.674542] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.038 [2024-06-07 16:39:12.674550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.038 qpair failed and we were unable to recover it. 
00:30:46.038 [2024-06-07 16:39:12.674922] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.038 [2024-06-07 16:39:12.674929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.038 qpair failed and we were unable to recover it. 00:30:46.038 [2024-06-07 16:39:12.675199] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.038 [2024-06-07 16:39:12.675206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.038 qpair failed and we were unable to recover it. 00:30:46.038 [2024-06-07 16:39:12.675576] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.038 [2024-06-07 16:39:12.675584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.039 qpair failed and we were unable to recover it. 00:30:46.039 [2024-06-07 16:39:12.675664] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.039 [2024-06-07 16:39:12.675670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.039 qpair failed and we were unable to recover it. 00:30:46.039 [2024-06-07 16:39:12.676034] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.039 [2024-06-07 16:39:12.676041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.039 qpair failed and we were unable to recover it. 
00:30:46.039 [2024-06-07 16:39:12.676421] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.039 [2024-06-07 16:39:12.676429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.039 qpair failed and we were unable to recover it. 00:30:46.039 [2024-06-07 16:39:12.676804] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.039 [2024-06-07 16:39:12.676813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.039 qpair failed and we were unable to recover it. 00:30:46.039 [2024-06-07 16:39:12.677006] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.039 [2024-06-07 16:39:12.677014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.039 qpair failed and we were unable to recover it. 00:30:46.039 [2024-06-07 16:39:12.677353] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.039 [2024-06-07 16:39:12.677361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.039 qpair failed and we were unable to recover it. 00:30:46.039 [2024-06-07 16:39:12.677562] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.039 [2024-06-07 16:39:12.677570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.039 qpair failed and we were unable to recover it. 
00:30:46.039 [2024-06-07 16:39:12.678017] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.039 [2024-06-07 16:39:12.678025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.039 qpair failed and we were unable to recover it. 00:30:46.039 [2024-06-07 16:39:12.678381] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.039 [2024-06-07 16:39:12.678389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.039 qpair failed and we were unable to recover it. 00:30:46.039 [2024-06-07 16:39:12.678586] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.039 [2024-06-07 16:39:12.678594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.039 qpair failed and we were unable to recover it. 00:30:46.039 [2024-06-07 16:39:12.678939] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.039 [2024-06-07 16:39:12.678946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.039 qpair failed and we were unable to recover it. 00:30:46.039 [2024-06-07 16:39:12.679336] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.039 [2024-06-07 16:39:12.679344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.039 qpair failed and we were unable to recover it. 
00:30:46.039 [2024-06-07 16:39:12.679694] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.039 [2024-06-07 16:39:12.679702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.039 qpair failed and we were unable to recover it. 00:30:46.039 [2024-06-07 16:39:12.679871] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.039 [2024-06-07 16:39:12.679880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.039 qpair failed and we were unable to recover it. 00:30:46.039 [2024-06-07 16:39:12.680228] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.039 [2024-06-07 16:39:12.680236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.039 qpair failed and we were unable to recover it. 00:30:46.039 [2024-06-07 16:39:12.680599] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.039 [2024-06-07 16:39:12.680607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.039 qpair failed and we were unable to recover it. 00:30:46.039 [2024-06-07 16:39:12.680979] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.039 [2024-06-07 16:39:12.680987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.039 qpair failed and we were unable to recover it. 
00:30:46.039 [2024-06-07 16:39:12.681353] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.039 [2024-06-07 16:39:12.681361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.039 qpair failed and we were unable to recover it. 00:30:46.039 [2024-06-07 16:39:12.681736] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.039 [2024-06-07 16:39:12.681744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.039 qpair failed and we were unable to recover it. 00:30:46.039 [2024-06-07 16:39:12.682100] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.039 [2024-06-07 16:39:12.682108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.039 qpair failed and we were unable to recover it. 00:30:46.039 [2024-06-07 16:39:12.682484] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.039 [2024-06-07 16:39:12.682492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.039 qpair failed and we were unable to recover it. 00:30:46.039 [2024-06-07 16:39:12.682883] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.039 [2024-06-07 16:39:12.682890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.039 qpair failed and we were unable to recover it. 
00:30:46.039 [2024-06-07 16:39:12.683122] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.039 [2024-06-07 16:39:12.683131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.039 qpair failed and we were unable to recover it. 00:30:46.039 [2024-06-07 16:39:12.683528] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.039 [2024-06-07 16:39:12.683536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.039 qpair failed and we were unable to recover it. 00:30:46.039 [2024-06-07 16:39:12.683908] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.039 [2024-06-07 16:39:12.683916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.039 qpair failed and we were unable to recover it. 00:30:46.039 [2024-06-07 16:39:12.684267] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.039 [2024-06-07 16:39:12.684274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.039 qpair failed and we were unable to recover it. 00:30:46.039 [2024-06-07 16:39:12.684632] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.039 [2024-06-07 16:39:12.684639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.039 qpair failed and we were unable to recover it. 
00:30:46.039 [2024-06-07 16:39:12.685033] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.039 [2024-06-07 16:39:12.685041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.039 qpair failed and we were unable to recover it. 00:30:46.039 [2024-06-07 16:39:12.685408] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.039 [2024-06-07 16:39:12.685416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.039 qpair failed and we were unable to recover it. 00:30:46.039 [2024-06-07 16:39:12.685766] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.039 [2024-06-07 16:39:12.685774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.039 qpair failed and we were unable to recover it. 00:30:46.039 [2024-06-07 16:39:12.686145] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.039 [2024-06-07 16:39:12.686153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.039 qpair failed and we were unable to recover it. 00:30:46.039 [2024-06-07 16:39:12.686636] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.039 [2024-06-07 16:39:12.686666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.039 qpair failed and we were unable to recover it. 
00:30:46.039 [2024-06-07 16:39:12.687045] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.039 [2024-06-07 16:39:12.687054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.039 qpair failed and we were unable to recover it. 00:30:46.039 [2024-06-07 16:39:12.687421] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.039 [2024-06-07 16:39:12.687430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.040 qpair failed and we were unable to recover it. 00:30:46.040 [2024-06-07 16:39:12.687816] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.040 [2024-06-07 16:39:12.687823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.040 qpair failed and we were unable to recover it. 00:30:46.040 [2024-06-07 16:39:12.688223] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.040 [2024-06-07 16:39:12.688231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.040 qpair failed and we were unable to recover it. 00:30:46.040 [2024-06-07 16:39:12.688604] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.040 [2024-06-07 16:39:12.688612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.040 qpair failed and we were unable to recover it. 
00:30:46.043 [2024-06-07 16:39:12.726337] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.043 [2024-06-07 16:39:12.726345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.043 qpair failed and we were unable to recover it. 00:30:46.043 [2024-06-07 16:39:12.726680] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.043 [2024-06-07 16:39:12.726688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.043 qpair failed and we were unable to recover it. 00:30:46.043 [2024-06-07 16:39:12.727066] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.043 [2024-06-07 16:39:12.727074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.043 qpair failed and we were unable to recover it. 00:30:46.043 [2024-06-07 16:39:12.727517] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.043 [2024-06-07 16:39:12.727525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.043 qpair failed and we were unable to recover it. 00:30:46.043 [2024-06-07 16:39:12.727883] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.043 [2024-06-07 16:39:12.727891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.043 qpair failed and we were unable to recover it. 
00:30:46.043 [2024-06-07 16:39:12.728250] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.043 [2024-06-07 16:39:12.728258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.043 qpair failed and we were unable to recover it. 00:30:46.043 [2024-06-07 16:39:12.728624] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.043 [2024-06-07 16:39:12.728632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.043 qpair failed and we were unable to recover it. 00:30:46.043 [2024-06-07 16:39:12.728906] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.043 [2024-06-07 16:39:12.728914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.043 qpair failed and we were unable to recover it. 00:30:46.043 [2024-06-07 16:39:12.729290] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.043 [2024-06-07 16:39:12.729299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.043 qpair failed and we were unable to recover it. 00:30:46.043 [2024-06-07 16:39:12.729686] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.043 [2024-06-07 16:39:12.729694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.043 qpair failed and we were unable to recover it. 
00:30:46.043 [2024-06-07 16:39:12.729904] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.043 [2024-06-07 16:39:12.729912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.043 qpair failed and we were unable to recover it. 00:30:46.043 [2024-06-07 16:39:12.730301] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.043 [2024-06-07 16:39:12.730309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.043 qpair failed and we were unable to recover it. 00:30:46.043 [2024-06-07 16:39:12.730699] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.043 [2024-06-07 16:39:12.730706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.043 qpair failed and we were unable to recover it. 00:30:46.043 [2024-06-07 16:39:12.731084] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.043 [2024-06-07 16:39:12.731092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.043 qpair failed and we were unable to recover it. 00:30:46.043 [2024-06-07 16:39:12.731542] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.043 [2024-06-07 16:39:12.731550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.043 qpair failed and we were unable to recover it. 
00:30:46.043 [2024-06-07 16:39:12.731907] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.043 [2024-06-07 16:39:12.731915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.043 qpair failed and we were unable to recover it. 00:30:46.043 [2024-06-07 16:39:12.732001] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.043 [2024-06-07 16:39:12.732011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.044 qpair failed and we were unable to recover it. 00:30:46.044 [2024-06-07 16:39:12.732168] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.044 [2024-06-07 16:39:12.732175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.044 qpair failed and we were unable to recover it. 00:30:46.044 [2024-06-07 16:39:12.732358] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.044 [2024-06-07 16:39:12.732366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.044 qpair failed and we were unable to recover it. 00:30:46.044 [2024-06-07 16:39:12.732729] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.044 [2024-06-07 16:39:12.732737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.044 qpair failed and we were unable to recover it. 
00:30:46.044 [2024-06-07 16:39:12.732936] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.044 [2024-06-07 16:39:12.732944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.044 qpair failed and we were unable to recover it. 00:30:46.044 [2024-06-07 16:39:12.733227] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.044 [2024-06-07 16:39:12.733235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.044 qpair failed and we were unable to recover it. 00:30:46.044 [2024-06-07 16:39:12.733609] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.044 [2024-06-07 16:39:12.733617] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.044 qpair failed and we were unable to recover it. 00:30:46.044 [2024-06-07 16:39:12.734020] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.044 [2024-06-07 16:39:12.734028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.044 qpair failed and we were unable to recover it. 00:30:46.044 [2024-06-07 16:39:12.734385] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.044 [2024-06-07 16:39:12.734392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.044 qpair failed and we were unable to recover it. 
00:30:46.044 [2024-06-07 16:39:12.734623] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.044 [2024-06-07 16:39:12.734631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.044 qpair failed and we were unable to recover it. 00:30:46.044 [2024-06-07 16:39:12.734998] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.044 [2024-06-07 16:39:12.735005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.044 qpair failed and we were unable to recover it. 00:30:46.044 [2024-06-07 16:39:12.735396] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.044 [2024-06-07 16:39:12.735410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.044 qpair failed and we were unable to recover it. 00:30:46.044 [2024-06-07 16:39:12.735592] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.044 [2024-06-07 16:39:12.735600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.044 qpair failed and we were unable to recover it. 00:30:46.044 [2024-06-07 16:39:12.735981] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.044 [2024-06-07 16:39:12.735989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.044 qpair failed and we were unable to recover it. 
00:30:46.044 [2024-06-07 16:39:12.736361] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.044 [2024-06-07 16:39:12.736369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.044 qpair failed and we were unable to recover it. 00:30:46.044 [2024-06-07 16:39:12.736578] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.044 [2024-06-07 16:39:12.736586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.044 qpair failed and we were unable to recover it. 00:30:46.044 [2024-06-07 16:39:12.736924] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.044 [2024-06-07 16:39:12.736932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.044 qpair failed and we were unable to recover it. 00:30:46.044 [2024-06-07 16:39:12.737142] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.044 [2024-06-07 16:39:12.737149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.044 qpair failed and we were unable to recover it. 00:30:46.044 [2024-06-07 16:39:12.737478] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.044 [2024-06-07 16:39:12.737486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.044 qpair failed and we were unable to recover it. 
00:30:46.044 [2024-06-07 16:39:12.737887] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.044 [2024-06-07 16:39:12.737894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.044 qpair failed and we were unable to recover it. 00:30:46.044 [2024-06-07 16:39:12.738266] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.044 [2024-06-07 16:39:12.738274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.044 qpair failed and we were unable to recover it. 00:30:46.044 [2024-06-07 16:39:12.738642] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.044 [2024-06-07 16:39:12.738650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.044 qpair failed and we were unable to recover it. 00:30:46.044 [2024-06-07 16:39:12.739017] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.044 [2024-06-07 16:39:12.739025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.044 qpair failed and we were unable to recover it. 00:30:46.044 [2024-06-07 16:39:12.739413] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.044 [2024-06-07 16:39:12.739421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.044 qpair failed and we were unable to recover it. 
00:30:46.044 [2024-06-07 16:39:12.739667] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.044 [2024-06-07 16:39:12.739674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.044 qpair failed and we were unable to recover it. 00:30:46.044 [2024-06-07 16:39:12.740041] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.044 [2024-06-07 16:39:12.740051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.044 qpair failed and we were unable to recover it. 00:30:46.044 [2024-06-07 16:39:12.740409] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.044 [2024-06-07 16:39:12.740417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.044 qpair failed and we were unable to recover it. 00:30:46.044 [2024-06-07 16:39:12.740670] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.044 [2024-06-07 16:39:12.740680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.044 qpair failed and we were unable to recover it. 00:30:46.044 [2024-06-07 16:39:12.741034] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.044 [2024-06-07 16:39:12.741042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.044 qpair failed and we were unable to recover it. 
00:30:46.044 [2024-06-07 16:39:12.741398] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.044 [2024-06-07 16:39:12.741409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.044 qpair failed and we were unable to recover it. 00:30:46.044 [2024-06-07 16:39:12.741785] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.044 [2024-06-07 16:39:12.741793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.044 qpair failed and we were unable to recover it. 00:30:46.044 [2024-06-07 16:39:12.742191] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.044 [2024-06-07 16:39:12.742199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.044 qpair failed and we were unable to recover it. 00:30:46.044 [2024-06-07 16:39:12.742508] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.044 [2024-06-07 16:39:12.742516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.044 qpair failed and we were unable to recover it. 00:30:46.044 [2024-06-07 16:39:12.742781] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.044 [2024-06-07 16:39:12.742789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.044 qpair failed and we were unable to recover it. 
00:30:46.044 [2024-06-07 16:39:12.742987] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.044 [2024-06-07 16:39:12.742996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.044 qpair failed and we were unable to recover it. 00:30:46.044 [2024-06-07 16:39:12.743243] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.044 [2024-06-07 16:39:12.743251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.044 qpair failed and we were unable to recover it. 00:30:46.044 [2024-06-07 16:39:12.743484] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.044 [2024-06-07 16:39:12.743492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.044 qpair failed and we were unable to recover it. 00:30:46.044 [2024-06-07 16:39:12.743561] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.044 [2024-06-07 16:39:12.743567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.044 qpair failed and we were unable to recover it. 00:30:46.045 [2024-06-07 16:39:12.743862] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.045 [2024-06-07 16:39:12.743870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.045 qpair failed and we were unable to recover it. 
00:30:46.045 [2024-06-07 16:39:12.744318] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.045 [2024-06-07 16:39:12.744325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.045 qpair failed and we were unable to recover it. 00:30:46.045 [2024-06-07 16:39:12.744485] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.045 [2024-06-07 16:39:12.744493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.045 qpair failed and we were unable to recover it. 00:30:46.045 [2024-06-07 16:39:12.744832] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.045 [2024-06-07 16:39:12.744840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.045 qpair failed and we were unable to recover it. 00:30:46.045 [2024-06-07 16:39:12.745207] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.045 [2024-06-07 16:39:12.745215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.045 qpair failed and we were unable to recover it. 00:30:46.045 [2024-06-07 16:39:12.745583] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.045 [2024-06-07 16:39:12.745590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.045 qpair failed and we were unable to recover it. 
00:30:46.045 [2024-06-07 16:39:12.745780] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.045 [2024-06-07 16:39:12.745787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.045 qpair failed and we were unable to recover it. 00:30:46.045 [2024-06-07 16:39:12.746116] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.045 [2024-06-07 16:39:12.746123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.045 qpair failed and we were unable to recover it. 00:30:46.045 [2024-06-07 16:39:12.746495] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.045 [2024-06-07 16:39:12.746503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.045 qpair failed and we were unable to recover it. 00:30:46.045 [2024-06-07 16:39:12.746867] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.045 [2024-06-07 16:39:12.746875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.045 qpair failed and we were unable to recover it. 00:30:46.045 [2024-06-07 16:39:12.747266] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.045 [2024-06-07 16:39:12.747273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.045 qpair failed and we were unable to recover it. 
00:30:46.045 [2024-06-07 16:39:12.747627] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.045 [2024-06-07 16:39:12.747635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.045 qpair failed and we were unable to recover it. 00:30:46.045 [2024-06-07 16:39:12.747911] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.045 [2024-06-07 16:39:12.747918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.045 qpair failed and we were unable to recover it. 00:30:46.045 [2024-06-07 16:39:12.748286] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.045 [2024-06-07 16:39:12.748293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.045 qpair failed and we were unable to recover it. 00:30:46.045 [2024-06-07 16:39:12.748652] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.045 [2024-06-07 16:39:12.748660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.045 qpair failed and we were unable to recover it. 00:30:46.045 [2024-06-07 16:39:12.749030] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.045 [2024-06-07 16:39:12.749039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.045 qpair failed and we were unable to recover it. 
00:30:46.045 [2024-06-07 16:39:12.749410] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.045 [2024-06-07 16:39:12.749418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.045 qpair failed and we were unable to recover it. 00:30:46.045 [2024-06-07 16:39:12.749798] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.045 [2024-06-07 16:39:12.749806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.045 qpair failed and we were unable to recover it. 00:30:46.045 [2024-06-07 16:39:12.750199] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.045 [2024-06-07 16:39:12.750206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.045 qpair failed and we were unable to recover it. 00:30:46.045 [2024-06-07 16:39:12.750582] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.045 [2024-06-07 16:39:12.750590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.045 qpair failed and we were unable to recover it. 00:30:46.045 [2024-06-07 16:39:12.750956] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.045 [2024-06-07 16:39:12.750963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.045 qpair failed and we were unable to recover it. 
00:30:46.045 [2024-06-07 16:39:12.751329] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:46.045 [2024-06-07 16:39:12.751337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:46.045 qpair failed and we were unable to recover it.
[... the same three-line sequence — connect() failed with errno = 111 (ECONNREFUSED), sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420, and "qpair failed and we were unable to recover it." — repeats continuously with advancing timestamps through 16:39:12.789836 ...]
00:30:46.048 [2024-06-07 16:39:12.790042] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.048 [2024-06-07 16:39:12.790049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.048 qpair failed and we were unable to recover it. 00:30:46.048 [2024-06-07 16:39:12.790434] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.048 [2024-06-07 16:39:12.790442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.048 qpair failed and we were unable to recover it. 00:30:46.048 [2024-06-07 16:39:12.790810] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.048 [2024-06-07 16:39:12.790818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.048 qpair failed and we were unable to recover it. 00:30:46.048 [2024-06-07 16:39:12.791032] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.048 [2024-06-07 16:39:12.791040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.048 qpair failed and we were unable to recover it. 00:30:46.048 [2024-06-07 16:39:12.791406] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.048 [2024-06-07 16:39:12.791414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.048 qpair failed and we were unable to recover it. 
00:30:46.048 [2024-06-07 16:39:12.791606] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.048 [2024-06-07 16:39:12.791614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.048 qpair failed and we were unable to recover it. 00:30:46.048 [2024-06-07 16:39:12.791957] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.048 [2024-06-07 16:39:12.791965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.048 qpair failed and we were unable to recover it. 00:30:46.048 [2024-06-07 16:39:12.792335] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.049 [2024-06-07 16:39:12.792344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.049 qpair failed and we were unable to recover it. 00:30:46.049 [2024-06-07 16:39:12.792719] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.049 [2024-06-07 16:39:12.792727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.049 qpair failed and we were unable to recover it. 00:30:46.049 [2024-06-07 16:39:12.793095] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.049 [2024-06-07 16:39:12.793103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.049 qpair failed and we were unable to recover it. 
00:30:46.049 [2024-06-07 16:39:12.793473] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.049 [2024-06-07 16:39:12.793480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.049 qpair failed and we were unable to recover it. 00:30:46.049 [2024-06-07 16:39:12.793757] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.049 [2024-06-07 16:39:12.793765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.049 qpair failed and we were unable to recover it. 00:30:46.049 [2024-06-07 16:39:12.794141] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.049 [2024-06-07 16:39:12.794149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.049 qpair failed and we were unable to recover it. 00:30:46.049 [2024-06-07 16:39:12.794514] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.049 [2024-06-07 16:39:12.794523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.049 qpair failed and we were unable to recover it. 00:30:46.049 [2024-06-07 16:39:12.794898] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.049 [2024-06-07 16:39:12.794906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.049 qpair failed and we were unable to recover it. 
00:30:46.049 [2024-06-07 16:39:12.795273] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.049 [2024-06-07 16:39:12.795282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.049 qpair failed and we were unable to recover it. 00:30:46.049 [2024-06-07 16:39:12.795677] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.049 [2024-06-07 16:39:12.795686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.049 qpair failed and we were unable to recover it. 00:30:46.049 [2024-06-07 16:39:12.796052] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.049 [2024-06-07 16:39:12.796059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.049 qpair failed and we were unable to recover it. 00:30:46.049 [2024-06-07 16:39:12.796434] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.049 [2024-06-07 16:39:12.796442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.049 qpair failed and we were unable to recover it. 00:30:46.049 [2024-06-07 16:39:12.796790] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.049 [2024-06-07 16:39:12.796797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.049 qpair failed and we were unable to recover it. 
00:30:46.049 [2024-06-07 16:39:12.797190] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.049 [2024-06-07 16:39:12.797198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.049 qpair failed and we were unable to recover it. 00:30:46.049 [2024-06-07 16:39:12.797566] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.049 [2024-06-07 16:39:12.797573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.049 qpair failed and we were unable to recover it. 00:30:46.049 [2024-06-07 16:39:12.797766] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.049 [2024-06-07 16:39:12.797774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.049 qpair failed and we were unable to recover it. 00:30:46.049 [2024-06-07 16:39:12.798004] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.049 [2024-06-07 16:39:12.798012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.049 qpair failed and we were unable to recover it. 00:30:46.049 [2024-06-07 16:39:12.798246] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.049 [2024-06-07 16:39:12.798253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.049 qpair failed and we were unable to recover it. 
00:30:46.049 [2024-06-07 16:39:12.798611] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.049 [2024-06-07 16:39:12.798618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.049 qpair failed and we were unable to recover it. 00:30:46.049 [2024-06-07 16:39:12.798811] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.049 [2024-06-07 16:39:12.798819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.049 qpair failed and we were unable to recover it. 00:30:46.049 [2024-06-07 16:39:12.799150] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.049 [2024-06-07 16:39:12.799157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.049 qpair failed and we were unable to recover it. 00:30:46.049 [2024-06-07 16:39:12.799546] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.049 [2024-06-07 16:39:12.799554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.049 qpair failed and we were unable to recover it. 00:30:46.049 [2024-06-07 16:39:12.799922] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.049 [2024-06-07 16:39:12.799930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.049 qpair failed and we were unable to recover it. 
00:30:46.049 [2024-06-07 16:39:12.800301] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.049 [2024-06-07 16:39:12.800308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.049 qpair failed and we were unable to recover it. 00:30:46.049 [2024-06-07 16:39:12.800703] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.049 [2024-06-07 16:39:12.800711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.049 qpair failed and we were unable to recover it. 00:30:46.049 [2024-06-07 16:39:12.801103] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.049 [2024-06-07 16:39:12.801111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.049 qpair failed and we were unable to recover it. 00:30:46.049 [2024-06-07 16:39:12.801480] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.049 [2024-06-07 16:39:12.801488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.049 qpair failed and we were unable to recover it. 00:30:46.049 [2024-06-07 16:39:12.801859] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.049 [2024-06-07 16:39:12.801867] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.049 qpair failed and we were unable to recover it. 
00:30:46.049 [2024-06-07 16:39:12.802075] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.049 [2024-06-07 16:39:12.802083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.049 qpair failed and we were unable to recover it. 00:30:46.049 [2024-06-07 16:39:12.802337] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.049 [2024-06-07 16:39:12.802345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.049 qpair failed and we were unable to recover it. 00:30:46.049 [2024-06-07 16:39:12.802710] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.049 [2024-06-07 16:39:12.802718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.049 qpair failed and we were unable to recover it. 00:30:46.049 [2024-06-07 16:39:12.802919] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.049 [2024-06-07 16:39:12.802926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.049 qpair failed and we were unable to recover it. 00:30:46.049 [2024-06-07 16:39:12.803177] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.050 [2024-06-07 16:39:12.803185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.050 qpair failed and we were unable to recover it. 
00:30:46.050 [2024-06-07 16:39:12.803387] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.050 [2024-06-07 16:39:12.803394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.050 qpair failed and we were unable to recover it. 00:30:46.050 [2024-06-07 16:39:12.803748] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.050 [2024-06-07 16:39:12.803755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.050 qpair failed and we were unable to recover it. 00:30:46.050 [2024-06-07 16:39:12.803957] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.050 [2024-06-07 16:39:12.803965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.050 qpair failed and we were unable to recover it. 00:30:46.050 [2024-06-07 16:39:12.804301] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.050 [2024-06-07 16:39:12.804308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.050 qpair failed and we were unable to recover it. 00:30:46.050 [2024-06-07 16:39:12.804736] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.050 [2024-06-07 16:39:12.804745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.050 qpair failed and we were unable to recover it. 
00:30:46.050 [2024-06-07 16:39:12.805103] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.050 [2024-06-07 16:39:12.805111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.050 qpair failed and we were unable to recover it. 00:30:46.050 [2024-06-07 16:39:12.805341] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.050 [2024-06-07 16:39:12.805349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.050 qpair failed and we were unable to recover it. 00:30:46.050 [2024-06-07 16:39:12.805717] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.050 [2024-06-07 16:39:12.805725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.050 qpair failed and we were unable to recover it. 00:30:46.050 [2024-06-07 16:39:12.806115] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.050 [2024-06-07 16:39:12.806122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.050 qpair failed and we were unable to recover it. 00:30:46.050 [2024-06-07 16:39:12.806484] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.050 [2024-06-07 16:39:12.806492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.050 qpair failed and we were unable to recover it. 
00:30:46.050 [2024-06-07 16:39:12.806843] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.050 [2024-06-07 16:39:12.806851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.050 qpair failed and we were unable to recover it. 00:30:46.050 [2024-06-07 16:39:12.807223] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.050 [2024-06-07 16:39:12.807231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.050 qpair failed and we were unable to recover it. 00:30:46.050 [2024-06-07 16:39:12.807617] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.050 [2024-06-07 16:39:12.807625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.050 qpair failed and we were unable to recover it. 00:30:46.050 [2024-06-07 16:39:12.808009] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.050 [2024-06-07 16:39:12.808019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.050 qpair failed and we were unable to recover it. 00:30:46.050 [2024-06-07 16:39:12.808398] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.050 [2024-06-07 16:39:12.808412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.050 qpair failed and we were unable to recover it. 
00:30:46.050 [2024-06-07 16:39:12.808748] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.050 [2024-06-07 16:39:12.808756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.050 qpair failed and we were unable to recover it. 00:30:46.050 [2024-06-07 16:39:12.809144] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.050 [2024-06-07 16:39:12.809152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.050 qpair failed and we were unable to recover it. 00:30:46.050 [2024-06-07 16:39:12.809384] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.050 [2024-06-07 16:39:12.809392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.050 qpair failed and we were unable to recover it. 00:30:46.050 [2024-06-07 16:39:12.809755] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.050 [2024-06-07 16:39:12.809763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.050 qpair failed and we were unable to recover it. 00:30:46.050 [2024-06-07 16:39:12.810132] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.050 [2024-06-07 16:39:12.810140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.050 qpair failed and we were unable to recover it. 
00:30:46.050 [2024-06-07 16:39:12.810598] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.050 [2024-06-07 16:39:12.810627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.050 qpair failed and we were unable to recover it. 00:30:46.050 [2024-06-07 16:39:12.810834] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.050 [2024-06-07 16:39:12.810843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.050 qpair failed and we were unable to recover it. 00:30:46.050 [2024-06-07 16:39:12.811240] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.050 [2024-06-07 16:39:12.811248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.050 qpair failed and we were unable to recover it. 00:30:46.050 [2024-06-07 16:39:12.811677] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.050 [2024-06-07 16:39:12.811685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.050 qpair failed and we were unable to recover it. 00:30:46.050 [2024-06-07 16:39:12.812080] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.050 [2024-06-07 16:39:12.812089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.050 qpair failed and we were unable to recover it. 
00:30:46.050 [2024-06-07 16:39:12.812461] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.050 [2024-06-07 16:39:12.812469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.050 qpair failed and we were unable to recover it. 00:30:46.050 [2024-06-07 16:39:12.812845] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.050 [2024-06-07 16:39:12.812853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.050 qpair failed and we were unable to recover it. 00:30:46.050 [2024-06-07 16:39:12.813056] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.050 [2024-06-07 16:39:12.813065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.050 qpair failed and we were unable to recover it. 00:30:46.050 [2024-06-07 16:39:12.813408] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.050 [2024-06-07 16:39:12.813416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.050 qpair failed and we were unable to recover it. 00:30:46.050 [2024-06-07 16:39:12.813802] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.050 [2024-06-07 16:39:12.813810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.050 qpair failed and we were unable to recover it. 
00:30:46.050 [2024-06-07 16:39:12.814017] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.050 [2024-06-07 16:39:12.814025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.050 qpair failed and we were unable to recover it. 00:30:46.050 [2024-06-07 16:39:12.814276] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.050 [2024-06-07 16:39:12.814284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.050 qpair failed and we were unable to recover it. 00:30:46.050 [2024-06-07 16:39:12.814679] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.050 [2024-06-07 16:39:12.814688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.050 qpair failed and we were unable to recover it. 00:30:46.050 [2024-06-07 16:39:12.814884] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.050 [2024-06-07 16:39:12.814892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.050 qpair failed and we were unable to recover it. 00:30:46.050 [2024-06-07 16:39:12.815257] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.050 [2024-06-07 16:39:12.815265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.050 qpair failed and we were unable to recover it. 
00:30:46.050 [2024-06-07 16:39:12.815638] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.050 [2024-06-07 16:39:12.815646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.050 qpair failed and we were unable to recover it. 00:30:46.050 [2024-06-07 16:39:12.816040] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.050 [2024-06-07 16:39:12.816047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.051 qpair failed and we were unable to recover it. 00:30:46.051 [2024-06-07 16:39:12.816450] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.051 [2024-06-07 16:39:12.816459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.051 qpair failed and we were unable to recover it. 00:30:46.051 [2024-06-07 16:39:12.816842] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.051 [2024-06-07 16:39:12.816851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.051 qpair failed and we were unable to recover it. 00:30:46.051 [2024-06-07 16:39:12.817243] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.051 [2024-06-07 16:39:12.817252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.051 qpair failed and we were unable to recover it. 
00:30:46.051 [2024-06-07 16:39:12.817758] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.051 [2024-06-07 16:39:12.817787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.051 qpair failed and we were unable to recover it. 00:30:46.051 [2024-06-07 16:39:12.818182] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.051 [2024-06-07 16:39:12.818192] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.051 qpair failed and we were unable to recover it. 00:30:46.051 [2024-06-07 16:39:12.818621] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.051 [2024-06-07 16:39:12.818650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.051 qpair failed and we were unable to recover it. 00:30:46.051 [2024-06-07 16:39:12.818871] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.051 [2024-06-07 16:39:12.818880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.051 qpair failed and we were unable to recover it. 00:30:46.051 [2024-06-07 16:39:12.819101] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.051 [2024-06-07 16:39:12.819109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.051 qpair failed and we were unable to recover it. 
00:30:46.051 [2024-06-07 16:39:12.819400] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.051 [2024-06-07 16:39:12.819413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.051 qpair failed and we were unable to recover it. 00:30:46.051 [2024-06-07 16:39:12.819616] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.051 [2024-06-07 16:39:12.819625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.051 qpair failed and we were unable to recover it. 00:30:46.051 [2024-06-07 16:39:12.819982] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.051 [2024-06-07 16:39:12.819990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.051 qpair failed and we were unable to recover it. 00:30:46.051 [2024-06-07 16:39:12.820364] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.051 [2024-06-07 16:39:12.820371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.051 qpair failed and we were unable to recover it. 00:30:46.051 [2024-06-07 16:39:12.820575] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.051 [2024-06-07 16:39:12.820583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.051 qpair failed and we were unable to recover it. 
00:30:46.051 [2024-06-07 16:39:12.820982] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.051 [2024-06-07 16:39:12.820990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.051 qpair failed and we were unable to recover it. 00:30:46.051 [2024-06-07 16:39:12.821355] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.051 [2024-06-07 16:39:12.821363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.051 qpair failed and we were unable to recover it. 00:30:46.051 [2024-06-07 16:39:12.821743] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.051 [2024-06-07 16:39:12.821751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.051 qpair failed and we were unable to recover it. 00:30:46.051 [2024-06-07 16:39:12.822143] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.051 [2024-06-07 16:39:12.822154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.051 qpair failed and we were unable to recover it. 00:30:46.051 [2024-06-07 16:39:12.822530] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.051 [2024-06-07 16:39:12.822539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.051 qpair failed and we were unable to recover it. 
00:30:46.051 [2024-06-07 16:39:12.822929] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.051 [2024-06-07 16:39:12.822937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.051 qpair failed and we were unable to recover it. 00:30:46.051 [2024-06-07 16:39:12.823134] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.051 [2024-06-07 16:39:12.823142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.051 qpair failed and we were unable to recover it. 00:30:46.051 [2024-06-07 16:39:12.823519] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.051 [2024-06-07 16:39:12.823527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.051 qpair failed and we were unable to recover it. 00:30:46.051 [2024-06-07 16:39:12.823746] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.051 [2024-06-07 16:39:12.823756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.051 qpair failed and we were unable to recover it. 00:30:46.051 [2024-06-07 16:39:12.823928] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.051 [2024-06-07 16:39:12.823938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.051 qpair failed and we were unable to recover it. 
00:30:46.051 [2024-06-07 16:39:12.824343] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.051 [2024-06-07 16:39:12.824350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.051 qpair failed and we were unable to recover it. 00:30:46.051 [2024-06-07 16:39:12.824630] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.051 [2024-06-07 16:39:12.824638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.051 qpair failed and we were unable to recover it. 00:30:46.051 [2024-06-07 16:39:12.825011] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.051 [2024-06-07 16:39:12.825019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.051 qpair failed and we were unable to recover it. 00:30:46.051 [2024-06-07 16:39:12.825393] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.051 [2024-06-07 16:39:12.825404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.051 qpair failed and we were unable to recover it. 00:30:46.051 [2024-06-07 16:39:12.825793] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.051 [2024-06-07 16:39:12.825800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.051 qpair failed and we were unable to recover it. 
00:30:46.051 [2024-06-07 16:39:12.826170] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.051 [2024-06-07 16:39:12.826178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.051 qpair failed and we were unable to recover it. 00:30:46.051 [2024-06-07 16:39:12.826553] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.051 [2024-06-07 16:39:12.826561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.051 qpair failed and we were unable to recover it. 00:30:46.051 [2024-06-07 16:39:12.826767] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.051 [2024-06-07 16:39:12.826775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.051 qpair failed and we were unable to recover it. 00:30:46.051 [2024-06-07 16:39:12.827124] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.051 [2024-06-07 16:39:12.827131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.051 qpair failed and we were unable to recover it. 00:30:46.051 [2024-06-07 16:39:12.827365] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.051 [2024-06-07 16:39:12.827373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.051 qpair failed and we were unable to recover it. 
00:30:46.051 [2024-06-07 16:39:12.827749] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.051 [2024-06-07 16:39:12.827757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.051 qpair failed and we were unable to recover it. 00:30:46.051 [2024-06-07 16:39:12.828127] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.051 [2024-06-07 16:39:12.828135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.051 qpair failed and we were unable to recover it. 00:30:46.051 [2024-06-07 16:39:12.828483] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.051 [2024-06-07 16:39:12.828492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.051 qpair failed and we were unable to recover it. 00:30:46.051 [2024-06-07 16:39:12.828868] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.051 [2024-06-07 16:39:12.828875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.051 qpair failed and we were unable to recover it. 00:30:46.051 [2024-06-07 16:39:12.829208] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.052 [2024-06-07 16:39:12.829215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.052 qpair failed and we were unable to recover it. 
00:30:46.052 [2024-06-07 16:39:12.829406] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.052 [2024-06-07 16:39:12.829415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.052 qpair failed and we were unable to recover it. 00:30:46.052 [2024-06-07 16:39:12.829752] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.052 [2024-06-07 16:39:12.829761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.052 qpair failed and we were unable to recover it. 00:30:46.052 [2024-06-07 16:39:12.830129] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.052 [2024-06-07 16:39:12.830136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.052 qpair failed and we were unable to recover it. 00:30:46.052 [2024-06-07 16:39:12.830504] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.052 [2024-06-07 16:39:12.830512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.052 qpair failed and we were unable to recover it. 00:30:46.052 [2024-06-07 16:39:12.830889] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.052 [2024-06-07 16:39:12.830897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.052 qpair failed and we were unable to recover it. 
00:30:46.052 [2024-06-07 16:39:12.831301] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.052 [2024-06-07 16:39:12.831309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.052 qpair failed and we were unable to recover it. 00:30:46.052 [2024-06-07 16:39:12.831678] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.052 [2024-06-07 16:39:12.831686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.052 qpair failed and we were unable to recover it. 00:30:46.052 [2024-06-07 16:39:12.832053] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.052 [2024-06-07 16:39:12.832062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.052 qpair failed and we were unable to recover it. 00:30:46.052 [2024-06-07 16:39:12.832268] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.052 [2024-06-07 16:39:12.832277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.052 qpair failed and we were unable to recover it. 00:30:46.052 [2024-06-07 16:39:12.832658] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.052 [2024-06-07 16:39:12.832667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.052 qpair failed and we were unable to recover it. 
00:30:46.052 [2024-06-07 16:39:12.832737] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.052 [2024-06-07 16:39:12.832744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.052 qpair failed and we were unable to recover it. 00:30:46.052 [2024-06-07 16:39:12.832995] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.052 [2024-06-07 16:39:12.833002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.052 qpair failed and we were unable to recover it. 00:30:46.052 [2024-06-07 16:39:12.833304] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.052 [2024-06-07 16:39:12.833311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.052 qpair failed and we were unable to recover it. 00:30:46.052 [2024-06-07 16:39:12.833585] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.052 [2024-06-07 16:39:12.833593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.052 qpair failed and we were unable to recover it. 00:30:46.052 [2024-06-07 16:39:12.833870] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.052 [2024-06-07 16:39:12.833878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.052 qpair failed and we were unable to recover it. 
00:30:46.052 [2024-06-07 16:39:12.834252] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.052 [2024-06-07 16:39:12.834260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.052 qpair failed and we were unable to recover it. 00:30:46.052 [2024-06-07 16:39:12.834629] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.052 [2024-06-07 16:39:12.834637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.052 qpair failed and we were unable to recover it. 00:30:46.052 [2024-06-07 16:39:12.835003] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.052 [2024-06-07 16:39:12.835011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.052 qpair failed and we were unable to recover it. 00:30:46.052 [2024-06-07 16:39:12.835282] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.052 [2024-06-07 16:39:12.835292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.052 qpair failed and we were unable to recover it. 00:30:46.052 [2024-06-07 16:39:12.835490] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.052 [2024-06-07 16:39:12.835500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.052 qpair failed and we were unable to recover it. 
00:30:46.052 [2024-06-07 16:39:12.835668] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.052 [2024-06-07 16:39:12.835676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.052 qpair failed and we were unable to recover it. 00:30:46.052 [2024-06-07 16:39:12.836016] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.052 [2024-06-07 16:39:12.836025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.052 qpair failed and we were unable to recover it. 00:30:46.052 [2024-06-07 16:39:12.836415] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.052 [2024-06-07 16:39:12.836423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.052 qpair failed and we were unable to recover it. 00:30:46.052 [2024-06-07 16:39:12.836793] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.052 [2024-06-07 16:39:12.836801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.052 qpair failed and we were unable to recover it. 00:30:46.052 [2024-06-07 16:39:12.836859] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.052 [2024-06-07 16:39:12.836865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.052 qpair failed and we were unable to recover it. 
00:30:46.052 [2024-06-07 16:39:12.837182] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.052 [2024-06-07 16:39:12.837190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.052 qpair failed and we were unable to recover it. 00:30:46.052 [2024-06-07 16:39:12.837428] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.052 [2024-06-07 16:39:12.837436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.052 qpair failed and we were unable to recover it. 00:30:46.052 [2024-06-07 16:39:12.837828] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.052 [2024-06-07 16:39:12.837837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.052 qpair failed and we were unable to recover it. 00:30:46.052 [2024-06-07 16:39:12.838202] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.052 [2024-06-07 16:39:12.838211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.052 qpair failed and we were unable to recover it. 00:30:46.052 [2024-06-07 16:39:12.838580] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.052 [2024-06-07 16:39:12.838588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.052 qpair failed and we were unable to recover it. 
00:30:46.052 [2024-06-07 16:39:12.838982] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.052 [2024-06-07 16:39:12.838991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.052 qpair failed and we were unable to recover it. 00:30:46.052 [2024-06-07 16:39:12.839339] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.052 [2024-06-07 16:39:12.839348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.052 qpair failed and we were unable to recover it. 00:30:46.052 [2024-06-07 16:39:12.839719] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.052 [2024-06-07 16:39:12.839728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.052 qpair failed and we were unable to recover it. 00:30:46.052 [2024-06-07 16:39:12.840095] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.052 [2024-06-07 16:39:12.840103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.052 qpair failed and we were unable to recover it. 00:30:46.052 [2024-06-07 16:39:12.840478] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.052 [2024-06-07 16:39:12.840486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.052 qpair failed and we were unable to recover it. 
00:30:46.052 [2024-06-07 16:39:12.840894] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.052 [2024-06-07 16:39:12.840902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.052 qpair failed and we were unable to recover it. 00:30:46.052 [2024-06-07 16:39:12.841273] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.052 [2024-06-07 16:39:12.841281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.053 qpair failed and we were unable to recover it. 00:30:46.053 [2024-06-07 16:39:12.841673] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.053 [2024-06-07 16:39:12.841681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.053 qpair failed and we were unable to recover it. 00:30:46.053 [2024-06-07 16:39:12.841914] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.053 [2024-06-07 16:39:12.841922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.053 qpair failed and we were unable to recover it. 00:30:46.053 [2024-06-07 16:39:12.842286] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.053 [2024-06-07 16:39:12.842295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.053 qpair failed and we were unable to recover it. 
00:30:46.053 [2024-06-07 16:39:12.842684] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.053 [2024-06-07 16:39:12.842693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.053 qpair failed and we were unable to recover it. 00:30:46.053 [2024-06-07 16:39:12.843016] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.053 [2024-06-07 16:39:12.843024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.053 qpair failed and we were unable to recover it. 00:30:46.053 [2024-06-07 16:39:12.843278] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.053 [2024-06-07 16:39:12.843286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.053 qpair failed and we were unable to recover it. 00:30:46.053 [2024-06-07 16:39:12.843646] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.053 [2024-06-07 16:39:12.843655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.053 qpair failed and we were unable to recover it. 00:30:46.053 [2024-06-07 16:39:12.844044] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.053 [2024-06-07 16:39:12.844052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.053 qpair failed and we were unable to recover it. 
00:30:46.053 [2024-06-07 16:39:12.844420] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.053 [2024-06-07 16:39:12.844429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.053 qpair failed and we were unable to recover it. 00:30:46.053 [2024-06-07 16:39:12.844804] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.053 [2024-06-07 16:39:12.844812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.053 qpair failed and we were unable to recover it. 00:30:46.053 [2024-06-07 16:39:12.845009] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.053 [2024-06-07 16:39:12.845017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.053 qpair failed and we were unable to recover it. 00:30:46.053 [2024-06-07 16:39:12.845308] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.053 [2024-06-07 16:39:12.845317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.053 qpair failed and we were unable to recover it. 00:30:46.053 [2024-06-07 16:39:12.845536] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.053 [2024-06-07 16:39:12.845544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.053 qpair failed and we were unable to recover it. 
00:30:46.053 [2024-06-07 16:39:12.845912] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.053 [2024-06-07 16:39:12.845920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.053 qpair failed and we were unable to recover it. 00:30:46.053 [2024-06-07 16:39:12.846309] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.053 [2024-06-07 16:39:12.846317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.053 qpair failed and we were unable to recover it. 00:30:46.053 [2024-06-07 16:39:12.846709] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.053 [2024-06-07 16:39:12.846717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.053 qpair failed and we were unable to recover it. 00:30:46.053 [2024-06-07 16:39:12.847086] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.053 [2024-06-07 16:39:12.847094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.053 qpair failed and we were unable to recover it. 00:30:46.053 [2024-06-07 16:39:12.847459] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.053 [2024-06-07 16:39:12.847468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.053 qpair failed and we were unable to recover it. 
00:30:46.053 [2024-06-07 16:39:12.847828] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.053 [2024-06-07 16:39:12.847836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.053 qpair failed and we were unable to recover it. 
00:30:46.323 [same connect()/qpair-failure message repeated verbatim (errno = 111, tqpair=0x7f61d8000b90, addr=10.0.0.2, port=4420) through 2024-06-07 16:39:12.886260; only timestamps differ — repeats omitted]
00:30:46.323 [2024-06-07 16:39:12.886634] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.323 [2024-06-07 16:39:12.886642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.323 qpair failed and we were unable to recover it. 00:30:46.323 [2024-06-07 16:39:12.886961] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.323 [2024-06-07 16:39:12.886969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.323 qpair failed and we were unable to recover it. 00:30:46.323 [2024-06-07 16:39:12.887335] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.323 [2024-06-07 16:39:12.887343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.323 qpair failed and we were unable to recover it. 00:30:46.323 [2024-06-07 16:39:12.887714] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.323 [2024-06-07 16:39:12.887722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.323 qpair failed and we were unable to recover it. 00:30:46.323 [2024-06-07 16:39:12.888092] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.323 [2024-06-07 16:39:12.888100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.323 qpair failed and we were unable to recover it. 
00:30:46.323 [2024-06-07 16:39:12.888458] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.323 [2024-06-07 16:39:12.888467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.323 qpair failed and we were unable to recover it. 00:30:46.323 [2024-06-07 16:39:12.888836] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.323 [2024-06-07 16:39:12.888843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.323 qpair failed and we were unable to recover it. 00:30:46.323 [2024-06-07 16:39:12.889207] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.324 [2024-06-07 16:39:12.889215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.324 qpair failed and we were unable to recover it. 00:30:46.324 [2024-06-07 16:39:12.889447] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.324 [2024-06-07 16:39:12.889457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.324 qpair failed and we were unable to recover it. 00:30:46.324 [2024-06-07 16:39:12.889829] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.324 [2024-06-07 16:39:12.889836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.324 qpair failed and we were unable to recover it. 
00:30:46.324 [2024-06-07 16:39:12.890106] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.324 [2024-06-07 16:39:12.890115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.324 qpair failed and we were unable to recover it. 00:30:46.324 [2024-06-07 16:39:12.890493] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.324 [2024-06-07 16:39:12.890501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.324 qpair failed and we were unable to recover it. 00:30:46.324 [2024-06-07 16:39:12.890890] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.324 [2024-06-07 16:39:12.890898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.324 qpair failed and we were unable to recover it. 00:30:46.324 [2024-06-07 16:39:12.891273] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.324 [2024-06-07 16:39:12.891280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.324 qpair failed and we were unable to recover it. 00:30:46.324 [2024-06-07 16:39:12.891542] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.324 [2024-06-07 16:39:12.891551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.324 qpair failed and we were unable to recover it. 
00:30:46.324 [2024-06-07 16:39:12.891725] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.324 [2024-06-07 16:39:12.891735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.324 qpair failed and we were unable to recover it. 00:30:46.324 [2024-06-07 16:39:12.892115] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.324 [2024-06-07 16:39:12.892123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.324 qpair failed and we were unable to recover it. 00:30:46.324 [2024-06-07 16:39:12.892198] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.324 [2024-06-07 16:39:12.892205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.324 qpair failed and we were unable to recover it. 00:30:46.324 [2024-06-07 16:39:12.892565] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.324 [2024-06-07 16:39:12.892573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.324 qpair failed and we were unable to recover it. 00:30:46.324 [2024-06-07 16:39:12.892936] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.324 [2024-06-07 16:39:12.892944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.324 qpair failed and we were unable to recover it. 
00:30:46.324 [2024-06-07 16:39:12.893390] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.324 [2024-06-07 16:39:12.893398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.324 qpair failed and we were unable to recover it. 00:30:46.324 [2024-06-07 16:39:12.893757] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.324 [2024-06-07 16:39:12.893765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.324 qpair failed and we were unable to recover it. 00:30:46.324 [2024-06-07 16:39:12.893997] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.324 [2024-06-07 16:39:12.894005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.324 qpair failed and we were unable to recover it. 00:30:46.324 [2024-06-07 16:39:12.894445] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.324 [2024-06-07 16:39:12.894453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.324 qpair failed and we were unable to recover it. 00:30:46.324 [2024-06-07 16:39:12.894813] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.324 [2024-06-07 16:39:12.894821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.324 qpair failed and we were unable to recover it. 
00:30:46.324 [2024-06-07 16:39:12.895177] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.324 [2024-06-07 16:39:12.895185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.324 qpair failed and we were unable to recover it. 00:30:46.324 [2024-06-07 16:39:12.895555] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.324 [2024-06-07 16:39:12.895563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.324 qpair failed and we were unable to recover it. 00:30:46.324 [2024-06-07 16:39:12.895931] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.324 [2024-06-07 16:39:12.895939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.324 qpair failed and we were unable to recover it. 00:30:46.324 [2024-06-07 16:39:12.896307] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.324 [2024-06-07 16:39:12.896315] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.324 qpair failed and we were unable to recover it. 00:30:46.324 [2024-06-07 16:39:12.896666] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.324 [2024-06-07 16:39:12.896674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.324 qpair failed and we were unable to recover it. 
00:30:46.324 [2024-06-07 16:39:12.897064] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.324 [2024-06-07 16:39:12.897073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.324 qpair failed and we were unable to recover it. 00:30:46.324 [2024-06-07 16:39:12.897433] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.324 [2024-06-07 16:39:12.897442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.324 qpair failed and we were unable to recover it. 00:30:46.324 [2024-06-07 16:39:12.897786] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.324 [2024-06-07 16:39:12.897794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.324 qpair failed and we were unable to recover it. 00:30:46.324 [2024-06-07 16:39:12.897864] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.324 [2024-06-07 16:39:12.897870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.324 qpair failed and we were unable to recover it. 00:30:46.324 [2024-06-07 16:39:12.898124] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.324 [2024-06-07 16:39:12.898132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.324 qpair failed and we were unable to recover it. 
00:30:46.324 [2024-06-07 16:39:12.898504] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.324 [2024-06-07 16:39:12.898513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.324 qpair failed and we were unable to recover it. 00:30:46.324 [2024-06-07 16:39:12.898907] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.324 [2024-06-07 16:39:12.898915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.324 qpair failed and we were unable to recover it. 00:30:46.324 [2024-06-07 16:39:12.899113] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.324 [2024-06-07 16:39:12.899121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.324 qpair failed and we were unable to recover it. 00:30:46.324 [2024-06-07 16:39:12.899484] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.324 [2024-06-07 16:39:12.899493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.324 qpair failed and we were unable to recover it. 00:30:46.324 [2024-06-07 16:39:12.899867] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.324 [2024-06-07 16:39:12.899875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.324 qpair failed and we were unable to recover it. 
00:30:46.324 [2024-06-07 16:39:12.900281] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.324 [2024-06-07 16:39:12.900289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.324 qpair failed and we were unable to recover it. 00:30:46.324 [2024-06-07 16:39:12.900657] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.324 [2024-06-07 16:39:12.900666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.324 qpair failed and we were unable to recover it. 00:30:46.324 [2024-06-07 16:39:12.900861] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.324 [2024-06-07 16:39:12.900869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.324 qpair failed and we were unable to recover it. 00:30:46.324 [2024-06-07 16:39:12.901200] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.324 [2024-06-07 16:39:12.901208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.324 qpair failed and we were unable to recover it. 00:30:46.324 [2024-06-07 16:39:12.901580] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.324 [2024-06-07 16:39:12.901588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.324 qpair failed and we were unable to recover it. 
00:30:46.324 [2024-06-07 16:39:12.901964] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.324 [2024-06-07 16:39:12.901972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.324 qpair failed and we were unable to recover it. 00:30:46.324 [2024-06-07 16:39:12.902349] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.324 [2024-06-07 16:39:12.902357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.324 qpair failed and we were unable to recover it. 00:30:46.324 [2024-06-07 16:39:12.902562] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.324 [2024-06-07 16:39:12.902570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.324 qpair failed and we were unable to recover it. 00:30:46.324 [2024-06-07 16:39:12.902776] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.324 [2024-06-07 16:39:12.902786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.324 qpair failed and we were unable to recover it. 00:30:46.324 [2024-06-07 16:39:12.903153] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.324 [2024-06-07 16:39:12.903162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.324 qpair failed and we were unable to recover it. 
00:30:46.324 [2024-06-07 16:39:12.903438] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.324 [2024-06-07 16:39:12.903447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.324 qpair failed and we were unable to recover it. 00:30:46.324 [2024-06-07 16:39:12.903516] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.325 [2024-06-07 16:39:12.903524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.325 qpair failed and we were unable to recover it. 00:30:46.325 [2024-06-07 16:39:12.903764] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.325 [2024-06-07 16:39:12.903772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.325 qpair failed and we were unable to recover it. 00:30:46.325 [2024-06-07 16:39:12.903981] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.325 [2024-06-07 16:39:12.903988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.325 qpair failed and we were unable to recover it. 00:30:46.325 [2024-06-07 16:39:12.904209] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.325 [2024-06-07 16:39:12.904216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.325 qpair failed and we were unable to recover it. 
00:30:46.325 [2024-06-07 16:39:12.904587] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.325 [2024-06-07 16:39:12.904595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.325 qpair failed and we were unable to recover it. 00:30:46.325 [2024-06-07 16:39:12.904956] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.325 [2024-06-07 16:39:12.904964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.325 qpair failed and we were unable to recover it. 00:30:46.325 [2024-06-07 16:39:12.905339] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.325 [2024-06-07 16:39:12.905347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.325 qpair failed and we were unable to recover it. 00:30:46.325 [2024-06-07 16:39:12.905707] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.325 [2024-06-07 16:39:12.905715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.325 qpair failed and we were unable to recover it. 00:30:46.325 [2024-06-07 16:39:12.905986] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.325 [2024-06-07 16:39:12.905995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.325 qpair failed and we were unable to recover it. 
00:30:46.325 [2024-06-07 16:39:12.906235] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.325 [2024-06-07 16:39:12.906243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.325 qpair failed and we were unable to recover it. 00:30:46.325 [2024-06-07 16:39:12.906433] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.325 [2024-06-07 16:39:12.906441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.325 qpair failed and we were unable to recover it. 00:30:46.325 [2024-06-07 16:39:12.906797] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.325 [2024-06-07 16:39:12.906805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.325 qpair failed and we were unable to recover it. 00:30:46.325 [2024-06-07 16:39:12.907041] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.325 [2024-06-07 16:39:12.907050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.325 qpair failed and we were unable to recover it. 00:30:46.325 [2024-06-07 16:39:12.907464] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.325 [2024-06-07 16:39:12.907474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.325 qpair failed and we were unable to recover it. 
00:30:46.325 [2024-06-07 16:39:12.907864] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.325 [2024-06-07 16:39:12.907872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.325 qpair failed and we were unable to recover it. 00:30:46.325 [2024-06-07 16:39:12.908264] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.325 [2024-06-07 16:39:12.908272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.325 qpair failed and we were unable to recover it. 00:30:46.325 [2024-06-07 16:39:12.908488] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.325 [2024-06-07 16:39:12.908497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.325 qpair failed and we were unable to recover it. 00:30:46.325 [2024-06-07 16:39:12.908757] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.325 [2024-06-07 16:39:12.908765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.325 qpair failed and we were unable to recover it. 00:30:46.325 [2024-06-07 16:39:12.909129] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.325 [2024-06-07 16:39:12.909137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.325 qpair failed and we were unable to recover it. 
00:30:46.325 [2024-06-07 16:39:12.909417] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.325 [2024-06-07 16:39:12.909426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.325 qpair failed and we were unable to recover it. 00:30:46.325 [2024-06-07 16:39:12.909687] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.325 [2024-06-07 16:39:12.909695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.325 qpair failed and we were unable to recover it. 00:30:46.325 [2024-06-07 16:39:12.909967] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.325 [2024-06-07 16:39:12.909975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.325 qpair failed and we were unable to recover it. 00:30:46.325 [2024-06-07 16:39:12.910348] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.325 [2024-06-07 16:39:12.910357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.325 qpair failed and we were unable to recover it. 00:30:46.325 [2024-06-07 16:39:12.910740] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.325 [2024-06-07 16:39:12.910748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.325 qpair failed and we were unable to recover it. 
00:30:46.325 [2024-06-07 16:39:12.911123] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.325 [2024-06-07 16:39:12.911133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.325 qpair failed and we were unable to recover it. 00:30:46.325 [2024-06-07 16:39:12.911343] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.325 [2024-06-07 16:39:12.911351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.325 qpair failed and we were unable to recover it. 00:30:46.325 [2024-06-07 16:39:12.911601] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.325 [2024-06-07 16:39:12.911610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.325 qpair failed and we were unable to recover it. 00:30:46.325 [2024-06-07 16:39:12.912010] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.325 [2024-06-07 16:39:12.912018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.325 qpair failed and we were unable to recover it. 00:30:46.325 [2024-06-07 16:39:12.912407] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.325 [2024-06-07 16:39:12.912416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.325 qpair failed and we were unable to recover it. 
00:30:46.328 [... preceding posix_sock_create/nvme_tcp_qpair_connect_sock error pair repeated ~110 more times between 2024-06-07 16:39:12.912685 and 16:39:12.950271, all with errno = 111, tqpair=0x7f61d8000b90, addr=10.0.0.2, port=4420; verbatim duplicates elided ...]
00:30:46.328 [2024-06-07 16:39:12.950351] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.328 [2024-06-07 16:39:12.950358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.328 qpair failed and we were unable to recover it. 00:30:46.328 [2024-06-07 16:39:12.950759] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.328 [2024-06-07 16:39:12.950767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.328 qpair failed and we were unable to recover it. 00:30:46.328 [2024-06-07 16:39:12.951175] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.328 [2024-06-07 16:39:12.951183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.328 qpair failed and we were unable to recover it. 00:30:46.328 [2024-06-07 16:39:12.951332] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.328 [2024-06-07 16:39:12.951340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.328 qpair failed and we were unable to recover it. 00:30:46.328 [2024-06-07 16:39:12.951673] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.328 [2024-06-07 16:39:12.951681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.328 qpair failed and we were unable to recover it. 
00:30:46.328 [2024-06-07 16:39:12.952041] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.328 [2024-06-07 16:39:12.952051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.328 qpair failed and we were unable to recover it. 00:30:46.328 [2024-06-07 16:39:12.952247] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.328 [2024-06-07 16:39:12.952255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.328 qpair failed and we were unable to recover it. 00:30:46.328 [2024-06-07 16:39:12.952587] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.328 [2024-06-07 16:39:12.952595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.328 qpair failed and we were unable to recover it. 00:30:46.328 [2024-06-07 16:39:12.952977] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.328 [2024-06-07 16:39:12.952984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.328 qpair failed and we were unable to recover it. 00:30:46.328 [2024-06-07 16:39:12.953354] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.328 [2024-06-07 16:39:12.953362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.328 qpair failed and we were unable to recover it. 
00:30:46.328 [2024-06-07 16:39:12.953732] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.328 [2024-06-07 16:39:12.953739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.328 qpair failed and we were unable to recover it. 00:30:46.328 [2024-06-07 16:39:12.954109] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.328 [2024-06-07 16:39:12.954117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.328 qpair failed and we were unable to recover it. 00:30:46.328 [2024-06-07 16:39:12.954499] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.328 [2024-06-07 16:39:12.954507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.328 qpair failed and we were unable to recover it. 00:30:46.328 [2024-06-07 16:39:12.954885] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.328 [2024-06-07 16:39:12.954893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.328 qpair failed and we were unable to recover it. 00:30:46.328 [2024-06-07 16:39:12.955286] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.328 [2024-06-07 16:39:12.955295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.328 qpair failed and we were unable to recover it. 
00:30:46.328 [2024-06-07 16:39:12.955684] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.328 [2024-06-07 16:39:12.955693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.328 qpair failed and we were unable to recover it. 00:30:46.328 [2024-06-07 16:39:12.956086] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.328 [2024-06-07 16:39:12.956094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.328 qpair failed and we were unable to recover it. 00:30:46.328 [2024-06-07 16:39:12.956466] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.328 [2024-06-07 16:39:12.956474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.328 qpair failed and we were unable to recover it. 00:30:46.328 [2024-06-07 16:39:12.956860] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.328 [2024-06-07 16:39:12.956868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.328 qpair failed and we were unable to recover it. 00:30:46.328 [2024-06-07 16:39:12.957253] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.328 [2024-06-07 16:39:12.957261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.328 qpair failed and we were unable to recover it. 
00:30:46.328 [2024-06-07 16:39:12.957646] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.328 [2024-06-07 16:39:12.957654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.328 qpair failed and we were unable to recover it. 00:30:46.328 [2024-06-07 16:39:12.957819] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.328 [2024-06-07 16:39:12.957828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.328 qpair failed and we were unable to recover it. 00:30:46.328 [2024-06-07 16:39:12.958031] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.328 [2024-06-07 16:39:12.958039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.328 qpair failed and we were unable to recover it. 00:30:46.328 [2024-06-07 16:39:12.958229] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.328 [2024-06-07 16:39:12.958236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.328 qpair failed and we were unable to recover it. 00:30:46.328 [2024-06-07 16:39:12.958655] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.328 [2024-06-07 16:39:12.958663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.328 qpair failed and we were unable to recover it. 
00:30:46.328 [2024-06-07 16:39:12.958901] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.328 [2024-06-07 16:39:12.958909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.328 qpair failed and we were unable to recover it. 00:30:46.328 [2024-06-07 16:39:12.959038] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.328 [2024-06-07 16:39:12.959045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.328 qpair failed and we were unable to recover it. 00:30:46.328 [2024-06-07 16:39:12.959219] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.328 [2024-06-07 16:39:12.959227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.328 qpair failed and we were unable to recover it. 00:30:46.328 [2024-06-07 16:39:12.959548] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.328 [2024-06-07 16:39:12.959556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.328 qpair failed and we were unable to recover it. 00:30:46.328 [2024-06-07 16:39:12.959959] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.329 [2024-06-07 16:39:12.959967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.329 qpair failed and we were unable to recover it. 
00:30:46.329 [2024-06-07 16:39:12.960344] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.329 [2024-06-07 16:39:12.960352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.329 qpair failed and we were unable to recover it. 00:30:46.329 [2024-06-07 16:39:12.960733] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.329 [2024-06-07 16:39:12.960741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.329 qpair failed and we were unable to recover it. 00:30:46.329 [2024-06-07 16:39:12.961118] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.329 [2024-06-07 16:39:12.961128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.329 qpair failed and we were unable to recover it. 00:30:46.329 [2024-06-07 16:39:12.961524] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.329 [2024-06-07 16:39:12.961532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.329 qpair failed and we were unable to recover it. 00:30:46.329 [2024-06-07 16:39:12.961906] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.329 [2024-06-07 16:39:12.961914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.329 qpair failed and we were unable to recover it. 
00:30:46.329 [2024-06-07 16:39:12.962283] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.329 [2024-06-07 16:39:12.962292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.329 qpair failed and we were unable to recover it. 00:30:46.329 [2024-06-07 16:39:12.962662] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.329 [2024-06-07 16:39:12.962671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.329 qpair failed and we were unable to recover it. 00:30:46.329 [2024-06-07 16:39:12.962983] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.329 [2024-06-07 16:39:12.962990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.329 qpair failed and we were unable to recover it. 00:30:46.329 [2024-06-07 16:39:12.963249] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.329 [2024-06-07 16:39:12.963257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.329 qpair failed and we were unable to recover it. 00:30:46.329 [2024-06-07 16:39:12.963506] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.329 [2024-06-07 16:39:12.963514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.329 qpair failed and we were unable to recover it. 
00:30:46.329 [2024-06-07 16:39:12.963893] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.329 [2024-06-07 16:39:12.963900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.329 qpair failed and we were unable to recover it. 00:30:46.329 [2024-06-07 16:39:12.964102] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.329 [2024-06-07 16:39:12.964111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.329 qpair failed and we were unable to recover it. 00:30:46.329 [2024-06-07 16:39:12.964467] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.329 [2024-06-07 16:39:12.964475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.329 qpair failed and we were unable to recover it. 00:30:46.329 [2024-06-07 16:39:12.964808] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.329 [2024-06-07 16:39:12.964816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.329 qpair failed and we were unable to recover it. 00:30:46.329 [2024-06-07 16:39:12.965192] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.329 [2024-06-07 16:39:12.965201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.329 qpair failed and we were unable to recover it. 
00:30:46.329 [2024-06-07 16:39:12.965601] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.329 [2024-06-07 16:39:12.965612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.329 qpair failed and we were unable to recover it. 00:30:46.329 [2024-06-07 16:39:12.965988] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.329 [2024-06-07 16:39:12.965996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.329 qpair failed and we were unable to recover it. 00:30:46.329 [2024-06-07 16:39:12.966394] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.329 [2024-06-07 16:39:12.966404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.329 qpair failed and we were unable to recover it. 00:30:46.329 [2024-06-07 16:39:12.966803] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.329 [2024-06-07 16:39:12.966811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.329 qpair failed and we were unable to recover it. 00:30:46.329 [2024-06-07 16:39:12.967173] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.329 [2024-06-07 16:39:12.967180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.329 qpair failed and we were unable to recover it. 
00:30:46.329 [2024-06-07 16:39:12.967574] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.329 [2024-06-07 16:39:12.967582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.329 qpair failed and we were unable to recover it. 00:30:46.329 [2024-06-07 16:39:12.967818] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.329 [2024-06-07 16:39:12.967826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.329 qpair failed and we were unable to recover it. 00:30:46.329 [2024-06-07 16:39:12.968031] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.329 [2024-06-07 16:39:12.968039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.329 qpair failed and we were unable to recover it. 00:30:46.329 [2024-06-07 16:39:12.968413] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.329 [2024-06-07 16:39:12.968422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.329 qpair failed and we were unable to recover it. 00:30:46.329 [2024-06-07 16:39:12.968785] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.329 [2024-06-07 16:39:12.968793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.329 qpair failed and we were unable to recover it. 
00:30:46.329 [2024-06-07 16:39:12.969160] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.329 [2024-06-07 16:39:12.969168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.329 qpair failed and we were unable to recover it. 00:30:46.329 [2024-06-07 16:39:12.969537] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.329 [2024-06-07 16:39:12.969545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.329 qpair failed and we were unable to recover it. 00:30:46.329 [2024-06-07 16:39:12.969935] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.329 [2024-06-07 16:39:12.969943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.329 qpair failed and we were unable to recover it. 00:30:46.329 [2024-06-07 16:39:12.970407] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.329 [2024-06-07 16:39:12.970414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.329 qpair failed and we were unable to recover it. 00:30:46.329 [2024-06-07 16:39:12.970597] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.329 [2024-06-07 16:39:12.970606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.329 qpair failed and we were unable to recover it. 
00:30:46.329 [2024-06-07 16:39:12.970791] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.329 [2024-06-07 16:39:12.970799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.329 qpair failed and we were unable to recover it. 00:30:46.329 [2024-06-07 16:39:12.971151] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.329 [2024-06-07 16:39:12.971159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.329 qpair failed and we were unable to recover it. 00:30:46.329 [2024-06-07 16:39:12.971536] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.329 [2024-06-07 16:39:12.971543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.329 qpair failed and we were unable to recover it. 00:30:46.329 [2024-06-07 16:39:12.971975] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.329 [2024-06-07 16:39:12.971983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.329 qpair failed and we were unable to recover it. 00:30:46.329 [2024-06-07 16:39:12.972353] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.329 [2024-06-07 16:39:12.972360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.329 qpair failed and we were unable to recover it. 
00:30:46.329 [2024-06-07 16:39:12.972760] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.329 [2024-06-07 16:39:12.972768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.329 qpair failed and we were unable to recover it. 00:30:46.329 [2024-06-07 16:39:12.973136] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.329 [2024-06-07 16:39:12.973143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.329 qpair failed and we were unable to recover it. 00:30:46.329 [2024-06-07 16:39:12.973511] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.329 [2024-06-07 16:39:12.973519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.329 qpair failed and we were unable to recover it. 00:30:46.329 [2024-06-07 16:39:12.973906] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.329 [2024-06-07 16:39:12.973914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.329 qpair failed and we were unable to recover it. 00:30:46.329 [2024-06-07 16:39:12.974226] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.329 [2024-06-07 16:39:12.974233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.329 qpair failed and we were unable to recover it. 
00:30:46.329 [2024-06-07 16:39:12.974432] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.329 [2024-06-07 16:39:12.974440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.329 qpair failed and we were unable to recover it. 00:30:46.329 [2024-06-07 16:39:12.974808] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.329 [2024-06-07 16:39:12.974816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.329 qpair failed and we were unable to recover it. 00:30:46.330 [2024-06-07 16:39:12.975139] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.330 [2024-06-07 16:39:12.975146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.330 qpair failed and we were unable to recover it. 00:30:46.330 [2024-06-07 16:39:12.975517] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.330 [2024-06-07 16:39:12.975525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.330 qpair failed and we were unable to recover it. 00:30:46.330 [2024-06-07 16:39:12.975904] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.330 [2024-06-07 16:39:12.975911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.330 qpair failed and we were unable to recover it. 
00:30:46.330 [2024-06-07 16:39:12.976308] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:46.330 [2024-06-07 16:39:12.976316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:46.330 qpair failed and we were unable to recover it.
00:30:46.330 [... the same connect() failed / sock connection error / "qpair failed and we were unable to recover it." sequence for tqpair=0x7f61d8000b90 repeats through 16:39:12.982 ...]
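errno = 111 here is ECONNREFUSED on Linux: nothing is listening on 10.0.0.2:4420 while the target is down, so each connect() attempt is refused. A minimal, self-contained sketch that reproduces the same errno against a local port with no listener (the port-reservation trick and addresses are illustrative, not part of the test above):

```python
import errno
import socket

# Reserve an ephemeral port, then close it so nothing is listening there.
probe = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
probe.bind(("127.0.0.1", 0))
port = probe.getsockname()[1]
probe.close()

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
err = None
try:
    sock.connect(("127.0.0.1", port))
except OSError as exc:
    # Same failure mode the SPDK log reports: connect() failed, errno = 111
    err = exc.errno
finally:
    sock.close()

print(err == errno.ECONNREFUSED)  # expected True on Linux, where ECONNREFUSED == 111
```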
00:30:46.330 [... connect() failed / sock connection error repeats for tqpair=0x7f61d8000b90 continue, interleaved with shell trace output as the test script resumes: ...]
00:30:46.330 16:39:12 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@859 -- # (( i == 0 ))
00:30:46.330 16:39:12 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@863 -- # return 0
00:30:46.330 16:39:12 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:30:46.330 16:39:12 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@729 -- # xtrace_disable
00:30:46.330 16:39:12 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:30:46.331 [... connect() failed, errno = 111 / sock connection error of tqpair=0x7f61d8000b90 / "qpair failed and we were unable to recover it." repeats through 16:39:12.997 ...]
00:30:46.331 [... further connect() failed / sock connection error repeats for tqpair=0x7f61d8000b90 ...]
00:30:46.331 [2024-06-07 16:39:12.997731] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x192ce30 is same with the state(5) to be set
00:30:46.331 Read completed with error (sct=0, sc=8)
00:30:46.331 starting I/O failed
00:30:46.331 [... further Read/Write completions with error (sct=0, sc=8), each followed by "starting I/O failed" ...]
00:30:46.332 [2024-06-07 16:39:12.998497] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:46.332 [2024-06-07 16:39:12.999070] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:46.332 [2024-06-07 16:39:12.999111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f270 with addr=10.0.0.2, port=4420
00:30:46.332 qpair failed and we were unable to recover it.
00:30:46.332 [... connect() failed / sock connection error repeats for tqpair=0x7f61d8000b90 resume ...]
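The burst of "Read/Write completed with error (sct=0, sc=8)" lines is every outstanding command on qpair 2 completing with a generic status type (sct=0) and status code 8 when the transport drops; in SPDK's status decoding sc=8 under the generic type is an aborted-command status, though the exact symbolic name is an assumption here, not stated by the log. A quick way to tally such bursts from a captured log (the inlined sample lines are illustrative):

```python
import re
from collections import Counter

# A small sample in the same shape as the log's completion burst.
LOG = """\
Read completed with error (sct=0, sc=8)
starting I/O failed
Write completed with error (sct=0, sc=8)
starting I/O failed
Read completed with error (sct=0, sc=8)
starting I/O failed
"""

# Count completions per opcode and per (sct, sc) status pair.
pattern = re.compile(r"(Read|Write) completed with error \(sct=(\d+), sc=(\d+)\)")
counts = Counter(m.group(1) for m in pattern.finditer(LOG))
statuses = Counter((m.group(2), m.group(3)) for m in pattern.finditer(LOG))
print(counts["Read"], counts["Write"], statuses[("0", "8")])  # prints: 2 1 3
```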
00:30:46.332 [2024-06-07 16:39:13.001409] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.332 [2024-06-07 16:39:13.001416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.332 qpair failed and we were unable to recover it. 00:30:46.332 [2024-06-07 16:39:13.001773] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.332 [2024-06-07 16:39:13.001780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.332 qpair failed and we were unable to recover it. 00:30:46.332 [2024-06-07 16:39:13.002126] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.332 [2024-06-07 16:39:13.002135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.332 qpair failed and we were unable to recover it. 00:30:46.332 [2024-06-07 16:39:13.002518] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.332 [2024-06-07 16:39:13.002525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.332 qpair failed and we were unable to recover it. 00:30:46.332 [2024-06-07 16:39:13.002894] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.332 [2024-06-07 16:39:13.002901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.332 qpair failed and we were unable to recover it. 
00:30:46.332 [2024-06-07 16:39:13.003271] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.332 [2024-06-07 16:39:13.003279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.332 qpair failed and we were unable to recover it. 00:30:46.332 [2024-06-07 16:39:13.003558] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.332 [2024-06-07 16:39:13.003565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.332 qpair failed and we were unable to recover it. 00:30:46.332 [2024-06-07 16:39:13.003968] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.332 [2024-06-07 16:39:13.003975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.332 qpair failed and we were unable to recover it. 00:30:46.332 [2024-06-07 16:39:13.004154] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.332 [2024-06-07 16:39:13.004161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.332 qpair failed and we were unable to recover it. 00:30:46.332 [2024-06-07 16:39:13.004489] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.332 [2024-06-07 16:39:13.004498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.332 qpair failed and we were unable to recover it. 
00:30:46.332 [2024-06-07 16:39:13.004868] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.332 [2024-06-07 16:39:13.004875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.332 qpair failed and we were unable to recover it. 00:30:46.332 [2024-06-07 16:39:13.005221] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.332 [2024-06-07 16:39:13.005229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.332 qpair failed and we were unable to recover it. 00:30:46.332 [2024-06-07 16:39:13.005605] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.332 [2024-06-07 16:39:13.005612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.332 qpair failed and we were unable to recover it. 00:30:46.332 [2024-06-07 16:39:13.005959] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.332 [2024-06-07 16:39:13.005967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.332 qpair failed and we were unable to recover it. 00:30:46.332 [2024-06-07 16:39:13.006333] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.332 [2024-06-07 16:39:13.006340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.332 qpair failed and we were unable to recover it. 
00:30:46.333 16:39:13 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:30:46.333 16:39:13 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:30:46.333 16:39:13 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable
00:30:46.333 16:39:13 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:30:46.335 Malloc0
00:30:46.335 16:39:13 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:30:46.335 [2024-06-07 16:39:13.039212] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:46.335 [2024-06-07 16:39:13.039219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:46.335 qpair failed and we were unable to recover it.
00:30:46.335 16:39:13 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o
00:30:46.335 [2024-06-07 16:39:13.039590] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:46.335 [2024-06-07 16:39:13.039597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:46.335 qpair failed and we were unable to recover it.
00:30:46.335 16:39:13 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable
00:30:46.335 [2024-06-07 16:39:13.039946] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:46.335 [2024-06-07 16:39:13.039953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:46.335 qpair failed and we were unable to recover it.
00:30:46.335 16:39:13 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:30:46.335 [2024-06-07 16:39:13.040142] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:46.335 [2024-06-07 16:39:13.040150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:46.335 qpair failed and we were unable to recover it.
00:30:46.335 [2024-06-07 16:39:13.040530] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:46.335 [2024-06-07 16:39:13.040537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:46.335 qpair failed and we were unable to recover it.
00:30:46.335 [2024-06-07 16:39:13.040906] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:46.335 [2024-06-07 16:39:13.040912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:46.335 qpair failed and we were unable to recover it.
00:30:46.335 [2024-06-07 16:39:13.041262] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:46.335 [2024-06-07 16:39:13.041269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:46.335 qpair failed and we were unable to recover it.
00:30:46.335 [2024-06-07 16:39:13.041484] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:46.335 [2024-06-07 16:39:13.041490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:46.335 qpair failed and we were unable to recover it.
00:30:46.335 [2024-06-07 16:39:13.041886] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:46.335 [2024-06-07 16:39:13.041893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:46.335 qpair failed and we were unable to recover it.
00:30:46.335 [2024-06-07 16:39:13.042256] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:46.335 [2024-06-07 16:39:13.042263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:46.335 qpair failed and we were unable to recover it.
00:30:46.335 [2024-06-07 16:39:13.042541] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:46.335 [2024-06-07 16:39:13.042548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:46.335 qpair failed and we were unable to recover it.
00:30:46.335 [2024-06-07 16:39:13.042939] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:46.335 [2024-06-07 16:39:13.042946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:46.335 qpair failed and we were unable to recover it.
00:30:46.335 [2024-06-07 16:39:13.043342] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:46.335 [2024-06-07 16:39:13.043349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:46.335 qpair failed and we were unable to recover it.
00:30:46.335 [2024-06-07 16:39:13.043734] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:46.335 [2024-06-07 16:39:13.043741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:46.335 qpair failed and we were unable to recover it.
00:30:46.335 [2024-06-07 16:39:13.044079] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:46.335 [2024-06-07 16:39:13.044086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:46.335 qpair failed and we were unable to recover it.
00:30:46.335 [2024-06-07 16:39:13.044351] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:46.335 [2024-06-07 16:39:13.044358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:46.335 qpair failed and we were unable to recover it.
00:30:46.335 [2024-06-07 16:39:13.044715] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:46.335 [2024-06-07 16:39:13.044722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:46.335 qpair failed and we were unable to recover it.
00:30:46.335 [2024-06-07 16:39:13.044949] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:46.335 [2024-06-07 16:39:13.044956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:46.335 qpair failed and we were unable to recover it.
00:30:46.335 [2024-06-07 16:39:13.045306] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:46.335 [2024-06-07 16:39:13.045313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:46.335 qpair failed and we were unable to recover it.
00:30:46.335 [2024-06-07 16:39:13.045682] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:46.335 [2024-06-07 16:39:13.045689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:46.335 qpair failed and we were unable to recover it.
00:30:46.335 [2024-06-07 16:39:13.045854] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:30:46.335 [2024-06-07 16:39:13.045914] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:46.335 [2024-06-07 16:39:13.045922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:46.335 qpair failed and we were unable to recover it.
00:30:46.335 [2024-06-07 16:39:13.046281] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:46.335 [2024-06-07 16:39:13.046288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:46.335 qpair failed and we were unable to recover it.
00:30:46.335 [2024-06-07 16:39:13.046500] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:46.335 [2024-06-07 16:39:13.046507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:46.335 qpair failed and we were unable to recover it.
00:30:46.335 [2024-06-07 16:39:13.046741] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:46.335 [2024-06-07 16:39:13.046748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:46.335 qpair failed and we were unable to recover it.
00:30:46.335 [2024-06-07 16:39:13.047117] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:46.335 [2024-06-07 16:39:13.047124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:46.335 qpair failed and we were unable to recover it.
00:30:46.335 [2024-06-07 16:39:13.047493] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:46.335 [2024-06-07 16:39:13.047500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:46.335 qpair failed and we were unable to recover it.
00:30:46.335 [2024-06-07 16:39:13.047878] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:46.335 [2024-06-07 16:39:13.047885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:46.335 qpair failed and we were unable to recover it.
00:30:46.335 [2024-06-07 16:39:13.048238] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:46.335 [2024-06-07 16:39:13.048245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:46.335 qpair failed and we were unable to recover it.
00:30:46.335 [2024-06-07 16:39:13.048620] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:46.335 [2024-06-07 16:39:13.048627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:46.335 qpair failed and we were unable to recover it.
00:30:46.335 [2024-06-07 16:39:13.049019] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:46.335 [2024-06-07 16:39:13.049025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:46.335 qpair failed and we were unable to recover it.
00:30:46.335 [2024-06-07 16:39:13.049424] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:46.335 [2024-06-07 16:39:13.049432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:46.335 qpair failed and we were unable to recover it.
00:30:46.335 [2024-06-07 16:39:13.049655] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:46.335 [2024-06-07 16:39:13.049662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:46.335 qpair failed and we were unable to recover it.
00:30:46.335 [2024-06-07 16:39:13.050032] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:46.335 [2024-06-07 16:39:13.050040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:46.335 qpair failed and we were unable to recover it.
00:30:46.335 [2024-06-07 16:39:13.050407] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:46.335 [2024-06-07 16:39:13.050414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:46.335 qpair failed and we were unable to recover it.
00:30:46.335 [2024-06-07 16:39:13.050788] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:46.335 [2024-06-07 16:39:13.050796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:46.335 qpair failed and we were unable to recover it.
00:30:46.335 [2024-06-07 16:39:13.051163] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:46.335 [2024-06-07 16:39:13.051170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:46.335 qpair failed and we were unable to recover it.
00:30:46.335 [2024-06-07 16:39:13.051361] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:46.335 [2024-06-07 16:39:13.051369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:46.335 qpair failed and we were unable to recover it.
00:30:46.335 [2024-06-07 16:39:13.051819] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:46.335 [2024-06-07 16:39:13.051826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:46.335 qpair failed and we were unable to recover it.
00:30:46.335 [2024-06-07 16:39:13.052195] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:46.335 [2024-06-07 16:39:13.052202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:46.335 qpair failed and we were unable to recover it.
00:30:46.336 [2024-06-07 16:39:13.052484] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:46.336 [2024-06-07 16:39:13.052492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:46.336 qpair failed and we were unable to recover it.
00:30:46.336 [2024-06-07 16:39:13.052749] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:46.336 [2024-06-07 16:39:13.052755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:46.336 qpair failed and we were unable to recover it.
00:30:46.336 [2024-06-07 16:39:13.052978] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:46.336 [2024-06-07 16:39:13.052985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:46.336 qpair failed and we were unable to recover it.
00:30:46.336 [2024-06-07 16:39:13.053233] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:46.336 [2024-06-07 16:39:13.053240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:46.336 qpair failed and we were unable to recover it.
00:30:46.336 [2024-06-07 16:39:13.053622] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:46.336 [2024-06-07 16:39:13.053629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:46.336 qpair failed and we were unable to recover it.
00:30:46.336 [2024-06-07 16:39:13.053977] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:46.336 [2024-06-07 16:39:13.053984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:46.336 qpair failed and we were unable to recover it.
00:30:46.336 [2024-06-07 16:39:13.054355] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:46.336 [2024-06-07 16:39:13.054363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:46.336 qpair failed and we were unable to recover it.
00:30:46.336 [2024-06-07 16:39:13.054746] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:46.336 [2024-06-07 16:39:13.054754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:46.336 qpair failed and we were unable to recover it.
00:30:46.336 16:39:13 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:30:46.336 [2024-06-07 16:39:13.054960] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:46.336 [2024-06-07 16:39:13.054967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:46.336 qpair failed and we were unable to recover it.
00:30:46.336 [2024-06-07 16:39:13.055174] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:46.336 [2024-06-07 16:39:13.055182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:46.336 qpair failed and we were unable to recover it.
00:30:46.336 16:39:13 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:30:46.336 [2024-06-07 16:39:13.055316] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:46.336 [2024-06-07 16:39:13.055323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:46.336 qpair failed and we were unable to recover it.
00:30:46.336 16:39:13 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable
00:30:46.336 [2024-06-07 16:39:13.055678] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:46.336 [2024-06-07 16:39:13.055685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:46.336 qpair failed and we were unable to recover it.
00:30:46.336 16:39:13 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:30:46.336 [2024-06-07 16:39:13.056052] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:46.336 [2024-06-07 16:39:13.056059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:46.336 qpair failed and we were unable to recover it.
00:30:46.336 [2024-06-07 16:39:13.056311] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:46.336 [2024-06-07 16:39:13.056318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:46.336 qpair failed and we were unable to recover it.
00:30:46.336 [2024-06-07 16:39:13.056691] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:46.336 [2024-06-07 16:39:13.056698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:46.336 qpair failed and we were unable to recover it.
00:30:46.336 [2024-06-07 16:39:13.057052] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:46.336 [2024-06-07 16:39:13.057059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:46.336 qpair failed and we were unable to recover it.
00:30:46.336 [2024-06-07 16:39:13.057425] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:46.336 [2024-06-07 16:39:13.057433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:46.336 qpair failed and we were unable to recover it.
00:30:46.336 [2024-06-07 16:39:13.057735] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:46.336 [2024-06-07 16:39:13.057741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:46.336 qpair failed and we were unable to recover it.
00:30:46.336 [2024-06-07 16:39:13.058153] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:46.336 [2024-06-07 16:39:13.058160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:46.336 qpair failed and we were unable to recover it.
00:30:46.336 [2024-06-07 16:39:13.058529] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:46.336 [2024-06-07 16:39:13.058536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:46.336 qpair failed and we were unable to recover it.
00:30:46.336 [2024-06-07 16:39:13.058929] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:46.336 [2024-06-07 16:39:13.058936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:46.336 qpair failed and we were unable to recover it.
00:30:46.336 [2024-06-07 16:39:13.059328] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:46.336 [2024-06-07 16:39:13.059335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:46.336 qpair failed and we were unable to recover it.
00:30:46.336 [2024-06-07 16:39:13.059702] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:46.336 [2024-06-07 16:39:13.059709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:46.336 qpair failed and we were unable to recover it.
00:30:46.336 [2024-06-07 16:39:13.060055] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:46.336 [2024-06-07 16:39:13.060062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:46.336 qpair failed and we were unable to recover it.
00:30:46.336 [2024-06-07 16:39:13.060420] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:46.336 [2024-06-07 16:39:13.060428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:46.336 qpair failed and we were unable to recover it.
00:30:46.336 [2024-06-07 16:39:13.060542] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:46.336 [2024-06-07 16:39:13.060550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:46.336 qpair failed and we were unable to recover it.
00:30:46.336 [2024-06-07 16:39:13.060787] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:46.336 [2024-06-07 16:39:13.060793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:46.336 qpair failed and we were unable to recover it.
00:30:46.336 [2024-06-07 16:39:13.061079] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:46.336 [2024-06-07 16:39:13.061086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:46.336 qpair failed and we were unable to recover it.
00:30:46.336 [2024-06-07 16:39:13.061176] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:46.336 [2024-06-07 16:39:13.061183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:46.336 qpair failed and we were unable to recover it.
00:30:46.336 [2024-06-07 16:39:13.061538] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:46.336 [2024-06-07 16:39:13.061545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:46.336 qpair failed and we were unable to recover it.
00:30:46.336 [2024-06-07 16:39:13.061926] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:46.336 [2024-06-07 16:39:13.061934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:46.336 qpair failed and we were unable to recover it.
00:30:46.336 [2024-06-07 16:39:13.062305] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:46.336 [2024-06-07 16:39:13.062312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:46.336 qpair failed and we were unable to recover it.
00:30:46.336 [2024-06-07 16:39:13.062677] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:46.336 [2024-06-07 16:39:13.062685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:46.336 qpair failed and we were unable to recover it.
00:30:46.336 [2024-06-07 16:39:13.062957] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:46.336 [2024-06-07 16:39:13.062964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:46.336 qpair failed and we were unable to recover it.
00:30:46.336 [2024-06-07 16:39:13.063198] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:46.336 [2024-06-07 16:39:13.063205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:46.336 qpair failed and we were unable to recover it.
00:30:46.336 [2024-06-07 16:39:13.063578] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:46.336 [2024-06-07 16:39:13.063585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:46.336 qpair failed and we were unable to recover it.
00:30:46.336 [2024-06-07 16:39:13.063811] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:46.336 [2024-06-07 16:39:13.063819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:46.336 qpair failed and we were unable to recover it.
00:30:46.336 [2024-06-07 16:39:13.064235] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:46.336 [2024-06-07 16:39:13.064243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:46.336 qpair failed and we were unable to recover it.
00:30:46.336 [2024-06-07 16:39:13.064615] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:46.336 [2024-06-07 16:39:13.064623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:46.336 qpair failed and we were unable to recover it.
00:30:46.336 [2024-06-07 16:39:13.064874] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:46.336 [2024-06-07 16:39:13.064880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:46.336 qpair failed and we were unable to recover it.
00:30:46.336 [2024-06-07 16:39:13.065273] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:46.336 [2024-06-07 16:39:13.065281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:46.336 qpair failed and we were unable to recover it.
00:30:46.336 [2024-06-07 16:39:13.065643] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:46.336 [2024-06-07 16:39:13.065650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:46.336 qpair failed and we were unable to recover it.
00:30:46.336 [2024-06-07 16:39:13.065998] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:46.336 [2024-06-07 16:39:13.066006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:46.336 qpair failed and we were unable to recover it.
00:30:46.337 [2024-06-07 16:39:13.066230] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:46.337 [2024-06-07 16:39:13.066237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:46.337 qpair failed and we were unable to recover it.
00:30:46.337 [2024-06-07 16:39:13.066579] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:46.337 [2024-06-07 16:39:13.066586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:46.337 qpair failed and we were unable to recover it.
00:30:46.337 16:39:13 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:30:46.337 [2024-06-07 16:39:13.066963] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:46.337 [2024-06-07 16:39:13.066971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:46.337 qpair failed and we were unable to recover it.
00:30:46.337 16:39:13 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:30:46.337 [2024-06-07 16:39:13.067342] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:46.337 [2024-06-07 16:39:13.067349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:46.337 qpair failed and we were unable to recover it.
00:30:46.337 16:39:13 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable
00:30:46.337 [2024-06-07 16:39:13.067733] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:46.337 [2024-06-07 16:39:13.067740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:46.337 qpair failed and we were unable to recover it.
00:30:46.337 16:39:13 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:30:46.337 [2024-06-07 16:39:13.068012] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:46.337 [2024-06-07 16:39:13.068018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:46.337 qpair failed and we were unable to recover it.
00:30:46.337 [2024-06-07 16:39:13.068434] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:46.337 [2024-06-07 16:39:13.068441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:46.337 qpair failed and we were unable to recover it.
00:30:46.337 [2024-06-07 16:39:13.068697] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:46.337 [2024-06-07 16:39:13.068704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:46.337 qpair failed and we were unable to recover it.
00:30:46.337 [2024-06-07 16:39:13.069166] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:46.337 [2024-06-07 16:39:13.069173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:46.337 qpair failed and we were unable to recover it.
00:30:46.337 [2024-06-07 16:39:13.069481] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:46.337 [2024-06-07 16:39:13.069487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420
00:30:46.337 qpair failed and we were unable to recover it.
00:30:46.337 [2024-06-07 16:39:13.069836] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.337 [2024-06-07 16:39:13.069843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.337 qpair failed and we were unable to recover it. 00:30:46.337 [2024-06-07 16:39:13.070120] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.337 [2024-06-07 16:39:13.070127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.337 qpair failed and we were unable to recover it. 00:30:46.337 [2024-06-07 16:39:13.070359] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.337 [2024-06-07 16:39:13.070366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.337 qpair failed and we were unable to recover it. 00:30:46.337 [2024-06-07 16:39:13.070746] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.337 [2024-06-07 16:39:13.070753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.337 qpair failed and we were unable to recover it. 00:30:46.337 [2024-06-07 16:39:13.071155] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.337 [2024-06-07 16:39:13.071162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.337 qpair failed and we were unable to recover it. 
00:30:46.337 [2024-06-07 16:39:13.071538] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.337 [2024-06-07 16:39:13.071544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.337 qpair failed and we were unable to recover it. 00:30:46.337 [2024-06-07 16:39:13.071949] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.337 [2024-06-07 16:39:13.071956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.337 qpair failed and we were unable to recover it. 00:30:46.337 [2024-06-07 16:39:13.072322] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.337 [2024-06-07 16:39:13.072329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.337 qpair failed and we were unable to recover it. 00:30:46.337 [2024-06-07 16:39:13.072710] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.337 [2024-06-07 16:39:13.072718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.337 qpair failed and we were unable to recover it. 00:30:46.337 [2024-06-07 16:39:13.072875] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.337 [2024-06-07 16:39:13.072882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.337 qpair failed and we were unable to recover it. 
00:30:46.337 [2024-06-07 16:39:13.073141] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.337 [2024-06-07 16:39:13.073150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.337 qpair failed and we were unable to recover it. 00:30:46.337 [2024-06-07 16:39:13.073397] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.337 [2024-06-07 16:39:13.073417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.337 qpair failed and we were unable to recover it. 00:30:46.337 [2024-06-07 16:39:13.073616] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.337 [2024-06-07 16:39:13.073623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.337 qpair failed and we were unable to recover it. 00:30:46.337 [2024-06-07 16:39:13.074026] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.337 [2024-06-07 16:39:13.074035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.337 qpair failed and we were unable to recover it. 00:30:46.337 [2024-06-07 16:39:13.074450] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.337 [2024-06-07 16:39:13.074457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.337 qpair failed and we were unable to recover it. 
00:30:46.337 [2024-06-07 16:39:13.074644] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.337 [2024-06-07 16:39:13.074651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.337 qpair failed and we were unable to recover it. 00:30:46.337 [2024-06-07 16:39:13.074989] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.337 [2024-06-07 16:39:13.074996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.337 qpair failed and we were unable to recover it. 00:30:46.337 [2024-06-07 16:39:13.075364] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.337 [2024-06-07 16:39:13.075371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.337 qpair failed and we were unable to recover it. 00:30:46.337 [2024-06-07 16:39:13.075739] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.337 [2024-06-07 16:39:13.075745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.337 qpair failed and we were unable to recover it. 00:30:46.337 [2024-06-07 16:39:13.076120] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.337 [2024-06-07 16:39:13.076126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.337 qpair failed and we were unable to recover it. 
00:30:46.337 [2024-06-07 16:39:13.076339] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.337 [2024-06-07 16:39:13.076346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.337 qpair failed and we were unable to recover it. 00:30:46.337 [2024-06-07 16:39:13.076729] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.337 [2024-06-07 16:39:13.076736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.337 qpair failed and we were unable to recover it. 00:30:46.337 [2024-06-07 16:39:13.076929] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.337 [2024-06-07 16:39:13.076936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.337 qpair failed and we were unable to recover it. 00:30:46.337 [2024-06-07 16:39:13.077328] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.337 [2024-06-07 16:39:13.077335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.337 qpair failed and we were unable to recover it. 00:30:46.337 [2024-06-07 16:39:13.077664] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.337 [2024-06-07 16:39:13.077671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.337 qpair failed and we were unable to recover it. 
00:30:46.337 [2024-06-07 16:39:13.077872] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.337 [2024-06-07 16:39:13.077880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.337 qpair failed and we were unable to recover it. 00:30:46.337 [2024-06-07 16:39:13.078263] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.337 [2024-06-07 16:39:13.078270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.337 qpair failed and we were unable to recover it. 00:30:46.337 [2024-06-07 16:39:13.078651] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.337 [2024-06-07 16:39:13.078659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.337 qpair failed and we were unable to recover it. 00:30:46.337 [2024-06-07 16:39:13.078960] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.337 [2024-06-07 16:39:13.078968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.337 qpair failed and we were unable to recover it. 00:30:46.337 16:39:13 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:46.337 [2024-06-07 16:39:13.079218] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.337 [2024-06-07 16:39:13.079226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.337 qpair failed and we were unable to recover it. 
00:30:46.337 16:39:13 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:46.337 16:39:13 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:46.337 [2024-06-07 16:39:13.079683] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.337 [2024-06-07 16:39:13.079711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.337 qpair failed and we were unable to recover it. 00:30:46.337 16:39:13 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:46.337 [2024-06-07 16:39:13.080152] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.337 [2024-06-07 16:39:13.080161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.337 qpair failed and we were unable to recover it. 00:30:46.337 [2024-06-07 16:39:13.080623] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.337 [2024-06-07 16:39:13.080658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.337 qpair failed and we were unable to recover it. 00:30:46.337 [2024-06-07 16:39:13.081082] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.337 [2024-06-07 16:39:13.081091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.337 qpair failed and we were unable to recover it. 
00:30:46.337 [2024-06-07 16:39:13.081637] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.337 [2024-06-07 16:39:13.081664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.337 qpair failed and we were unable to recover it. 00:30:46.337 [2024-06-07 16:39:13.082105] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.337 [2024-06-07 16:39:13.082118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.337 qpair failed and we were unable to recover it. 00:30:46.337 [2024-06-07 16:39:13.082492] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.337 [2024-06-07 16:39:13.082500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.337 qpair failed and we were unable to recover it. 00:30:46.337 [2024-06-07 16:39:13.082727] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.338 [2024-06-07 16:39:13.082734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.338 qpair failed and we were unable to recover it. 00:30:46.338 [2024-06-07 16:39:13.083097] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.338 [2024-06-07 16:39:13.083104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.338 qpair failed and we were unable to recover it. 
00:30:46.338 [2024-06-07 16:39:13.083496] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.338 [2024-06-07 16:39:13.083503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.338 qpair failed and we were unable to recover it. 00:30:46.338 [2024-06-07 16:39:13.083919] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.338 [2024-06-07 16:39:13.083925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.338 qpair failed and we were unable to recover it. 00:30:46.338 [2024-06-07 16:39:13.084330] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.338 [2024-06-07 16:39:13.084336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.338 qpair failed and we were unable to recover it. 00:30:46.338 [2024-06-07 16:39:13.084758] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.338 [2024-06-07 16:39:13.084765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.338 qpair failed and we were unable to recover it. 00:30:46.338 [2024-06-07 16:39:13.085185] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.338 [2024-06-07 16:39:13.085192] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.338 qpair failed and we were unable to recover it. 
00:30:46.338 [2024-06-07 16:39:13.085491] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.338 [2024-06-07 16:39:13.085497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.338 qpair failed and we were unable to recover it. 00:30:46.338 [2024-06-07 16:39:13.085906] posix.c:1046:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.338 [2024-06-07 16:39:13.085913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f61d8000b90 with addr=10.0.0.2, port=4420 00:30:46.338 qpair failed and we were unable to recover it. 00:30:46.338 [2024-06-07 16:39:13.086120] tcp.c: 982:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:46.338 16:39:13 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:46.338 16:39:13 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:46.338 16:39:13 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:46.338 16:39:13 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:46.338 [2024-06-07 16:39:13.096782] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:46.338 [2024-06-07 16:39:13.096865] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:46.338 [2024-06-07 16:39:13.096880] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:46.338 [2024-06-07 16:39:13.096885] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:46.338 [2024-06-07 16:39:13.096890] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:46.338 [2024-06-07 16:39:13.096904] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:46.338 qpair failed and we were unable to recover it. 00:30:46.338 16:39:13 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:46.338 16:39:13 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 3300385 00:30:46.338 [2024-06-07 16:39:13.106672] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:46.338 [2024-06-07 16:39:13.106737] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:46.338 [2024-06-07 16:39:13.106749] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:46.338 [2024-06-07 16:39:13.106755] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:46.338 [2024-06-07 16:39:13.106759] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:46.338 [2024-06-07 16:39:13.106769] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:46.338 qpair failed and we were unable to recover it.
00:30:46.338 [2024-06-07 16:39:13.116663] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:46.338 [2024-06-07 16:39:13.116728] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:46.338 [2024-06-07 16:39:13.116740] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:46.338 [2024-06-07 16:39:13.116745] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:46.338 [2024-06-07 16:39:13.116750] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:46.338 [2024-06-07 16:39:13.116760] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:46.338 qpair failed and we were unable to recover it. 
00:30:46.338 [2024-06-07 16:39:13.126643] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:46.338 [2024-06-07 16:39:13.126708] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:46.338 [2024-06-07 16:39:13.126720] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:46.338 [2024-06-07 16:39:13.126725] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:46.338 [2024-06-07 16:39:13.126729] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:46.338 [2024-06-07 16:39:13.126739] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:46.338 qpair failed and we were unable to recover it. 
00:30:46.338 [2024-06-07 16:39:13.136731] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:46.338 [2024-06-07 16:39:13.136801] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:46.338 [2024-06-07 16:39:13.136815] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:46.338 [2024-06-07 16:39:13.136820] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:46.338 [2024-06-07 16:39:13.136824] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:46.338 [2024-06-07 16:39:13.136835] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:46.338 qpair failed and we were unable to recover it. 
00:30:46.338 [2024-06-07 16:39:13.146704] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:46.338 [2024-06-07 16:39:13.146767] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:46.338 [2024-06-07 16:39:13.146780] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:46.338 [2024-06-07 16:39:13.146785] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:46.338 [2024-06-07 16:39:13.146789] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:46.338 [2024-06-07 16:39:13.146799] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:46.338 qpair failed and we were unable to recover it. 
00:30:46.338 [2024-06-07 16:39:13.156624] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:46.338 [2024-06-07 16:39:13.156690] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:46.338 [2024-06-07 16:39:13.156703] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:46.338 [2024-06-07 16:39:13.156708] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:46.338 [2024-06-07 16:39:13.156712] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:46.338 [2024-06-07 16:39:13.156724] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:46.338 qpair failed and we were unable to recover it. 
00:30:46.338 [2024-06-07 16:39:13.166745] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:46.338 [2024-06-07 16:39:13.166808] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:46.338 [2024-06-07 16:39:13.166821] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:46.338 [2024-06-07 16:39:13.166826] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:46.338 [2024-06-07 16:39:13.166830] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:46.338 [2024-06-07 16:39:13.166841] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:46.338 qpair failed and we were unable to recover it. 
00:30:46.599 [2024-06-07 16:39:13.176743] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:46.599 [2024-06-07 16:39:13.176814] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:46.599 [2024-06-07 16:39:13.176827] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:46.599 [2024-06-07 16:39:13.176832] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:46.599 [2024-06-07 16:39:13.176836] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:46.599 [2024-06-07 16:39:13.176850] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:46.599 qpair failed and we were unable to recover it. 
00:30:46.599 [2024-06-07 16:39:13.186811] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:46.599 [2024-06-07 16:39:13.186881] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:46.599 [2024-06-07 16:39:13.186894] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:46.599 [2024-06-07 16:39:13.186899] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:46.599 [2024-06-07 16:39:13.186903] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:46.599 [2024-06-07 16:39:13.186914] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:46.599 qpair failed and we were unable to recover it. 
00:30:46.599 [2024-06-07 16:39:13.196807] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:46.599 [2024-06-07 16:39:13.196865] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:46.599 [2024-06-07 16:39:13.196877] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:46.599 [2024-06-07 16:39:13.196882] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:46.599 [2024-06-07 16:39:13.196886] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:46.599 [2024-06-07 16:39:13.196897] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:46.599 qpair failed and we were unable to recover it. 
00:30:46.599 [2024-06-07 16:39:13.206855] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:46.599 [2024-06-07 16:39:13.206919] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:46.599 [2024-06-07 16:39:13.206931] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:46.599 [2024-06-07 16:39:13.206936] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:46.599 [2024-06-07 16:39:13.206940] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:46.599 [2024-06-07 16:39:13.206951] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:46.599 qpair failed and we were unable to recover it. 
00:30:46.599 [2024-06-07 16:39:13.216910] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:46.599 [2024-06-07 16:39:13.216983] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:46.599 [2024-06-07 16:39:13.216996] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:46.599 [2024-06-07 16:39:13.217001] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:46.599 [2024-06-07 16:39:13.217005] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:46.599 [2024-06-07 16:39:13.217016] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:46.599 qpair failed and we were unable to recover it. 
00:30:46.599 [2024-06-07 16:39:13.226906] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:46.599 [2024-06-07 16:39:13.226966] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:46.599 [2024-06-07 16:39:13.226981] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:46.599 [2024-06-07 16:39:13.226986] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:46.599 [2024-06-07 16:39:13.226991] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90
00:30:46.599 [2024-06-07 16:39:13.227002] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:46.599 qpair failed and we were unable to recover it.
00:30:46.599 [2024-06-07 16:39:13.236810] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:46.599 [2024-06-07 16:39:13.236874] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:46.599 [2024-06-07 16:39:13.236886] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:46.599 [2024-06-07 16:39:13.236891] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:46.599 [2024-06-07 16:39:13.236895] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90
00:30:46.599 [2024-06-07 16:39:13.236906] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:46.599 qpair failed and we were unable to recover it.
00:30:46.599 [2024-06-07 16:39:13.246921] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:46.599 [2024-06-07 16:39:13.247060] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:46.599 [2024-06-07 16:39:13.247072] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:46.599 [2024-06-07 16:39:13.247077] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:46.599 [2024-06-07 16:39:13.247082] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90
00:30:46.599 [2024-06-07 16:39:13.247093] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:46.599 qpair failed and we were unable to recover it.
00:30:46.599 [2024-06-07 16:39:13.257035] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:46.599 [2024-06-07 16:39:13.257108] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:46.600 [2024-06-07 16:39:13.257127] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:46.600 [2024-06-07 16:39:13.257133] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:46.600 [2024-06-07 16:39:13.257138] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90
00:30:46.600 [2024-06-07 16:39:13.257152] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:46.600 qpair failed and we were unable to recover it.
00:30:46.600 [2024-06-07 16:39:13.267040] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:46.600 [2024-06-07 16:39:13.267148] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:46.600 [2024-06-07 16:39:13.267167] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:46.600 [2024-06-07 16:39:13.267173] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:46.600 [2024-06-07 16:39:13.267181] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90
00:30:46.600 [2024-06-07 16:39:13.267195] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:46.600 qpair failed and we were unable to recover it.
00:30:46.600 [2024-06-07 16:39:13.277034] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:46.600 [2024-06-07 16:39:13.277105] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:46.600 [2024-06-07 16:39:13.277124] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:46.600 [2024-06-07 16:39:13.277130] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:46.600 [2024-06-07 16:39:13.277135] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90
00:30:46.600 [2024-06-07 16:39:13.277149] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:46.600 qpair failed and we were unable to recover it.
00:30:46.600 [2024-06-07 16:39:13.287049] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:46.600 [2024-06-07 16:39:13.287111] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:46.600 [2024-06-07 16:39:13.287125] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:46.600 [2024-06-07 16:39:13.287130] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:46.600 [2024-06-07 16:39:13.287134] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90
00:30:46.600 [2024-06-07 16:39:13.287146] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:46.600 qpair failed and we were unable to recover it.
00:30:46.600 [2024-06-07 16:39:13.297080] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:46.600 [2024-06-07 16:39:13.297149] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:46.600 [2024-06-07 16:39:13.297168] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:46.600 [2024-06-07 16:39:13.297174] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:46.600 [2024-06-07 16:39:13.297179] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90
00:30:46.600 [2024-06-07 16:39:13.297193] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:46.600 qpair failed and we were unable to recover it.
00:30:46.600 [2024-06-07 16:39:13.307126] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:46.600 [2024-06-07 16:39:13.307236] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:46.600 [2024-06-07 16:39:13.307255] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:46.600 [2024-06-07 16:39:13.307261] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:46.600 [2024-06-07 16:39:13.307266] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90
00:30:46.600 [2024-06-07 16:39:13.307281] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:46.600 qpair failed and we were unable to recover it.
00:30:46.600 [2024-06-07 16:39:13.317158] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:46.600 [2024-06-07 16:39:13.317225] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:46.600 [2024-06-07 16:39:13.317238] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:46.600 [2024-06-07 16:39:13.317244] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:46.600 [2024-06-07 16:39:13.317248] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90
00:30:46.600 [2024-06-07 16:39:13.317260] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:46.600 qpair failed and we were unable to recover it.
00:30:46.600 [2024-06-07 16:39:13.327143] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:46.600 [2024-06-07 16:39:13.327241] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:46.600 [2024-06-07 16:39:13.327254] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:46.600 [2024-06-07 16:39:13.327260] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:46.600 [2024-06-07 16:39:13.327264] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90
00:30:46.600 [2024-06-07 16:39:13.327275] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:46.600 qpair failed and we were unable to recover it.
00:30:46.600 [2024-06-07 16:39:13.337201] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:46.600 [2024-06-07 16:39:13.337265] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:46.600 [2024-06-07 16:39:13.337277] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:46.600 [2024-06-07 16:39:13.337282] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:46.600 [2024-06-07 16:39:13.337287] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90
00:30:46.600 [2024-06-07 16:39:13.337298] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:46.600 qpair failed and we were unable to recover it.
00:30:46.600 [2024-06-07 16:39:13.347287] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:46.600 [2024-06-07 16:39:13.347346] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:46.600 [2024-06-07 16:39:13.347358] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:46.600 [2024-06-07 16:39:13.347363] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:46.600 [2024-06-07 16:39:13.347368] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90
00:30:46.600 [2024-06-07 16:39:13.347379] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:46.600 qpair failed and we were unable to recover it.
00:30:46.600 [2024-06-07 16:39:13.357251] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:46.600 [2024-06-07 16:39:13.357310] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:46.600 [2024-06-07 16:39:13.357321] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:46.600 [2024-06-07 16:39:13.357326] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:46.600 [2024-06-07 16:39:13.357334] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90
00:30:46.600 [2024-06-07 16:39:13.357345] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:46.600 qpair failed and we were unable to recover it.
00:30:46.600 [2024-06-07 16:39:13.367428] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:46.600 [2024-06-07 16:39:13.367500] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:46.600 [2024-06-07 16:39:13.367512] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:46.600 [2024-06-07 16:39:13.367517] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:46.600 [2024-06-07 16:39:13.367522] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90
00:30:46.600 [2024-06-07 16:39:13.367532] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:46.600 qpair failed and we were unable to recover it.
00:30:46.600 [2024-06-07 16:39:13.377368] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:46.600 [2024-06-07 16:39:13.377442] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:46.600 [2024-06-07 16:39:13.377455] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:46.600 [2024-06-07 16:39:13.377460] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:46.600 [2024-06-07 16:39:13.377464] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90
00:30:46.600 [2024-06-07 16:39:13.377474] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:46.600 qpair failed and we were unable to recover it.
00:30:46.600 [2024-06-07 16:39:13.387408] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:46.600 [2024-06-07 16:39:13.387469] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:46.600 [2024-06-07 16:39:13.387481] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:46.600 [2024-06-07 16:39:13.387486] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:46.600 [2024-06-07 16:39:13.387491] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90
00:30:46.600 [2024-06-07 16:39:13.387501] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:46.600 qpair failed and we were unable to recover it.
00:30:46.600 [2024-06-07 16:39:13.397440] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:46.600 [2024-06-07 16:39:13.397504] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:46.600 [2024-06-07 16:39:13.397516] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:46.600 [2024-06-07 16:39:13.397521] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:46.600 [2024-06-07 16:39:13.397525] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90
00:30:46.600 [2024-06-07 16:39:13.397536] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:46.600 qpair failed and we were unable to recover it.
00:30:46.600 [2024-06-07 16:39:13.407371] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:46.600 [2024-06-07 16:39:13.407437] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:46.600 [2024-06-07 16:39:13.407449] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:46.600 [2024-06-07 16:39:13.407455] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:46.600 [2024-06-07 16:39:13.407459] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90
00:30:46.600 [2024-06-07 16:39:13.407470] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:46.600 qpair failed and we were unable to recover it.
00:30:46.600 [2024-06-07 16:39:13.417365] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:46.600 [2024-06-07 16:39:13.417439] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:46.600 [2024-06-07 16:39:13.417451] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:46.600 [2024-06-07 16:39:13.417457] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:46.600 [2024-06-07 16:39:13.417461] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90
00:30:46.600 [2024-06-07 16:39:13.417473] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:46.600 qpair failed and we were unable to recover it.
00:30:46.600 [2024-06-07 16:39:13.427319] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:46.600 [2024-06-07 16:39:13.427381] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:46.600 [2024-06-07 16:39:13.427393] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:46.600 [2024-06-07 16:39:13.427398] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:46.600 [2024-06-07 16:39:13.427406] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90
00:30:46.600 [2024-06-07 16:39:13.427417] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:46.600 qpair failed and we were unable to recover it.
00:30:46.600 [2024-06-07 16:39:13.437470] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:46.600 [2024-06-07 16:39:13.437531] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:46.600 [2024-06-07 16:39:13.437543] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:46.600 [2024-06-07 16:39:13.437549] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:46.600 [2024-06-07 16:39:13.437553] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90
00:30:46.600 [2024-06-07 16:39:13.437564] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:46.600 qpair failed and we were unable to recover it.
00:30:46.600 [2024-06-07 16:39:13.447389] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:46.600 [2024-06-07 16:39:13.447459] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:46.600 [2024-06-07 16:39:13.447473] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:46.600 [2024-06-07 16:39:13.447481] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:46.600 [2024-06-07 16:39:13.447485] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90
00:30:46.600 [2024-06-07 16:39:13.447496] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:46.600 qpair failed and we were unable to recover it.
00:30:46.862 [2024-06-07 16:39:13.457540] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:46.862 [2024-06-07 16:39:13.457606] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:46.862 [2024-06-07 16:39:13.457618] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:46.862 [2024-06-07 16:39:13.457624] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:46.862 [2024-06-07 16:39:13.457628] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90
00:30:46.862 [2024-06-07 16:39:13.457639] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:46.862 qpair failed and we were unable to recover it.
00:30:46.862 [2024-06-07 16:39:13.467578] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:46.862 [2024-06-07 16:39:13.467639] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:46.862 [2024-06-07 16:39:13.467651] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:46.862 [2024-06-07 16:39:13.467656] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:46.862 [2024-06-07 16:39:13.467661] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90
00:30:46.862 [2024-06-07 16:39:13.467671] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:46.862 qpair failed and we were unable to recover it.
00:30:46.862 [2024-06-07 16:39:13.477547] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:46.862 [2024-06-07 16:39:13.477608] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:46.862 [2024-06-07 16:39:13.477620] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:46.862 [2024-06-07 16:39:13.477625] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:46.862 [2024-06-07 16:39:13.477630] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90
00:30:46.862 [2024-06-07 16:39:13.477641] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:46.862 qpair failed and we were unable to recover it.
00:30:46.862 [2024-06-07 16:39:13.487618] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:46.862 [2024-06-07 16:39:13.487682] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:46.863 [2024-06-07 16:39:13.487694] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:46.863 [2024-06-07 16:39:13.487699] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:46.863 [2024-06-07 16:39:13.487704] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90
00:30:46.863 [2024-06-07 16:39:13.487714] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:46.863 qpair failed and we were unable to recover it.
00:30:46.863 [2024-06-07 16:39:13.497599] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:46.863 [2024-06-07 16:39:13.497665] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:46.863 [2024-06-07 16:39:13.497677] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:46.863 [2024-06-07 16:39:13.497683] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:46.863 [2024-06-07 16:39:13.497687] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90
00:30:46.863 [2024-06-07 16:39:13.497699] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:46.863 qpair failed and we were unable to recover it.
00:30:46.863 [2024-06-07 16:39:13.507681] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:46.863 [2024-06-07 16:39:13.507744] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:46.863 [2024-06-07 16:39:13.507756] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:46.863 [2024-06-07 16:39:13.507761] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:46.863 [2024-06-07 16:39:13.507766] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90
00:30:46.863 [2024-06-07 16:39:13.507777] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:46.863 qpair failed and we were unable to recover it.
00:30:46.863 [2024-06-07 16:39:13.517724] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:46.863 [2024-06-07 16:39:13.517859] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:46.863 [2024-06-07 16:39:13.517872] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:46.863 [2024-06-07 16:39:13.517877] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:46.863 [2024-06-07 16:39:13.517882] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90
00:30:46.863 [2024-06-07 16:39:13.517893] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:46.863 qpair failed and we were unable to recover it.
00:30:46.863 [2024-06-07 16:39:13.527721] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:46.863 [2024-06-07 16:39:13.527786] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:46.863 [2024-06-07 16:39:13.527798] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:46.863 [2024-06-07 16:39:13.527803] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:46.863 [2024-06-07 16:39:13.527808] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90
00:30:46.863 [2024-06-07 16:39:13.527818] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:46.863 qpair failed and we were unable to recover it.
00:30:46.863 [2024-06-07 16:39:13.537777] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:46.863 [2024-06-07 16:39:13.537846] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:46.863 [2024-06-07 16:39:13.537861] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:46.863 [2024-06-07 16:39:13.537867] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:46.863 [2024-06-07 16:39:13.537871] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90
00:30:46.863 [2024-06-07 16:39:13.537882] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:46.863 qpair failed and we were unable to recover it.
00:30:46.863 [2024-06-07 16:39:13.547790] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:46.863 [2024-06-07 16:39:13.547847] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:46.863 [2024-06-07 16:39:13.547859] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:46.863 [2024-06-07 16:39:13.547865] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:46.863 [2024-06-07 16:39:13.547869] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90
00:30:46.863 [2024-06-07 16:39:13.547880] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:46.863 qpair failed and we were unable to recover it.
00:30:46.863 [2024-06-07 16:39:13.557694] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:46.863 [2024-06-07 16:39:13.557756] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:46.863 [2024-06-07 16:39:13.557768] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:46.863 [2024-06-07 16:39:13.557773] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:46.863 [2024-06-07 16:39:13.557778] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90
00:30:46.863 [2024-06-07 16:39:13.557789] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:46.863 qpair failed and we were unable to recover it.
00:30:46.863 [2024-06-07 16:39:13.567847] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:46.863 [2024-06-07 16:39:13.567909] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:46.863 [2024-06-07 16:39:13.567922] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:46.863 [2024-06-07 16:39:13.567927] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:46.863 [2024-06-07 16:39:13.567931] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90
00:30:46.863 [2024-06-07 16:39:13.567942] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:46.863 qpair failed and we were unable to recover it.
00:30:46.863 [2024-06-07 16:39:13.577826] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:46.863 [2024-06-07 16:39:13.577897] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:46.863 [2024-06-07 16:39:13.577909] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:46.863 [2024-06-07 16:39:13.577914] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:46.863 [2024-06-07 16:39:13.577918] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90
00:30:46.863 [2024-06-07 16:39:13.577935] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:46.863 qpair failed and we were unable to recover it.
00:30:46.863 [2024-06-07 16:39:13.587891] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:46.863 [2024-06-07 16:39:13.587951] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:46.863 [2024-06-07 16:39:13.587964] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:46.863 [2024-06-07 16:39:13.587969] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:46.863 [2024-06-07 16:39:13.587974] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:46.863 [2024-06-07 16:39:13.587984] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:46.863 qpair failed and we were unable to recover it. 
00:30:46.864 [2024-06-07 16:39:13.597926] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:46.864 [2024-06-07 16:39:13.597986] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:46.864 [2024-06-07 16:39:13.597998] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:46.864 [2024-06-07 16:39:13.598003] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:46.864 [2024-06-07 16:39:13.598007] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:46.864 [2024-06-07 16:39:13.598018] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:46.864 qpair failed and we were unable to recover it. 
00:30:46.864 [2024-06-07 16:39:13.607974] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:46.864 [2024-06-07 16:39:13.608036] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:46.864 [2024-06-07 16:39:13.608047] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:46.864 [2024-06-07 16:39:13.608053] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:46.864 [2024-06-07 16:39:13.608057] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:46.864 [2024-06-07 16:39:13.608068] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:46.864 qpair failed and we were unable to recover it. 
00:30:46.864 [2024-06-07 16:39:13.617981] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:46.864 [2024-06-07 16:39:13.618047] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:46.864 [2024-06-07 16:39:13.618058] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:46.864 [2024-06-07 16:39:13.618064] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:46.864 [2024-06-07 16:39:13.618068] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:46.864 [2024-06-07 16:39:13.618079] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:46.864 qpair failed and we were unable to recover it. 
00:30:46.864 [2024-06-07 16:39:13.627988] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:46.864 [2024-06-07 16:39:13.628052] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:46.864 [2024-06-07 16:39:13.628074] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:46.864 [2024-06-07 16:39:13.628080] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:46.864 [2024-06-07 16:39:13.628085] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:46.864 [2024-06-07 16:39:13.628100] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:46.864 qpair failed and we were unable to recover it. 
00:30:46.864 [2024-06-07 16:39:13.638019] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:46.864 [2024-06-07 16:39:13.638076] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:46.864 [2024-06-07 16:39:13.638089] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:46.864 [2024-06-07 16:39:13.638095] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:46.864 [2024-06-07 16:39:13.638100] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:46.864 [2024-06-07 16:39:13.638111] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:46.864 qpair failed and we were unable to recover it. 
00:30:46.864 [2024-06-07 16:39:13.648047] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:46.864 [2024-06-07 16:39:13.648114] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:46.864 [2024-06-07 16:39:13.648133] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:46.864 [2024-06-07 16:39:13.648139] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:46.864 [2024-06-07 16:39:13.648144] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:46.864 [2024-06-07 16:39:13.648158] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:46.864 qpair failed and we were unable to recover it. 
00:30:46.864 [2024-06-07 16:39:13.658092] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:46.864 [2024-06-07 16:39:13.658168] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:46.864 [2024-06-07 16:39:13.658187] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:46.864 [2024-06-07 16:39:13.658194] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:46.864 [2024-06-07 16:39:13.658199] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:46.864 [2024-06-07 16:39:13.658213] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:46.864 qpair failed and we were unable to recover it. 
00:30:46.864 [2024-06-07 16:39:13.668103] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:46.864 [2024-06-07 16:39:13.668168] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:46.864 [2024-06-07 16:39:13.668187] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:46.864 [2024-06-07 16:39:13.668193] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:46.864 [2024-06-07 16:39:13.668198] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:46.864 [2024-06-07 16:39:13.668216] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:46.864 qpair failed and we were unable to recover it. 
00:30:46.864 [2024-06-07 16:39:13.678128] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:46.864 [2024-06-07 16:39:13.678190] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:46.864 [2024-06-07 16:39:13.678209] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:46.864 [2024-06-07 16:39:13.678215] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:46.864 [2024-06-07 16:39:13.678220] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:46.864 [2024-06-07 16:39:13.678233] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:46.864 qpair failed and we were unable to recover it. 
00:30:46.864 [2024-06-07 16:39:13.688165] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:46.864 [2024-06-07 16:39:13.688226] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:46.864 [2024-06-07 16:39:13.688240] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:46.864 [2024-06-07 16:39:13.688246] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:46.864 [2024-06-07 16:39:13.688250] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:46.864 [2024-06-07 16:39:13.688261] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:46.864 qpair failed and we were unable to recover it. 
00:30:46.864 [2024-06-07 16:39:13.698186] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:46.864 [2024-06-07 16:39:13.698253] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:46.864 [2024-06-07 16:39:13.698265] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:46.864 [2024-06-07 16:39:13.698270] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:46.864 [2024-06-07 16:39:13.698275] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:46.864 [2024-06-07 16:39:13.698285] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:46.864 qpair failed and we were unable to recover it. 
00:30:46.864 [2024-06-07 16:39:13.708236] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:46.864 [2024-06-07 16:39:13.708337] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:46.864 [2024-06-07 16:39:13.708349] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:46.864 [2024-06-07 16:39:13.708354] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:46.864 [2024-06-07 16:39:13.708359] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:46.864 [2024-06-07 16:39:13.708370] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:46.864 qpair failed and we were unable to recover it. 
00:30:47.127 [2024-06-07 16:39:13.718238] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:47.127 [2024-06-07 16:39:13.718298] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:47.127 [2024-06-07 16:39:13.718310] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:47.127 [2024-06-07 16:39:13.718315] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:47.127 [2024-06-07 16:39:13.718320] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:47.127 [2024-06-07 16:39:13.718331] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:47.127 qpair failed and we were unable to recover it. 
00:30:47.127 [2024-06-07 16:39:13.728290] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:47.127 [2024-06-07 16:39:13.728349] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:47.127 [2024-06-07 16:39:13.728361] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:47.127 [2024-06-07 16:39:13.728367] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:47.127 [2024-06-07 16:39:13.728372] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:47.127 [2024-06-07 16:39:13.728382] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:47.127 qpair failed and we were unable to recover it. 
00:30:47.127 [2024-06-07 16:39:13.738299] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:47.127 [2024-06-07 16:39:13.738363] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:47.127 [2024-06-07 16:39:13.738376] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:47.127 [2024-06-07 16:39:13.738381] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:47.127 [2024-06-07 16:39:13.738385] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:47.127 [2024-06-07 16:39:13.738396] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:47.127 qpair failed and we were unable to recover it. 
00:30:47.127 [2024-06-07 16:39:13.748318] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:47.127 [2024-06-07 16:39:13.748377] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:47.127 [2024-06-07 16:39:13.748389] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:47.127 [2024-06-07 16:39:13.748394] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:47.127 [2024-06-07 16:39:13.748398] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:47.127 [2024-06-07 16:39:13.748413] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:47.127 qpair failed and we were unable to recover it. 
00:30:47.127 [2024-06-07 16:39:13.758348] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:47.127 [2024-06-07 16:39:13.758408] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:47.127 [2024-06-07 16:39:13.758420] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:47.127 [2024-06-07 16:39:13.758426] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:47.127 [2024-06-07 16:39:13.758433] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:47.127 [2024-06-07 16:39:13.758444] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:47.127 qpair failed and we were unable to recover it. 
00:30:47.127 [2024-06-07 16:39:13.768377] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:47.127 [2024-06-07 16:39:13.768445] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:47.127 [2024-06-07 16:39:13.768457] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:47.127 [2024-06-07 16:39:13.768462] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:47.127 [2024-06-07 16:39:13.768467] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:47.128 [2024-06-07 16:39:13.768478] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:47.128 qpair failed and we were unable to recover it. 
00:30:47.128 [2024-06-07 16:39:13.778418] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:47.128 [2024-06-07 16:39:13.778486] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:47.128 [2024-06-07 16:39:13.778498] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:47.128 [2024-06-07 16:39:13.778504] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:47.128 [2024-06-07 16:39:13.778508] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:47.128 [2024-06-07 16:39:13.778519] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:47.128 qpair failed and we were unable to recover it. 
00:30:47.128 [2024-06-07 16:39:13.788329] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:47.128 [2024-06-07 16:39:13.788388] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:47.128 [2024-06-07 16:39:13.788405] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:47.128 [2024-06-07 16:39:13.788411] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:47.128 [2024-06-07 16:39:13.788416] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:47.128 [2024-06-07 16:39:13.788427] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:47.128 qpair failed and we were unable to recover it. 
00:30:47.128 [2024-06-07 16:39:13.798462] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:47.128 [2024-06-07 16:39:13.798522] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:47.128 [2024-06-07 16:39:13.798535] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:47.128 [2024-06-07 16:39:13.798540] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:47.128 [2024-06-07 16:39:13.798545] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:47.128 [2024-06-07 16:39:13.798556] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:47.128 qpair failed and we were unable to recover it. 
00:30:47.128 [2024-06-07 16:39:13.808450] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:47.128 [2024-06-07 16:39:13.808524] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:47.128 [2024-06-07 16:39:13.808536] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:47.128 [2024-06-07 16:39:13.808541] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:47.128 [2024-06-07 16:39:13.808546] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:47.128 [2024-06-07 16:39:13.808558] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:47.128 qpair failed and we were unable to recover it. 
00:30:47.128 [2024-06-07 16:39:13.818531] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:47.128 [2024-06-07 16:39:13.818634] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:47.128 [2024-06-07 16:39:13.818647] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:47.128 [2024-06-07 16:39:13.818652] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:47.128 [2024-06-07 16:39:13.818656] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:47.128 [2024-06-07 16:39:13.818667] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:47.128 qpair failed and we were unable to recover it. 
00:30:47.128 [2024-06-07 16:39:13.828552] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:47.128 [2024-06-07 16:39:13.828655] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:47.128 [2024-06-07 16:39:13.828667] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:47.128 [2024-06-07 16:39:13.828672] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:47.128 [2024-06-07 16:39:13.828677] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:47.128 [2024-06-07 16:39:13.828687] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:47.128 qpair failed and we were unable to recover it. 
00:30:47.128 [2024-06-07 16:39:13.838579] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:47.128 [2024-06-07 16:39:13.838640] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:47.128 [2024-06-07 16:39:13.838652] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:47.128 [2024-06-07 16:39:13.838658] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:47.128 [2024-06-07 16:39:13.838662] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:47.128 [2024-06-07 16:39:13.838673] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:47.128 qpair failed and we were unable to recover it. 
00:30:47.128 [2024-06-07 16:39:13.848624] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:47.128 [2024-06-07 16:39:13.848686] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:47.128 [2024-06-07 16:39:13.848698] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:47.128 [2024-06-07 16:39:13.848706] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:47.128 [2024-06-07 16:39:13.848710] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:47.128 [2024-06-07 16:39:13.848721] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:47.128 qpair failed and we were unable to recover it. 
00:30:47.128 [2024-06-07 16:39:13.858646] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:47.128 [2024-06-07 16:39:13.858714] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:47.128 [2024-06-07 16:39:13.858726] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:47.128 [2024-06-07 16:39:13.858731] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:47.128 [2024-06-07 16:39:13.858736] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:47.128 [2024-06-07 16:39:13.858747] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:47.128 qpair failed and we were unable to recover it. 
00:30:47.128 [2024-06-07 16:39:13.868644] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:47.128 [2024-06-07 16:39:13.868699] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:47.128 [2024-06-07 16:39:13.868711] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:47.128 [2024-06-07 16:39:13.868717] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:47.128 [2024-06-07 16:39:13.868721] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90
00:30:47.128 [2024-06-07 16:39:13.868731] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:47.128 qpair failed and we were unable to recover it.
00:30:47.128 [2024-06-07 16:39:13.878665] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:47.128 [2024-06-07 16:39:13.878721] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:47.128 [2024-06-07 16:39:13.878733] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:47.128 [2024-06-07 16:39:13.878738] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:47.128 [2024-06-07 16:39:13.878742] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90
00:30:47.128 [2024-06-07 16:39:13.878753] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:47.128 qpair failed and we were unable to recover it.
00:30:47.128 [2024-06-07 16:39:13.888611] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:47.128 [2024-06-07 16:39:13.888673] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:47.128 [2024-06-07 16:39:13.888685] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:47.128 [2024-06-07 16:39:13.888690] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:47.128 [2024-06-07 16:39:13.888695] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90
00:30:47.128 [2024-06-07 16:39:13.888706] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:47.128 qpair failed and we were unable to recover it.
00:30:47.128 [2024-06-07 16:39:13.898731] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:47.128 [2024-06-07 16:39:13.898794] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:47.128 [2024-06-07 16:39:13.898806] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:47.128 [2024-06-07 16:39:13.898811] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:47.128 [2024-06-07 16:39:13.898816] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90
00:30:47.128 [2024-06-07 16:39:13.898827] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:47.129 qpair failed and we were unable to recover it.
00:30:47.129 [2024-06-07 16:39:13.908826] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:47.129 [2024-06-07 16:39:13.908891] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:47.129 [2024-06-07 16:39:13.908903] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:47.129 [2024-06-07 16:39:13.908908] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:47.129 [2024-06-07 16:39:13.908912] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90
00:30:47.129 [2024-06-07 16:39:13.908922] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:47.129 qpair failed and we were unable to recover it.
00:30:47.129 [2024-06-07 16:39:13.918813] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:47.129 [2024-06-07 16:39:13.918873] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:47.129 [2024-06-07 16:39:13.918885] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:47.129 [2024-06-07 16:39:13.918890] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:47.129 [2024-06-07 16:39:13.918896] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90
00:30:47.129 [2024-06-07 16:39:13.918907] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:47.129 qpair failed and we were unable to recover it.
00:30:47.129 [2024-06-07 16:39:13.928814] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:47.129 [2024-06-07 16:39:13.928875] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:47.129 [2024-06-07 16:39:13.928887] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:47.129 [2024-06-07 16:39:13.928892] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:47.129 [2024-06-07 16:39:13.928896] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90
00:30:47.129 [2024-06-07 16:39:13.928907] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:47.129 qpair failed and we were unable to recover it.
00:30:47.129 [2024-06-07 16:39:13.938834] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:47.129 [2024-06-07 16:39:13.938900] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:47.129 [2024-06-07 16:39:13.938914] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:47.129 [2024-06-07 16:39:13.938920] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:47.129 [2024-06-07 16:39:13.938924] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90
00:30:47.129 [2024-06-07 16:39:13.938934] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:47.129 qpair failed and we were unable to recover it.
00:30:47.129 [2024-06-07 16:39:13.948775] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:47.129 [2024-06-07 16:39:13.948833] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:47.129 [2024-06-07 16:39:13.948845] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:47.129 [2024-06-07 16:39:13.948850] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:47.129 [2024-06-07 16:39:13.948855] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90
00:30:47.129 [2024-06-07 16:39:13.948865] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:47.129 qpair failed and we were unable to recover it.
00:30:47.129 [2024-06-07 16:39:13.958889] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:47.129 [2024-06-07 16:39:13.958949] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:47.129 [2024-06-07 16:39:13.958961] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:47.129 [2024-06-07 16:39:13.958966] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:47.129 [2024-06-07 16:39:13.958970] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90
00:30:47.129 [2024-06-07 16:39:13.958981] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:47.129 qpair failed and we were unable to recover it.
00:30:47.129 [2024-06-07 16:39:13.968914] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:47.129 [2024-06-07 16:39:13.968974] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:47.129 [2024-06-07 16:39:13.968986] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:47.129 [2024-06-07 16:39:13.968992] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:47.129 [2024-06-07 16:39:13.968996] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90
00:30:47.129 [2024-06-07 16:39:13.969007] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:47.129 qpair failed and we were unable to recover it.
00:30:47.129 [2024-06-07 16:39:13.978929] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:47.129 [2024-06-07 16:39:13.978994] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:47.129 [2024-06-07 16:39:13.979006] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:47.129 [2024-06-07 16:39:13.979011] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:47.129 [2024-06-07 16:39:13.979016] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90
00:30:47.129 [2024-06-07 16:39:13.979030] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:47.129 qpair failed and we were unable to recover it.
00:30:47.393 [2024-06-07 16:39:13.988973] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:47.393 [2024-06-07 16:39:13.989036] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:47.393 [2024-06-07 16:39:13.989048] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:47.393 [2024-06-07 16:39:13.989054] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:47.393 [2024-06-07 16:39:13.989058] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90
00:30:47.393 [2024-06-07 16:39:13.989069] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:47.393 qpair failed and we were unable to recover it.
00:30:47.393 [2024-06-07 16:39:13.999010] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:47.393 [2024-06-07 16:39:13.999067] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:47.393 [2024-06-07 16:39:13.999081] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:47.393 [2024-06-07 16:39:13.999086] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:47.393 [2024-06-07 16:39:13.999090] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90
00:30:47.393 [2024-06-07 16:39:13.999102] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:47.393 qpair failed and we were unable to recover it.
00:30:47.393 [2024-06-07 16:39:14.008921] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:47.393 [2024-06-07 16:39:14.008994] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:47.393 [2024-06-07 16:39:14.009007] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:47.393 [2024-06-07 16:39:14.009013] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:47.393 [2024-06-07 16:39:14.009017] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90
00:30:47.393 [2024-06-07 16:39:14.009028] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:47.393 qpair failed and we were unable to recover it.
00:30:47.393 [2024-06-07 16:39:14.019062] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:47.393 [2024-06-07 16:39:14.019125] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:47.393 [2024-06-07 16:39:14.019138] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:47.393 [2024-06-07 16:39:14.019143] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:47.393 [2024-06-07 16:39:14.019148] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90
00:30:47.393 [2024-06-07 16:39:14.019158] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:47.393 qpair failed and we were unable to recover it.
00:30:47.393 [2024-06-07 16:39:14.029072] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:47.393 [2024-06-07 16:39:14.029140] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:47.393 [2024-06-07 16:39:14.029163] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:47.393 [2024-06-07 16:39:14.029169] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:47.393 [2024-06-07 16:39:14.029174] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90
00:30:47.393 [2024-06-07 16:39:14.029188] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:47.393 qpair failed and we were unable to recover it.
00:30:47.393 [2024-06-07 16:39:14.039122] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:47.393 [2024-06-07 16:39:14.039187] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:47.393 [2024-06-07 16:39:14.039206] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:47.393 [2024-06-07 16:39:14.039212] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:47.393 [2024-06-07 16:39:14.039217] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90
00:30:47.393 [2024-06-07 16:39:14.039230] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:47.393 qpair failed and we were unable to recover it.
00:30:47.393 [2024-06-07 16:39:14.049139] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:47.393 [2024-06-07 16:39:14.049205] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:47.393 [2024-06-07 16:39:14.049223] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:47.393 [2024-06-07 16:39:14.049230] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:47.393 [2024-06-07 16:39:14.049234] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90
00:30:47.393 [2024-06-07 16:39:14.049248] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:47.393 qpair failed and we were unable to recover it.
00:30:47.393 [2024-06-07 16:39:14.059165] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:47.393 [2024-06-07 16:39:14.059253] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:47.393 [2024-06-07 16:39:14.059267] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:47.393 [2024-06-07 16:39:14.059273] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:47.394 [2024-06-07 16:39:14.059277] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90
00:30:47.394 [2024-06-07 16:39:14.059289] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:47.394 qpair failed and we were unable to recover it.
00:30:47.394 [2024-06-07 16:39:14.069189] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:47.394 [2024-06-07 16:39:14.069245] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:47.394 [2024-06-07 16:39:14.069257] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:47.394 [2024-06-07 16:39:14.069263] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:47.394 [2024-06-07 16:39:14.069267] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90
00:30:47.394 [2024-06-07 16:39:14.069282] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:47.394 qpair failed and we were unable to recover it.
00:30:47.394 [2024-06-07 16:39:14.079203] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:47.394 [2024-06-07 16:39:14.079261] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:47.394 [2024-06-07 16:39:14.079274] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:47.394 [2024-06-07 16:39:14.079279] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:47.394 [2024-06-07 16:39:14.079283] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90
00:30:47.394 [2024-06-07 16:39:14.079294] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:47.394 qpair failed and we were unable to recover it.
00:30:47.394 [2024-06-07 16:39:14.089136] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:47.394 [2024-06-07 16:39:14.089197] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:47.394 [2024-06-07 16:39:14.089209] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:47.394 [2024-06-07 16:39:14.089214] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:47.394 [2024-06-07 16:39:14.089218] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90
00:30:47.394 [2024-06-07 16:39:14.089229] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:47.394 qpair failed and we were unable to recover it.
00:30:47.394 [2024-06-07 16:39:14.099152] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:47.394 [2024-06-07 16:39:14.099217] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:47.394 [2024-06-07 16:39:14.099229] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:47.394 [2024-06-07 16:39:14.099234] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:47.394 [2024-06-07 16:39:14.099238] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90
00:30:47.394 [2024-06-07 16:39:14.099249] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:47.394 qpair failed and we were unable to recover it.
00:30:47.394 [2024-06-07 16:39:14.109173] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:47.394 [2024-06-07 16:39:14.109238] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:47.394 [2024-06-07 16:39:14.109250] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:47.394 [2024-06-07 16:39:14.109255] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:47.394 [2024-06-07 16:39:14.109259] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90
00:30:47.394 [2024-06-07 16:39:14.109270] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:47.394 qpair failed and we were unable to recover it.
00:30:47.394 [2024-06-07 16:39:14.119322] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:47.394 [2024-06-07 16:39:14.119380] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:47.394 [2024-06-07 16:39:14.119395] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:47.394 [2024-06-07 16:39:14.119400] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:47.394 [2024-06-07 16:39:14.119409] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90
00:30:47.394 [2024-06-07 16:39:14.119420] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:47.394 qpair failed and we were unable to recover it.
00:30:47.394 [2024-06-07 16:39:14.129350] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:47.394 [2024-06-07 16:39:14.129424] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:47.394 [2024-06-07 16:39:14.129436] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:47.394 [2024-06-07 16:39:14.129441] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:47.394 [2024-06-07 16:39:14.129446] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90
00:30:47.394 [2024-06-07 16:39:14.129457] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:47.394 qpair failed and we were unable to recover it.
00:30:47.394 [2024-06-07 16:39:14.139391] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:47.394 [2024-06-07 16:39:14.139492] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:47.394 [2024-06-07 16:39:14.139504] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:47.394 [2024-06-07 16:39:14.139509] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:47.394 [2024-06-07 16:39:14.139514] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90
00:30:47.394 [2024-06-07 16:39:14.139526] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:47.394 qpair failed and we were unable to recover it.
00:30:47.394 [2024-06-07 16:39:14.149419] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:47.394 [2024-06-07 16:39:14.149477] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:47.394 [2024-06-07 16:39:14.149489] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:47.394 [2024-06-07 16:39:14.149494] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:47.394 [2024-06-07 16:39:14.149499] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90
00:30:47.394 [2024-06-07 16:39:14.149510] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:47.394 qpair failed and we were unable to recover it.
00:30:47.394 [2024-06-07 16:39:14.159434] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:47.395 [2024-06-07 16:39:14.159496] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:47.395 [2024-06-07 16:39:14.159510] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:47.395 [2024-06-07 16:39:14.159515] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:47.395 [2024-06-07 16:39:14.159522] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90
00:30:47.395 [2024-06-07 16:39:14.159534] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:47.395 qpair failed and we were unable to recover it.
00:30:47.395 [2024-06-07 16:39:14.169462] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:47.395 [2024-06-07 16:39:14.169525] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:47.395 [2024-06-07 16:39:14.169537] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:47.395 [2024-06-07 16:39:14.169542] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:47.395 [2024-06-07 16:39:14.169547] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90
00:30:47.395 [2024-06-07 16:39:14.169557] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:47.395 qpair failed and we were unable to recover it.
00:30:47.395 [2024-06-07 16:39:14.179485] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:47.395 [2024-06-07 16:39:14.179550] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:47.395 [2024-06-07 16:39:14.179562] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:47.395 [2024-06-07 16:39:14.179568] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:47.395 [2024-06-07 16:39:14.179572] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90
00:30:47.395 [2024-06-07 16:39:14.179583] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:47.395 qpair failed and we were unable to recover it.
00:30:47.395 [2024-06-07 16:39:14.189541] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:47.395 [2024-06-07 16:39:14.189601] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:47.395 [2024-06-07 16:39:14.189613] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:47.395 [2024-06-07 16:39:14.189619] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:47.395 [2024-06-07 16:39:14.189623] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90
00:30:47.395 [2024-06-07 16:39:14.189634] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:47.395 qpair failed and we were unable to recover it.
00:30:47.395 [2024-06-07 16:39:14.199524] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:47.395 [2024-06-07 16:39:14.199597] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:47.395 [2024-06-07 16:39:14.199609] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:47.395 [2024-06-07 16:39:14.199615] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:47.395 [2024-06-07 16:39:14.199619] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90
00:30:47.395 [2024-06-07 16:39:14.199630] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:47.395 qpair failed and we were unable to recover it.
00:30:47.395 [2024-06-07 16:39:14.209581] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:47.395 [2024-06-07 16:39:14.209643] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:47.395 [2024-06-07 16:39:14.209655] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:47.395 [2024-06-07 16:39:14.209660] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:47.395 [2024-06-07 16:39:14.209665] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90
00:30:47.395 [2024-06-07 16:39:14.209675] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:47.395 qpair failed and we were unable to recover it.
00:30:47.395 [2024-06-07 16:39:14.219610] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:47.395 [2024-06-07 16:39:14.219705] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:47.395 [2024-06-07 16:39:14.219717] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:47.395 [2024-06-07 16:39:14.219722] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:47.395 [2024-06-07 16:39:14.219727] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90
00:30:47.395 [2024-06-07 16:39:14.219737] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:47.395 qpair failed and we were unable to recover it.
00:30:47.395 [2024-06-07 16:39:14.229649] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:47.395 [2024-06-07 16:39:14.229707] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:47.395 [2024-06-07 16:39:14.229719] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:47.395 [2024-06-07 16:39:14.229724] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:47.395 [2024-06-07 16:39:14.229729] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:47.395 [2024-06-07 16:39:14.229739] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:47.395 qpair failed and we were unable to recover it. 
00:30:47.395 [2024-06-07 16:39:14.239692] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:47.395 [2024-06-07 16:39:14.239752] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:47.395 [2024-06-07 16:39:14.239766] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:47.395 [2024-06-07 16:39:14.239771] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:47.395 [2024-06-07 16:39:14.239778] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:47.395 [2024-06-07 16:39:14.239789] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:47.395 qpair failed and we were unable to recover it. 
00:30:47.658 [2024-06-07 16:39:14.249713] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:47.658 [2024-06-07 16:39:14.249811] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:47.658 [2024-06-07 16:39:14.249824] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:47.658 [2024-06-07 16:39:14.249837] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:47.658 [2024-06-07 16:39:14.249841] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:47.658 [2024-06-07 16:39:14.249852] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:47.658 qpair failed and we were unable to recover it. 
00:30:47.658 [2024-06-07 16:39:14.259731] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:47.658 [2024-06-07 16:39:14.259794] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:47.658 [2024-06-07 16:39:14.259807] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:47.659 [2024-06-07 16:39:14.259812] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:47.659 [2024-06-07 16:39:14.259817] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:47.659 [2024-06-07 16:39:14.259827] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:47.659 qpair failed and we were unable to recover it. 
00:30:47.659 [2024-06-07 16:39:14.269641] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:47.659 [2024-06-07 16:39:14.269697] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:47.659 [2024-06-07 16:39:14.269710] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:47.659 [2024-06-07 16:39:14.269715] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:47.659 [2024-06-07 16:39:14.269720] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:47.659 [2024-06-07 16:39:14.269730] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:47.659 qpair failed and we were unable to recover it. 
00:30:47.659 [2024-06-07 16:39:14.279777] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:47.659 [2024-06-07 16:39:14.279840] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:47.659 [2024-06-07 16:39:14.279853] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:47.659 [2024-06-07 16:39:14.279858] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:47.659 [2024-06-07 16:39:14.279863] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:47.659 [2024-06-07 16:39:14.279873] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:47.659 qpair failed and we were unable to recover it. 
00:30:47.659 [2024-06-07 16:39:14.289856] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:47.659 [2024-06-07 16:39:14.289919] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:47.659 [2024-06-07 16:39:14.289931] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:47.659 [2024-06-07 16:39:14.289936] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:47.659 [2024-06-07 16:39:14.289941] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:47.659 [2024-06-07 16:39:14.289952] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:47.659 qpair failed and we were unable to recover it. 
00:30:47.659 [2024-06-07 16:39:14.299829] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:47.659 [2024-06-07 16:39:14.299897] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:47.659 [2024-06-07 16:39:14.299909] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:47.659 [2024-06-07 16:39:14.299914] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:47.659 [2024-06-07 16:39:14.299919] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:47.659 [2024-06-07 16:39:14.299929] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:47.659 qpair failed and we were unable to recover it. 
00:30:47.659 [2024-06-07 16:39:14.309859] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:47.659 [2024-06-07 16:39:14.309918] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:47.659 [2024-06-07 16:39:14.309930] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:47.659 [2024-06-07 16:39:14.309935] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:47.659 [2024-06-07 16:39:14.309940] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:47.659 [2024-06-07 16:39:14.309950] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:47.659 qpair failed and we were unable to recover it. 
00:30:47.659 [2024-06-07 16:39:14.319915] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:47.659 [2024-06-07 16:39:14.319983] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:47.659 [2024-06-07 16:39:14.319994] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:47.659 [2024-06-07 16:39:14.319999] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:47.659 [2024-06-07 16:39:14.320004] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:47.659 [2024-06-07 16:39:14.320014] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:47.659 qpair failed and we were unable to recover it. 
00:30:47.659 [2024-06-07 16:39:14.329923] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:47.659 [2024-06-07 16:39:14.329986] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:47.659 [2024-06-07 16:39:14.329998] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:47.659 [2024-06-07 16:39:14.330004] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:47.659 [2024-06-07 16:39:14.330008] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:47.659 [2024-06-07 16:39:14.330019] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:47.659 qpair failed and we were unable to recover it. 
00:30:47.659 [2024-06-07 16:39:14.339953] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:47.659 [2024-06-07 16:39:14.340015] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:47.659 [2024-06-07 16:39:14.340026] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:47.659 [2024-06-07 16:39:14.340034] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:47.659 [2024-06-07 16:39:14.340039] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:47.659 [2024-06-07 16:39:14.340049] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:47.659 qpair failed and we were unable to recover it. 
00:30:47.659 [2024-06-07 16:39:14.350021] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:47.659 [2024-06-07 16:39:14.350086] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:47.659 [2024-06-07 16:39:14.350098] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:47.659 [2024-06-07 16:39:14.350104] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:47.659 [2024-06-07 16:39:14.350108] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:47.660 [2024-06-07 16:39:14.350119] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:47.660 qpair failed and we were unable to recover it. 
00:30:47.660 [2024-06-07 16:39:14.360012] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:47.660 [2024-06-07 16:39:14.360074] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:47.660 [2024-06-07 16:39:14.360085] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:47.660 [2024-06-07 16:39:14.360090] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:47.660 [2024-06-07 16:39:14.360095] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:47.660 [2024-06-07 16:39:14.360106] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:47.660 qpair failed and we were unable to recover it. 
00:30:47.660 [2024-06-07 16:39:14.369975] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:47.660 [2024-06-07 16:39:14.370034] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:47.660 [2024-06-07 16:39:14.370046] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:47.660 [2024-06-07 16:39:14.370051] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:47.660 [2024-06-07 16:39:14.370055] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:47.660 [2024-06-07 16:39:14.370066] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:47.660 qpair failed and we were unable to recover it. 
00:30:47.660 [2024-06-07 16:39:14.380056] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:47.660 [2024-06-07 16:39:14.380152] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:47.660 [2024-06-07 16:39:14.380164] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:47.660 [2024-06-07 16:39:14.380170] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:47.660 [2024-06-07 16:39:14.380174] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:47.660 [2024-06-07 16:39:14.380185] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:47.660 qpair failed and we were unable to recover it. 
00:30:47.660 [2024-06-07 16:39:14.390151] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:47.660 [2024-06-07 16:39:14.390218] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:47.660 [2024-06-07 16:39:14.390236] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:47.660 [2024-06-07 16:39:14.390242] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:47.660 [2024-06-07 16:39:14.390248] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:47.660 [2024-06-07 16:39:14.390262] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:47.660 qpair failed and we were unable to recover it. 
00:30:47.660 [2024-06-07 16:39:14.400136] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:47.660 [2024-06-07 16:39:14.400241] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:47.660 [2024-06-07 16:39:14.400260] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:47.660 [2024-06-07 16:39:14.400266] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:47.660 [2024-06-07 16:39:14.400271] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:47.660 [2024-06-07 16:39:14.400284] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:47.660 qpair failed and we were unable to recover it. 
00:30:47.660 [2024-06-07 16:39:14.410147] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:47.660 [2024-06-07 16:39:14.410209] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:47.660 [2024-06-07 16:39:14.410228] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:47.660 [2024-06-07 16:39:14.410234] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:47.660 [2024-06-07 16:39:14.410239] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:47.660 [2024-06-07 16:39:14.410252] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:47.660 qpair failed and we were unable to recover it. 
00:30:47.660 [2024-06-07 16:39:14.420173] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:47.660 [2024-06-07 16:39:14.420348] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:47.660 [2024-06-07 16:39:14.420367] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:47.660 [2024-06-07 16:39:14.420373] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:47.660 [2024-06-07 16:39:14.420378] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:47.660 [2024-06-07 16:39:14.420392] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:47.660 qpair failed and we were unable to recover it. 
00:30:47.660 [2024-06-07 16:39:14.430206] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:47.660 [2024-06-07 16:39:14.430262] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:47.660 [2024-06-07 16:39:14.430279] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:47.660 [2024-06-07 16:39:14.430284] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:47.660 [2024-06-07 16:39:14.430289] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:47.660 [2024-06-07 16:39:14.430300] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:47.660 qpair failed and we were unable to recover it. 
00:30:47.660 [2024-06-07 16:39:14.440226] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:47.660 [2024-06-07 16:39:14.440290] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:47.660 [2024-06-07 16:39:14.440303] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:47.660 [2024-06-07 16:39:14.440308] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:47.660 [2024-06-07 16:39:14.440313] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:47.660 [2024-06-07 16:39:14.440323] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:47.660 qpair failed and we were unable to recover it. 
00:30:47.660 [2024-06-07 16:39:14.450265] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:47.661 [2024-06-07 16:39:14.450325] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:47.661 [2024-06-07 16:39:14.450337] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:47.661 [2024-06-07 16:39:14.450342] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:47.661 [2024-06-07 16:39:14.450347] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:47.661 [2024-06-07 16:39:14.450357] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:47.661 qpair failed and we were unable to recover it. 
00:30:47.661 [2024-06-07 16:39:14.460292] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:47.661 [2024-06-07 16:39:14.460355] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:47.661 [2024-06-07 16:39:14.460367] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:47.661 [2024-06-07 16:39:14.460372] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:47.661 [2024-06-07 16:39:14.460377] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:47.661 [2024-06-07 16:39:14.460387] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:47.661 qpair failed and we were unable to recover it. 
00:30:47.661 [2024-06-07 16:39:14.470317] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:47.661 [2024-06-07 16:39:14.470407] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:47.661 [2024-06-07 16:39:14.470419] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:47.661 [2024-06-07 16:39:14.470425] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:47.661 [2024-06-07 16:39:14.470429] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:47.661 [2024-06-07 16:39:14.470443] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:47.661 qpair failed and we were unable to recover it. 
00:30:47.661 [2024-06-07 16:39:14.480499] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:47.661 [2024-06-07 16:39:14.480605] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:47.661 [2024-06-07 16:39:14.480617] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:47.661 [2024-06-07 16:39:14.480623] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:47.661 [2024-06-07 16:39:14.480627] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:47.661 [2024-06-07 16:39:14.480638] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:47.661 qpair failed and we were unable to recover it. 
00:30:47.661 [2024-06-07 16:39:14.490368] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:47.661 [2024-06-07 16:39:14.490449] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:47.661 [2024-06-07 16:39:14.490461] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:47.661 [2024-06-07 16:39:14.490467] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:47.661 [2024-06-07 16:39:14.490472] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:47.661 [2024-06-07 16:39:14.490482] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:47.661 qpair failed and we were unable to recover it. 
00:30:47.661 [2024-06-07 16:39:14.500377] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:47.661 [2024-06-07 16:39:14.500447] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:47.661 [2024-06-07 16:39:14.500459] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:47.661 [2024-06-07 16:39:14.500464] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:47.661 [2024-06-07 16:39:14.500469] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:47.661 [2024-06-07 16:39:14.500479] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:47.661 qpair failed and we were unable to recover it. 
00:30:47.924 [2024-06-07 16:39:14.510420] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:47.924 [2024-06-07 16:39:14.510477] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:47.924 [2024-06-07 16:39:14.510490] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:47.924 [2024-06-07 16:39:14.510495] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:47.924 [2024-06-07 16:39:14.510499] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:47.924 [2024-06-07 16:39:14.510510] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:47.924 qpair failed and we were unable to recover it. 
00:30:47.924 [2024-06-07 16:39:14.520459] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:47.924 [2024-06-07 16:39:14.520517] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:47.924 [2024-06-07 16:39:14.520532] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:47.924 [2024-06-07 16:39:14.520537] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:47.924 [2024-06-07 16:39:14.520541] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:47.924 [2024-06-07 16:39:14.520552] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:47.924 qpair failed and we were unable to recover it. 
00:30:47.924 [2024-06-07 16:39:14.530461] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:47.924 [2024-06-07 16:39:14.530522] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:47.924 [2024-06-07 16:39:14.530534] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:47.924 [2024-06-07 16:39:14.530539] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:47.924 [2024-06-07 16:39:14.530543] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:47.924 [2024-06-07 16:39:14.530554] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:47.924 qpair failed and we were unable to recover it. 
00:30:47.924 [2024-06-07 16:39:14.540526] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:47.924 [2024-06-07 16:39:14.540592] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:47.924 [2024-06-07 16:39:14.540604] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:47.924 [2024-06-07 16:39:14.540610] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:47.924 [2024-06-07 16:39:14.540614] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:47.924 [2024-06-07 16:39:14.540624] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:47.924 qpair failed and we were unable to recover it. 
00:30:47.924 [2024-06-07 16:39:14.550559] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:47.924 [2024-06-07 16:39:14.550616] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:47.924 [2024-06-07 16:39:14.550628] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:47.924 [2024-06-07 16:39:14.550633] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:47.924 [2024-06-07 16:39:14.550638] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:47.924 [2024-06-07 16:39:14.550648] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:47.924 qpair failed and we were unable to recover it. 
00:30:47.924 [2024-06-07 16:39:14.560585] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:47.924 [2024-06-07 16:39:14.560644] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:47.925 [2024-06-07 16:39:14.560656] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:47.925 [2024-06-07 16:39:14.560661] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:47.925 [2024-06-07 16:39:14.560668] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:47.925 [2024-06-07 16:39:14.560679] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:47.925 qpair failed and we were unable to recover it. 
00:30:47.925 [2024-06-07 16:39:14.570601] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:47.925 [2024-06-07 16:39:14.570662] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:47.925 [2024-06-07 16:39:14.570674] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:47.925 [2024-06-07 16:39:14.570679] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:47.925 [2024-06-07 16:39:14.570684] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:47.925 [2024-06-07 16:39:14.570694] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:47.925 qpair failed and we were unable to recover it. 
00:30:47.925 [2024-06-07 16:39:14.580740] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:47.925 [2024-06-07 16:39:14.580805] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:47.925 [2024-06-07 16:39:14.580817] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:47.925 [2024-06-07 16:39:14.580823] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:47.925 [2024-06-07 16:39:14.580827] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:47.925 [2024-06-07 16:39:14.580838] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:47.925 qpair failed and we were unable to recover it. 
00:30:47.925 [2024-06-07 16:39:14.590789] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:47.925 [2024-06-07 16:39:14.590860] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:47.925 [2024-06-07 16:39:14.590872] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:47.925 [2024-06-07 16:39:14.590877] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:47.925 [2024-06-07 16:39:14.590882] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:47.925 [2024-06-07 16:39:14.590893] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:47.925 qpair failed and we were unable to recover it. 
00:30:47.925 [2024-06-07 16:39:14.600676] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:47.925 [2024-06-07 16:39:14.600734] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:47.925 [2024-06-07 16:39:14.600747] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:47.925 [2024-06-07 16:39:14.600752] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:47.925 [2024-06-07 16:39:14.600757] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:47.925 [2024-06-07 16:39:14.600770] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:47.925 qpair failed and we were unable to recover it. 
00:30:47.925 [2024-06-07 16:39:14.610701] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:47.925 [2024-06-07 16:39:14.610765] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:47.925 [2024-06-07 16:39:14.610778] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:47.925 [2024-06-07 16:39:14.610783] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:47.925 [2024-06-07 16:39:14.610788] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:47.925 [2024-06-07 16:39:14.610798] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:47.925 qpair failed and we were unable to recover it. 
00:30:47.925 [2024-06-07 16:39:14.620728] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:47.925 [2024-06-07 16:39:14.620792] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:47.925 [2024-06-07 16:39:14.620804] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:47.925 [2024-06-07 16:39:14.620810] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:47.925 [2024-06-07 16:39:14.620814] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:47.925 [2024-06-07 16:39:14.620825] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:47.925 qpair failed and we were unable to recover it. 
00:30:47.925 [2024-06-07 16:39:14.630765] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:47.925 [2024-06-07 16:39:14.630827] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:47.925 [2024-06-07 16:39:14.630839] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:47.925 [2024-06-07 16:39:14.630844] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:47.925 [2024-06-07 16:39:14.630849] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:47.925 [2024-06-07 16:39:14.630859] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:47.925 qpair failed and we were unable to recover it. 
00:30:47.925 [2024-06-07 16:39:14.640899] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:47.925 [2024-06-07 16:39:14.640956] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:47.925 [2024-06-07 16:39:14.640968] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:47.925 [2024-06-07 16:39:14.640973] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:47.925 [2024-06-07 16:39:14.640978] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:47.925 [2024-06-07 16:39:14.640988] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:47.925 qpair failed and we were unable to recover it. 
00:30:47.925 [2024-06-07 16:39:14.650871] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:47.925 [2024-06-07 16:39:14.650968] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:47.925 [2024-06-07 16:39:14.650980] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:47.925 [2024-06-07 16:39:14.650988] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:47.925 [2024-06-07 16:39:14.650993] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:47.925 [2024-06-07 16:39:14.651004] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:47.925 qpair failed and we were unable to recover it. 
00:30:47.925 [2024-06-07 16:39:14.660848] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:47.925 [2024-06-07 16:39:14.660911] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:47.925 [2024-06-07 16:39:14.660923] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:47.925 [2024-06-07 16:39:14.660928] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:47.925 [2024-06-07 16:39:14.660933] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:47.925 [2024-06-07 16:39:14.660943] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:47.925 qpair failed and we were unable to recover it. 
00:30:47.925 [2024-06-07 16:39:14.670871] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:47.925 [2024-06-07 16:39:14.670973] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:47.925 [2024-06-07 16:39:14.670985] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:47.925 [2024-06-07 16:39:14.670991] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:47.925 [2024-06-07 16:39:14.670996] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:47.925 [2024-06-07 16:39:14.671006] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:47.925 qpair failed and we were unable to recover it. 
00:30:47.925 [2024-06-07 16:39:14.680918] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:47.925 [2024-06-07 16:39:14.680976] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:47.925 [2024-06-07 16:39:14.680988] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:47.925 [2024-06-07 16:39:14.680994] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:47.925 [2024-06-07 16:39:14.680998] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:47.925 [2024-06-07 16:39:14.681009] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:47.925 qpair failed and we were unable to recover it. 
00:30:47.925 [2024-06-07 16:39:14.690920] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:47.925 [2024-06-07 16:39:14.690981] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:47.925 [2024-06-07 16:39:14.690993] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:47.925 [2024-06-07 16:39:14.690998] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:47.925 [2024-06-07 16:39:14.691003] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:47.926 [2024-06-07 16:39:14.691013] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:47.926 qpair failed and we were unable to recover it. 
00:30:47.926 [2024-06-07 16:39:14.700956] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:47.926 [2024-06-07 16:39:14.701025] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:47.926 [2024-06-07 16:39:14.701037] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:47.926 [2024-06-07 16:39:14.701042] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:47.926 [2024-06-07 16:39:14.701047] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:47.926 [2024-06-07 16:39:14.701058] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:47.926 qpair failed and we were unable to recover it. 
00:30:47.926 [2024-06-07 16:39:14.710962] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:47.926 [2024-06-07 16:39:14.711022] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:47.926 [2024-06-07 16:39:14.711034] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:47.926 [2024-06-07 16:39:14.711039] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:47.926 [2024-06-07 16:39:14.711043] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:47.926 [2024-06-07 16:39:14.711054] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:47.926 qpair failed and we were unable to recover it. 
00:30:47.926 [2024-06-07 16:39:14.720996] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:47.926 [2024-06-07 16:39:14.721053] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:47.926 [2024-06-07 16:39:14.721066] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:47.926 [2024-06-07 16:39:14.721071] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:47.926 [2024-06-07 16:39:14.721075] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:47.926 [2024-06-07 16:39:14.721087] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:47.926 qpair failed and we were unable to recover it. 
00:30:47.926 [2024-06-07 16:39:14.731096] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:47.926 [2024-06-07 16:39:14.731204] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:47.926 [2024-06-07 16:39:14.731217] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:47.926 [2024-06-07 16:39:14.731222] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:47.926 [2024-06-07 16:39:14.731227] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:47.926 [2024-06-07 16:39:14.731237] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:47.926 qpair failed and we were unable to recover it. 
00:30:47.926 [2024-06-07 16:39:14.741093] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:47.926 [2024-06-07 16:39:14.741171] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:47.926 [2024-06-07 16:39:14.741189] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:47.926 [2024-06-07 16:39:14.741198] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:47.926 [2024-06-07 16:39:14.741204] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:47.926 [2024-06-07 16:39:14.741218] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:47.926 qpair failed and we were unable to recover it. 
00:30:47.926 [2024-06-07 16:39:14.751135] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:47.926 [2024-06-07 16:39:14.751210] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:47.926 [2024-06-07 16:39:14.751229] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:47.926 [2024-06-07 16:39:14.751235] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:47.926 [2024-06-07 16:39:14.751240] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:47.926 [2024-06-07 16:39:14.751254] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:47.926 qpair failed and we were unable to recover it. 
00:30:47.926 [2024-06-07 16:39:14.761139] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:47.926 [2024-06-07 16:39:14.761241] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:47.926 [2024-06-07 16:39:14.761260] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:47.926 [2024-06-07 16:39:14.761267] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:47.926 [2024-06-07 16:39:14.761272] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:47.926 [2024-06-07 16:39:14.761285] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:47.926 qpair failed and we were unable to recover it. 
00:30:47.926 [2024-06-07 16:39:14.771205] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:47.926 [2024-06-07 16:39:14.771265] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:47.926 [2024-06-07 16:39:14.771279] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:47.926 [2024-06-07 16:39:14.771284] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:47.926 [2024-06-07 16:39:14.771289] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:47.926 [2024-06-07 16:39:14.771300] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:47.926 qpair failed and we were unable to recover it. 
00:30:48.188 [2024-06-07 16:39:14.781189] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:48.188 [2024-06-07 16:39:14.781258] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:48.188 [2024-06-07 16:39:14.781271] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:48.188 [2024-06-07 16:39:14.781276] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:48.188 [2024-06-07 16:39:14.781281] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:48.188 [2024-06-07 16:39:14.781292] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:48.188 qpair failed and we were unable to recover it. 
00:30:48.188 [2024-06-07 16:39:14.791181] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:48.188 [2024-06-07 16:39:14.791239] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:48.188 [2024-06-07 16:39:14.791251] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:48.188 [2024-06-07 16:39:14.791257] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:48.188 [2024-06-07 16:39:14.791261] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:48.188 [2024-06-07 16:39:14.791272] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:48.188 qpair failed and we were unable to recover it. 
00:30:48.188 [2024-06-07 16:39:14.801274] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:48.189 [2024-06-07 16:39:14.801331] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:48.189 [2024-06-07 16:39:14.801343] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:48.189 [2024-06-07 16:39:14.801349] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:48.189 [2024-06-07 16:39:14.801354] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:48.189 [2024-06-07 16:39:14.801366] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:48.189 qpair failed and we were unable to recover it. 
00:30:48.189 [2024-06-07 16:39:14.811348] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:48.189 [2024-06-07 16:39:14.811448] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:48.189 [2024-06-07 16:39:14.811462] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:48.189 [2024-06-07 16:39:14.811467] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:48.189 [2024-06-07 16:39:14.811472] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:48.189 [2024-06-07 16:39:14.811483] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:48.189 qpair failed and we were unable to recover it. 
00:30:48.189 [2024-06-07 16:39:14.821313] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:48.189 [2024-06-07 16:39:14.821415] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:48.189 [2024-06-07 16:39:14.821428] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:48.189 [2024-06-07 16:39:14.821433] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:48.189 [2024-06-07 16:39:14.821437] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:48.189 [2024-06-07 16:39:14.821449] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:48.189 qpair failed and we were unable to recover it. 
00:30:48.189 [2024-06-07 16:39:14.831349] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:48.189 [2024-06-07 16:39:14.831414] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:48.189 [2024-06-07 16:39:14.831430] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:48.189 [2024-06-07 16:39:14.831435] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:48.189 [2024-06-07 16:39:14.831439] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:48.189 [2024-06-07 16:39:14.831450] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:48.189 qpair failed and we were unable to recover it. 
00:30:48.189 [2024-06-07 16:39:14.841368] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:48.189 [2024-06-07 16:39:14.841431] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:48.189 [2024-06-07 16:39:14.841443] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:48.189 [2024-06-07 16:39:14.841448] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:48.189 [2024-06-07 16:39:14.841453] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:48.189 [2024-06-07 16:39:14.841463] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:48.189 qpair failed and we were unable to recover it. 
00:30:48.189 [2024-06-07 16:39:14.851404] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:48.189 [2024-06-07 16:39:14.851465] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:48.189 [2024-06-07 16:39:14.851477] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:48.189 [2024-06-07 16:39:14.851482] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:48.189 [2024-06-07 16:39:14.851487] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:48.189 [2024-06-07 16:39:14.851497] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:48.189 qpair failed and we were unable to recover it. 
00:30:48.189 [2024-06-07 16:39:14.861421] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:48.189 [2024-06-07 16:39:14.861487] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:48.189 [2024-06-07 16:39:14.861499] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:48.189 [2024-06-07 16:39:14.861505] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:48.189 [2024-06-07 16:39:14.861509] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:48.189 [2024-06-07 16:39:14.861521] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:48.189 qpair failed and we were unable to recover it. 
00:30:48.189 [2024-06-07 16:39:14.871465] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:48.189 [2024-06-07 16:39:14.871524] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:48.189 [2024-06-07 16:39:14.871536] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:48.189 [2024-06-07 16:39:14.871541] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:48.189 [2024-06-07 16:39:14.871545] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:48.189 [2024-06-07 16:39:14.871559] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:48.189 qpair failed and we were unable to recover it. 
00:30:48.189 [2024-06-07 16:39:14.881368] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:48.189 [2024-06-07 16:39:14.881458] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:48.189 [2024-06-07 16:39:14.881471] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:48.189 [2024-06-07 16:39:14.881477] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:48.189 [2024-06-07 16:39:14.881482] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:48.189 [2024-06-07 16:39:14.881493] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:48.189 qpair failed and we were unable to recover it. 
00:30:48.189 [2024-06-07 16:39:14.891502] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:48.189 [2024-06-07 16:39:14.891563] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:48.189 [2024-06-07 16:39:14.891575] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:48.189 [2024-06-07 16:39:14.891581] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:48.189 [2024-06-07 16:39:14.891585] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:48.189 [2024-06-07 16:39:14.891596] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:48.189 qpair failed and we were unable to recover it. 
00:30:48.189 [2024-06-07 16:39:14.901525] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:48.189 [2024-06-07 16:39:14.901588] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:48.189 [2024-06-07 16:39:14.901600] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:48.189 [2024-06-07 16:39:14.901606] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:48.189 [2024-06-07 16:39:14.901610] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:48.189 [2024-06-07 16:39:14.901621] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:48.189 qpair failed and we were unable to recover it. 
00:30:48.189 [2024-06-07 16:39:14.911617] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:48.189 [2024-06-07 16:39:14.911694] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:48.189 [2024-06-07 16:39:14.911706] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:48.189 [2024-06-07 16:39:14.911711] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:48.189 [2024-06-07 16:39:14.911716] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:48.189 [2024-06-07 16:39:14.911728] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:48.189 qpair failed and we were unable to recover it. 
00:30:48.189 [2024-06-07 16:39:14.921593] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:48.189 [2024-06-07 16:39:14.921654] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:48.189 [2024-06-07 16:39:14.921668] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:48.189 [2024-06-07 16:39:14.921674] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:48.189 [2024-06-07 16:39:14.921678] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:48.189 [2024-06-07 16:39:14.921689] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:48.189 qpair failed and we were unable to recover it. 
00:30:48.189 [2024-06-07 16:39:14.931627] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:48.189 [2024-06-07 16:39:14.931690] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:48.189 [2024-06-07 16:39:14.931702] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:48.190 [2024-06-07 16:39:14.931707] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:48.190 [2024-06-07 16:39:14.931712] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:48.190 [2024-06-07 16:39:14.931722] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:48.190 qpair failed and we were unable to recover it. 
00:30:48.190 [2024-06-07 16:39:14.941677] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:48.190 [2024-06-07 16:39:14.941746] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:48.190 [2024-06-07 16:39:14.941758] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:48.190 [2024-06-07 16:39:14.941763] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:48.190 [2024-06-07 16:39:14.941768] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:48.190 [2024-06-07 16:39:14.941778] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:48.190 qpair failed and we were unable to recover it. 
00:30:48.190 [2024-06-07 16:39:14.951713] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:48.190 [2024-06-07 16:39:14.951787] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:48.190 [2024-06-07 16:39:14.951799] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:48.190 [2024-06-07 16:39:14.951804] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:48.190 [2024-06-07 16:39:14.951809] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:48.190 [2024-06-07 16:39:14.951820] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:48.190 qpair failed and we were unable to recover it. 
00:30:48.190 [2024-06-07 16:39:14.961764] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:48.190 [2024-06-07 16:39:14.961825] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:48.190 [2024-06-07 16:39:14.961837] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:48.190 [2024-06-07 16:39:14.961842] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:48.190 [2024-06-07 16:39:14.961853] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:48.190 [2024-06-07 16:39:14.961864] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:48.190 qpair failed and we were unable to recover it. 
00:30:48.190 [2024-06-07 16:39:14.971793] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:48.190 [2024-06-07 16:39:14.971862] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:48.190 [2024-06-07 16:39:14.971875] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:48.190 [2024-06-07 16:39:14.971880] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:48.190 [2024-06-07 16:39:14.971885] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:48.190 [2024-06-07 16:39:14.971895] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:48.190 qpair failed and we were unable to recover it. 
00:30:48.190 [2024-06-07 16:39:14.981775] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:48.190 [2024-06-07 16:39:14.981840] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:48.190 [2024-06-07 16:39:14.981852] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:48.190 [2024-06-07 16:39:14.981857] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:48.190 [2024-06-07 16:39:14.981862] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:48.190 [2024-06-07 16:39:14.981873] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:48.190 qpair failed and we were unable to recover it. 
00:30:48.190 [2024-06-07 16:39:14.991799] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:48.190 [2024-06-07 16:39:14.991856] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:48.190 [2024-06-07 16:39:14.991868] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:48.190 [2024-06-07 16:39:14.991873] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:48.190 [2024-06-07 16:39:14.991878] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:48.190 [2024-06-07 16:39:14.991888] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:48.190 qpair failed and we were unable to recover it. 
00:30:48.190 [2024-06-07 16:39:15.001807] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:48.190 [2024-06-07 16:39:15.001864] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:48.190 [2024-06-07 16:39:15.001876] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:48.190 [2024-06-07 16:39:15.001882] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:48.190 [2024-06-07 16:39:15.001886] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:48.190 [2024-06-07 16:39:15.001897] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:48.190 qpair failed and we were unable to recover it. 
00:30:48.190 [2024-06-07 16:39:15.011843] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:48.190 [2024-06-07 16:39:15.011905] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:48.190 [2024-06-07 16:39:15.011917] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:48.190 [2024-06-07 16:39:15.011922] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:48.190 [2024-06-07 16:39:15.011927] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:48.190 [2024-06-07 16:39:15.011938] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:48.190 qpair failed and we were unable to recover it. 
00:30:48.190 [2024-06-07 16:39:15.021907] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:48.190 [2024-06-07 16:39:15.021973] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:48.190 [2024-06-07 16:39:15.021986] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:48.190 [2024-06-07 16:39:15.021991] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:48.190 [2024-06-07 16:39:15.021996] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:48.190 [2024-06-07 16:39:15.022006] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:48.190 qpair failed and we were unable to recover it. 
00:30:48.190 [2024-06-07 16:39:15.031914] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:48.190 [2024-06-07 16:39:15.031973] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:48.190 [2024-06-07 16:39:15.031985] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:48.190 [2024-06-07 16:39:15.031991] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:48.190 [2024-06-07 16:39:15.031995] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:48.190 [2024-06-07 16:39:15.032006] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:48.190 qpair failed and we were unable to recover it. 
00:30:48.454 [2024-06-07 16:39:15.041951] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:48.454 [2024-06-07 16:39:15.042016] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:48.454 [2024-06-07 16:39:15.042028] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:48.454 [2024-06-07 16:39:15.042033] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:48.454 [2024-06-07 16:39:15.042038] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:48.454 [2024-06-07 16:39:15.042049] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:48.454 qpair failed and we were unable to recover it. 
00:30:48.454 [2024-06-07 16:39:15.051961] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:48.454 [2024-06-07 16:39:15.052025] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:48.454 [2024-06-07 16:39:15.052037] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:48.454 [2024-06-07 16:39:15.052043] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:48.454 [2024-06-07 16:39:15.052051] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:48.454 [2024-06-07 16:39:15.052062] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:48.454 qpair failed and we were unable to recover it. 
00:30:48.454 [2024-06-07 16:39:15.061974] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:48.454 [2024-06-07 16:39:15.062038] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:48.454 [2024-06-07 16:39:15.062051] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:48.454 [2024-06-07 16:39:15.062056] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:48.454 [2024-06-07 16:39:15.062060] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:48.454 [2024-06-07 16:39:15.062071] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:48.454 qpair failed and we were unable to recover it. 
00:30:48.454 [2024-06-07 16:39:15.071992] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:48.454 [2024-06-07 16:39:15.072060] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:48.454 [2024-06-07 16:39:15.072079] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:48.454 [2024-06-07 16:39:15.072085] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:48.454 [2024-06-07 16:39:15.072090] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:48.454 [2024-06-07 16:39:15.072103] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:48.454 qpair failed and we were unable to recover it. 
00:30:48.454 [2024-06-07 16:39:15.082073] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:48.454 [2024-06-07 16:39:15.082136] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:48.454 [2024-06-07 16:39:15.082155] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:48.454 [2024-06-07 16:39:15.082161] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:48.454 [2024-06-07 16:39:15.082166] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:48.454 [2024-06-07 16:39:15.082180] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:48.454 qpair failed and we were unable to recover it. 
00:30:48.454 [2024-06-07 16:39:15.092085] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:48.454 [2024-06-07 16:39:15.092148] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:48.454 [2024-06-07 16:39:15.092162] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:48.454 [2024-06-07 16:39:15.092168] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:48.454 [2024-06-07 16:39:15.092172] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:48.454 [2024-06-07 16:39:15.092184] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:48.454 qpair failed and we were unable to recover it. 
00:30:48.454 [2024-06-07 16:39:15.102141] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:48.455 [2024-06-07 16:39:15.102207] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:48.455 [2024-06-07 16:39:15.102220] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:48.455 [2024-06-07 16:39:15.102225] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:48.455 [2024-06-07 16:39:15.102230] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:48.455 [2024-06-07 16:39:15.102240] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:48.455 qpair failed and we were unable to recover it. 
00:30:48.455 [2024-06-07 16:39:15.112133] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:48.455 [2024-06-07 16:39:15.112201] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:48.455 [2024-06-07 16:39:15.112220] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:48.455 [2024-06-07 16:39:15.112226] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:48.455 [2024-06-07 16:39:15.112231] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:48.455 [2024-06-07 16:39:15.112245] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:48.455 qpair failed and we were unable to recover it. 
00:30:48.455 [2024-06-07 16:39:15.122149] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:48.455 [2024-06-07 16:39:15.122214] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:48.455 [2024-06-07 16:39:15.122233] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:48.455 [2024-06-07 16:39:15.122238] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:48.455 [2024-06-07 16:39:15.122243] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:48.455 [2024-06-07 16:39:15.122257] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:48.455 qpair failed and we were unable to recover it. 
00:30:48.455 [2024-06-07 16:39:15.132185] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:48.455 [2024-06-07 16:39:15.132250] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:48.455 [2024-06-07 16:39:15.132269] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:48.455 [2024-06-07 16:39:15.132275] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:48.455 [2024-06-07 16:39:15.132280] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:48.455 [2024-06-07 16:39:15.132293] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:48.455 qpair failed and we were unable to recover it. 
00:30:48.455 [2024-06-07 16:39:15.142207] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:48.455 [2024-06-07 16:39:15.142305] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:48.455 [2024-06-07 16:39:15.142319] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:48.455 [2024-06-07 16:39:15.142329] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:48.455 [2024-06-07 16:39:15.142334] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:48.455 [2024-06-07 16:39:15.142346] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:48.455 qpair failed and we were unable to recover it. 
00:30:48.455 [2024-06-07 16:39:15.152231] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:48.455 [2024-06-07 16:39:15.152288] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:48.455 [2024-06-07 16:39:15.152302] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:48.455 [2024-06-07 16:39:15.152307] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:48.455 [2024-06-07 16:39:15.152312] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:48.455 [2024-06-07 16:39:15.152325] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:48.455 qpair failed and we were unable to recover it. 
00:30:48.720 [2024-06-07 16:39:15.513209] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:48.720 [2024-06-07 16:39:15.513267] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:48.720 [2024-06-07 16:39:15.513280] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:48.720 [2024-06-07 16:39:15.513285] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:48.720 [2024-06-07 16:39:15.513290] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:48.720 [2024-06-07 16:39:15.513301] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:48.720 qpair failed and we were unable to recover it. 
00:30:48.720 [2024-06-07 16:39:15.523171] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:48.720 [2024-06-07 16:39:15.523228] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:48.720 [2024-06-07 16:39:15.523240] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:48.720 [2024-06-07 16:39:15.523245] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:48.720 [2024-06-07 16:39:15.523250] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:48.720 [2024-06-07 16:39:15.523260] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:48.720 qpair failed and we were unable to recover it. 
00:30:48.720 [2024-06-07 16:39:15.533265] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:48.720 [2024-06-07 16:39:15.533329] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:48.720 [2024-06-07 16:39:15.533342] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:48.720 [2024-06-07 16:39:15.533347] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:48.720 [2024-06-07 16:39:15.533352] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:48.720 [2024-06-07 16:39:15.533363] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:48.720 qpair failed and we were unable to recover it. 
00:30:48.720 [2024-06-07 16:39:15.543290] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:48.720 [2024-06-07 16:39:15.543351] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:48.720 [2024-06-07 16:39:15.543363] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:48.720 [2024-06-07 16:39:15.543372] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:48.720 [2024-06-07 16:39:15.543376] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:48.720 [2024-06-07 16:39:15.543387] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:48.720 qpair failed and we were unable to recover it. 
00:30:48.720 [2024-06-07 16:39:15.553270] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:48.720 [2024-06-07 16:39:15.553324] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:48.720 [2024-06-07 16:39:15.553336] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:48.720 [2024-06-07 16:39:15.553341] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:48.720 [2024-06-07 16:39:15.553345] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:48.720 [2024-06-07 16:39:15.553356] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:48.720 qpair failed and we were unable to recover it. 
00:30:48.720 [2024-06-07 16:39:15.563314] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:48.720 [2024-06-07 16:39:15.563364] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:48.720 [2024-06-07 16:39:15.563376] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:48.720 [2024-06-07 16:39:15.563381] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:48.720 [2024-06-07 16:39:15.563385] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:48.720 [2024-06-07 16:39:15.563396] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:48.720 qpair failed and we were unable to recover it. 
00:30:48.983 [2024-06-07 16:39:15.573391] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:48.983 [2024-06-07 16:39:15.573454] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:48.983 [2024-06-07 16:39:15.573466] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:48.983 [2024-06-07 16:39:15.573472] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:48.983 [2024-06-07 16:39:15.573476] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:48.983 [2024-06-07 16:39:15.573487] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:48.983 qpair failed and we were unable to recover it. 
00:30:48.983 [2024-06-07 16:39:15.583398] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:48.983 [2024-06-07 16:39:15.583465] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:48.983 [2024-06-07 16:39:15.583477] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:48.983 [2024-06-07 16:39:15.583482] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:48.983 [2024-06-07 16:39:15.583487] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:48.983 [2024-06-07 16:39:15.583498] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:48.983 qpair failed and we were unable to recover it. 
00:30:48.983 [2024-06-07 16:39:15.593391] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:48.983 [2024-06-07 16:39:15.593446] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:48.983 [2024-06-07 16:39:15.593458] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:48.983 [2024-06-07 16:39:15.593464] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:48.983 [2024-06-07 16:39:15.593468] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:48.983 [2024-06-07 16:39:15.593479] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:48.983 qpair failed and we were unable to recover it. 
00:30:48.983 [2024-06-07 16:39:15.603420] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:48.983 [2024-06-07 16:39:15.603478] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:48.983 [2024-06-07 16:39:15.603490] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:48.983 [2024-06-07 16:39:15.603496] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:48.983 [2024-06-07 16:39:15.603500] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:48.983 [2024-06-07 16:39:15.603512] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:48.983 qpair failed and we were unable to recover it. 
00:30:48.983 [2024-06-07 16:39:15.613481] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:48.983 [2024-06-07 16:39:15.613542] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:48.983 [2024-06-07 16:39:15.613554] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:48.983 [2024-06-07 16:39:15.613559] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:48.983 [2024-06-07 16:39:15.613564] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:48.983 [2024-06-07 16:39:15.613575] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:48.983 qpair failed and we were unable to recover it. 
00:30:48.983 [2024-06-07 16:39:15.623407] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:48.983 [2024-06-07 16:39:15.623475] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:48.983 [2024-06-07 16:39:15.623488] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:48.983 [2024-06-07 16:39:15.623493] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:48.983 [2024-06-07 16:39:15.623498] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:48.983 [2024-06-07 16:39:15.623509] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:48.983 qpair failed and we were unable to recover it. 
00:30:48.983 [2024-06-07 16:39:15.633498] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:48.983 [2024-06-07 16:39:15.633549] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:48.983 [2024-06-07 16:39:15.633564] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:48.983 [2024-06-07 16:39:15.633570] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:48.983 [2024-06-07 16:39:15.633574] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:48.983 [2024-06-07 16:39:15.633585] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:48.983 qpair failed and we were unable to recover it. 
00:30:48.983 [2024-06-07 16:39:15.643414] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:48.983 [2024-06-07 16:39:15.643475] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:48.983 [2024-06-07 16:39:15.643487] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:48.983 [2024-06-07 16:39:15.643492] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:48.983 [2024-06-07 16:39:15.643497] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:48.983 [2024-06-07 16:39:15.643508] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:48.983 qpair failed and we were unable to recover it. 
00:30:48.983 [2024-06-07 16:39:15.653490] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:48.983 [2024-06-07 16:39:15.653588] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:48.983 [2024-06-07 16:39:15.653601] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:48.983 [2024-06-07 16:39:15.653606] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:48.983 [2024-06-07 16:39:15.653611] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:48.983 [2024-06-07 16:39:15.653622] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:48.983 qpair failed and we were unable to recover it. 
00:30:48.983 [2024-06-07 16:39:15.663505] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:48.983 [2024-06-07 16:39:15.663606] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:48.983 [2024-06-07 16:39:15.663619] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:48.983 [2024-06-07 16:39:15.663624] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:48.983 [2024-06-07 16:39:15.663629] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:48.983 [2024-06-07 16:39:15.663639] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:48.983 qpair failed and we were unable to recover it. 
00:30:48.983 [2024-06-07 16:39:15.673604] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:48.983 [2024-06-07 16:39:15.673656] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:48.983 [2024-06-07 16:39:15.673668] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:48.983 [2024-06-07 16:39:15.673673] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:48.983 [2024-06-07 16:39:15.673677] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:48.983 [2024-06-07 16:39:15.673691] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:48.983 qpair failed and we were unable to recover it. 
00:30:48.983 [2024-06-07 16:39:15.683644] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:48.983 [2024-06-07 16:39:15.683696] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:48.983 [2024-06-07 16:39:15.683708] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:48.983 [2024-06-07 16:39:15.683713] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:48.983 [2024-06-07 16:39:15.683718] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:48.983 [2024-06-07 16:39:15.683729] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:48.983 qpair failed and we were unable to recover it. 
00:30:48.984 [2024-06-07 16:39:15.693755] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:48.984 [2024-06-07 16:39:15.693864] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:48.984 [2024-06-07 16:39:15.693876] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:48.984 [2024-06-07 16:39:15.693882] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:48.984 [2024-06-07 16:39:15.693886] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:48.984 [2024-06-07 16:39:15.693896] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:48.984 qpair failed and we were unable to recover it. 
00:30:48.984 [2024-06-07 16:39:15.703704] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:48.984 [2024-06-07 16:39:15.703812] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:48.984 [2024-06-07 16:39:15.703824] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:48.984 [2024-06-07 16:39:15.703830] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:48.984 [2024-06-07 16:39:15.703834] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:48.984 [2024-06-07 16:39:15.703844] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:48.984 qpair failed and we were unable to recover it. 
00:30:48.984 [2024-06-07 16:39:15.713714] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:48.984 [2024-06-07 16:39:15.713777] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:48.984 [2024-06-07 16:39:15.713789] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:48.984 [2024-06-07 16:39:15.713794] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:48.984 [2024-06-07 16:39:15.713798] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:48.984 [2024-06-07 16:39:15.713808] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:48.984 qpair failed and we were unable to recover it. 
00:30:48.984 [2024-06-07 16:39:15.723750] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:48.984 [2024-06-07 16:39:15.723807] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:48.984 [2024-06-07 16:39:15.723825] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:48.984 [2024-06-07 16:39:15.723830] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:48.984 [2024-06-07 16:39:15.723834] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:48.984 [2024-06-07 16:39:15.723845] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:48.984 qpair failed and we were unable to recover it. 
00:30:48.984 [2024-06-07 16:39:15.733690] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:48.984 [2024-06-07 16:39:15.733755] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:48.984 [2024-06-07 16:39:15.733767] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:48.984 [2024-06-07 16:39:15.733772] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:48.984 [2024-06-07 16:39:15.733777] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:48.984 [2024-06-07 16:39:15.733787] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:48.984 qpair failed and we were unable to recover it. 
00:30:48.984 [2024-06-07 16:39:15.743848] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:48.984 [2024-06-07 16:39:15.743911] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:48.984 [2024-06-07 16:39:15.743922] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:48.984 [2024-06-07 16:39:15.743928] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:48.984 [2024-06-07 16:39:15.743932] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:48.984 [2024-06-07 16:39:15.743942] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:48.984 qpair failed and we were unable to recover it. 
00:30:48.984 [2024-06-07 16:39:15.753838] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:48.984 [2024-06-07 16:39:15.753898] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:48.984 [2024-06-07 16:39:15.753910] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:48.984 [2024-06-07 16:39:15.753916] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:48.984 [2024-06-07 16:39:15.753920] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:48.984 [2024-06-07 16:39:15.753931] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:48.984 qpair failed and we were unable to recover it. 
00:30:48.984 [2024-06-07 16:39:15.763883] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:48.984 [2024-06-07 16:39:15.763935] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:48.984 [2024-06-07 16:39:15.763947] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:48.984 [2024-06-07 16:39:15.763952] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:48.984 [2024-06-07 16:39:15.763957] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:48.984 [2024-06-07 16:39:15.763970] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:48.984 qpair failed and we were unable to recover it. 
00:30:48.984 [2024-06-07 16:39:15.773931] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:48.984 [2024-06-07 16:39:15.773987] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:48.984 [2024-06-07 16:39:15.773999] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:48.984 [2024-06-07 16:39:15.774004] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:48.984 [2024-06-07 16:39:15.774009] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:48.984 [2024-06-07 16:39:15.774019] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:48.984 qpair failed and we were unable to recover it. 
00:30:48.984 [2024-06-07 16:39:15.783943] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:48.984 [2024-06-07 16:39:15.784004] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:48.984 [2024-06-07 16:39:15.784016] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:48.984 [2024-06-07 16:39:15.784022] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:48.984 [2024-06-07 16:39:15.784026] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:48.984 [2024-06-07 16:39:15.784037] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:48.984 qpair failed and we were unable to recover it. 
00:30:48.984 [2024-06-07 16:39:15.793959] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:48.984 [2024-06-07 16:39:15.794013] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:48.984 [2024-06-07 16:39:15.794026] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:48.984 [2024-06-07 16:39:15.794031] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:48.984 [2024-06-07 16:39:15.794035] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:48.984 [2024-06-07 16:39:15.794046] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:48.984 qpair failed and we were unable to recover it. 
00:30:48.984 [2024-06-07 16:39:15.803966] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:48.984 [2024-06-07 16:39:15.804023] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:48.984 [2024-06-07 16:39:15.804036] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:48.984 [2024-06-07 16:39:15.804041] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:48.984 [2024-06-07 16:39:15.804046] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:48.984 [2024-06-07 16:39:15.804057] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:48.984 qpair failed and we were unable to recover it. 
00:30:48.984 [2024-06-07 16:39:15.814031] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:48.984 [2024-06-07 16:39:15.814091] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:48.984 [2024-06-07 16:39:15.814105] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:48.984 [2024-06-07 16:39:15.814111] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:48.984 [2024-06-07 16:39:15.814115] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:48.984 [2024-06-07 16:39:15.814126] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:48.984 qpair failed and we were unable to recover it. 
00:30:48.984 [2024-06-07 16:39:15.824059] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:48.984 [2024-06-07 16:39:15.824122] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:48.984 [2024-06-07 16:39:15.824134] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:48.985 [2024-06-07 16:39:15.824139] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:48.985 [2024-06-07 16:39:15.824144] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:48.985 [2024-06-07 16:39:15.824154] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:48.985 qpair failed and we were unable to recover it. 
00:30:48.985 [2024-06-07 16:39:15.834066] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:48.985 [2024-06-07 16:39:15.834123] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:48.985 [2024-06-07 16:39:15.834134] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:48.985 [2024-06-07 16:39:15.834140] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:48.985 [2024-06-07 16:39:15.834144] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:48.985 [2024-06-07 16:39:15.834154] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:48.985 qpair failed and we were unable to recover it. 
00:30:49.247 [2024-06-07 16:39:15.844066] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:49.247 [2024-06-07 16:39:15.844126] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:49.247 [2024-06-07 16:39:15.844138] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:49.247 [2024-06-07 16:39:15.844143] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:49.247 [2024-06-07 16:39:15.844148] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:49.247 [2024-06-07 16:39:15.844158] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:49.247 qpair failed and we were unable to recover it. 
00:30:49.247 [2024-06-07 16:39:15.854137] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:49.247 [2024-06-07 16:39:15.854198] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:49.247 [2024-06-07 16:39:15.854210] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:49.247 [2024-06-07 16:39:15.854215] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:49.247 [2024-06-07 16:39:15.854223] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:49.247 [2024-06-07 16:39:15.854234] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:49.247 qpair failed and we were unable to recover it. 
00:30:49.247 [2024-06-07 16:39:15.864162] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:49.247 [2024-06-07 16:39:15.864225] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:49.247 [2024-06-07 16:39:15.864237] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:49.247 [2024-06-07 16:39:15.864242] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:49.247 [2024-06-07 16:39:15.864246] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:49.247 [2024-06-07 16:39:15.864256] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:49.247 qpair failed and we were unable to recover it. 
00:30:49.247 [2024-06-07 16:39:15.874159] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:49.247 [2024-06-07 16:39:15.874212] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:49.247 [2024-06-07 16:39:15.874224] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:49.247 [2024-06-07 16:39:15.874229] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:49.247 [2024-06-07 16:39:15.874233] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:49.247 [2024-06-07 16:39:15.874244] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:49.247 qpair failed and we were unable to recover it. 
00:30:49.247 [2024-06-07 16:39:15.884192] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:49.247 [2024-06-07 16:39:15.884241] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:49.247 [2024-06-07 16:39:15.884253] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:49.247 [2024-06-07 16:39:15.884258] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:49.247 [2024-06-07 16:39:15.884263] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:49.247 [2024-06-07 16:39:15.884274] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:49.247 qpair failed and we were unable to recover it. 
00:30:49.247 [2024-06-07 16:39:15.894241] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:49.247 [2024-06-07 16:39:15.894327] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:49.247 [2024-06-07 16:39:15.894339] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:49.247 [2024-06-07 16:39:15.894344] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:49.247 [2024-06-07 16:39:15.894349] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:49.247 [2024-06-07 16:39:15.894360] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:49.247 qpair failed and we were unable to recover it. 
00:30:49.247 [2024-06-07 16:39:15.904280] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:49.247 [2024-06-07 16:39:15.904346] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:49.247 [2024-06-07 16:39:15.904358] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:49.247 [2024-06-07 16:39:15.904364] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:49.247 [2024-06-07 16:39:15.904368] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:49.247 [2024-06-07 16:39:15.904379] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:49.247 qpair failed and we were unable to recover it. 
00:30:49.247 [2024-06-07 16:39:15.914283] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:49.247 [2024-06-07 16:39:15.914337] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:49.248 [2024-06-07 16:39:15.914349] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:49.248 [2024-06-07 16:39:15.914355] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:49.248 [2024-06-07 16:39:15.914359] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:49.248 [2024-06-07 16:39:15.914369] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:49.248 qpair failed and we were unable to recover it. 
00:30:49.248 [2024-06-07 16:39:15.924333] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:49.248 [2024-06-07 16:39:15.924387] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:49.248 [2024-06-07 16:39:15.924399] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:49.248 [2024-06-07 16:39:15.924408] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:49.248 [2024-06-07 16:39:15.924413] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:49.248 [2024-06-07 16:39:15.924423] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:49.248 qpair failed and we were unable to recover it. 
00:30:49.248 [2024-06-07 16:39:15.934352] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:49.248 [2024-06-07 16:39:15.934415] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:49.248 [2024-06-07 16:39:15.934427] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:49.248 [2024-06-07 16:39:15.934432] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:49.248 [2024-06-07 16:39:15.934437] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:49.248 [2024-06-07 16:39:15.934447] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:49.248 qpair failed and we were unable to recover it. 
00:30:49.248 [2024-06-07 16:39:15.944426] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:49.248 [2024-06-07 16:39:15.944500] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:49.248 [2024-06-07 16:39:15.944512] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:49.248 [2024-06-07 16:39:15.944520] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:49.248 [2024-06-07 16:39:15.944525] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:49.248 [2024-06-07 16:39:15.944535] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:49.248 qpair failed and we were unable to recover it. 
00:30:49.248 [2024-06-07 16:39:15.954384] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:49.248 [2024-06-07 16:39:15.954444] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:49.248 [2024-06-07 16:39:15.954456] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:49.248 [2024-06-07 16:39:15.954461] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:49.248 [2024-06-07 16:39:15.954466] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:49.248 [2024-06-07 16:39:15.954476] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:49.248 qpair failed and we were unable to recover it. 
00:30:49.248 [2024-06-07 16:39:15.964389] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:49.248 [2024-06-07 16:39:15.964447] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:49.248 [2024-06-07 16:39:15.964459] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:49.248 [2024-06-07 16:39:15.964465] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:49.248 [2024-06-07 16:39:15.964469] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:49.248 [2024-06-07 16:39:15.964480] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:49.248 qpair failed and we were unable to recover it. 
00:30:49.248 [2024-06-07 16:39:15.974430] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:49.248 [2024-06-07 16:39:15.974493] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:49.248 [2024-06-07 16:39:15.974505] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:49.248 [2024-06-07 16:39:15.974510] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:49.248 [2024-06-07 16:39:15.974516] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:49.248 [2024-06-07 16:39:15.974527] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:49.248 qpair failed and we were unable to recover it. 
00:30:49.248 [2024-06-07 16:39:15.984495] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:49.248 [2024-06-07 16:39:15.984557] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:49.248 [2024-06-07 16:39:15.984569] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:49.248 [2024-06-07 16:39:15.984575] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:49.248 [2024-06-07 16:39:15.984579] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:49.248 [2024-06-07 16:39:15.984590] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:49.248 qpair failed and we were unable to recover it. 
00:30:49.248 [2024-06-07 16:39:15.994484] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:49.248 [2024-06-07 16:39:15.994548] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:49.248 [2024-06-07 16:39:15.994560] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:49.248 [2024-06-07 16:39:15.994565] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:49.248 [2024-06-07 16:39:15.994569] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:49.248 [2024-06-07 16:39:15.994580] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:49.248 qpair failed and we were unable to recover it. 
00:30:49.248 [2024-06-07 16:39:16.004527] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:49.248 [2024-06-07 16:39:16.004578] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:49.248 [2024-06-07 16:39:16.004590] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:49.248 [2024-06-07 16:39:16.004595] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:49.248 [2024-06-07 16:39:16.004599] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:49.248 [2024-06-07 16:39:16.004610] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:49.248 qpair failed and we were unable to recover it. 
00:30:49.248 [2024-06-07 16:39:16.014614] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:49.248 [2024-06-07 16:39:16.014672] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:49.248 [2024-06-07 16:39:16.014684] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:49.248 [2024-06-07 16:39:16.014689] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:49.248 [2024-06-07 16:39:16.014694] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:49.248 [2024-06-07 16:39:16.014704] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:49.248 qpair failed and we were unable to recover it. 
00:30:49.248 [2024-06-07 16:39:16.024538] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:49.248 [2024-06-07 16:39:16.024642] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:49.248 [2024-06-07 16:39:16.024655] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:49.248 [2024-06-07 16:39:16.024660] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:49.248 [2024-06-07 16:39:16.024665] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:49.248 [2024-06-07 16:39:16.024675] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:49.248 qpair failed and we were unable to recover it. 
00:30:49.248 [2024-06-07 16:39:16.034579] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:49.248 [2024-06-07 16:39:16.034632] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:49.248 [2024-06-07 16:39:16.034644] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:49.248 [2024-06-07 16:39:16.034652] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:49.248 [2024-06-07 16:39:16.034656] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:49.248 [2024-06-07 16:39:16.034666] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:49.248 qpair failed and we were unable to recover it. 
00:30:49.248 [2024-06-07 16:39:16.044639] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:49.248 [2024-06-07 16:39:16.044738] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:49.248 [2024-06-07 16:39:16.044750] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:49.249 [2024-06-07 16:39:16.044756] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:49.249 [2024-06-07 16:39:16.044760] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:49.249 [2024-06-07 16:39:16.044771] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:49.249 qpair failed and we were unable to recover it. 
00:30:49.249 [2024-06-07 16:39:16.054751] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:49.249 [2024-06-07 16:39:16.054832] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:49.249 [2024-06-07 16:39:16.054844] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:49.249 [2024-06-07 16:39:16.054849] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:49.249 [2024-06-07 16:39:16.054854] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:49.249 [2024-06-07 16:39:16.054865] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:49.249 qpair failed and we were unable to recover it. 
00:30:49.249 [2024-06-07 16:39:16.064727] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:49.249 [2024-06-07 16:39:16.064789] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:49.249 [2024-06-07 16:39:16.064801] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:49.249 [2024-06-07 16:39:16.064807] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:49.249 [2024-06-07 16:39:16.064811] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:49.249 [2024-06-07 16:39:16.064821] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:49.249 qpair failed and we were unable to recover it. 
00:30:49.249 [2024-06-07 16:39:16.074698] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:49.249 [2024-06-07 16:39:16.074756] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:49.249 [2024-06-07 16:39:16.074768] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:49.249 [2024-06-07 16:39:16.074774] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:49.249 [2024-06-07 16:39:16.074778] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:49.249 [2024-06-07 16:39:16.074789] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:49.249 qpair failed and we were unable to recover it. 
00:30:49.249 [2024-06-07 16:39:16.084736] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:49.249 [2024-06-07 16:39:16.084790] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:49.249 [2024-06-07 16:39:16.084802] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:49.249 [2024-06-07 16:39:16.084808] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:49.249 [2024-06-07 16:39:16.084813] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:49.249 [2024-06-07 16:39:16.084823] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:49.249 qpair failed and we were unable to recover it. 
00:30:49.249 [2024-06-07 16:39:16.094832] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:49.249 [2024-06-07 16:39:16.094906] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:49.249 [2024-06-07 16:39:16.094917] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:49.249 [2024-06-07 16:39:16.094923] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:49.249 [2024-06-07 16:39:16.094927] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:49.249 [2024-06-07 16:39:16.094937] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:49.249 qpair failed and we were unable to recover it. 
00:30:49.512 [2024-06-07 16:39:16.104949] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:49.512 [2024-06-07 16:39:16.105017] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:49.512 [2024-06-07 16:39:16.105029] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:49.512 [2024-06-07 16:39:16.105034] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:49.512 [2024-06-07 16:39:16.105039] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:49.512 [2024-06-07 16:39:16.105049] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:49.512 qpair failed and we were unable to recover it. 
00:30:49.512 [2024-06-07 16:39:16.114785] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:49.512 [2024-06-07 16:39:16.114839] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:49.512 [2024-06-07 16:39:16.114851] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:49.512 [2024-06-07 16:39:16.114857] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:49.512 [2024-06-07 16:39:16.114861] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:49.512 [2024-06-07 16:39:16.114872] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:49.512 qpair failed and we were unable to recover it. 
00:30:49.512 [2024-06-07 16:39:16.124892] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:49.512 [2024-06-07 16:39:16.124956] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:49.512 [2024-06-07 16:39:16.124971] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:49.512 [2024-06-07 16:39:16.124976] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:49.512 [2024-06-07 16:39:16.124981] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:49.512 [2024-06-07 16:39:16.124992] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:49.512 qpair failed and we were unable to recover it. 
00:30:49.512 [2024-06-07 16:39:16.134947] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:49.512 [2024-06-07 16:39:16.135010] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:49.512 [2024-06-07 16:39:16.135022] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:49.512 [2024-06-07 16:39:16.135027] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:49.512 [2024-06-07 16:39:16.135032] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90
00:30:49.512 [2024-06-07 16:39:16.135042] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:49.512 qpair failed and we were unable to recover it.
00:30:49.512 [2024-06-07 16:39:16.144966] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:49.512 [2024-06-07 16:39:16.145028] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:49.512 [2024-06-07 16:39:16.145039] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:49.512 [2024-06-07 16:39:16.145045] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:49.512 [2024-06-07 16:39:16.145050] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90
00:30:49.512 [2024-06-07 16:39:16.145060] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:49.512 qpair failed and we were unable to recover it.
00:30:49.512 [2024-06-07 16:39:16.154962] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:49.512 [2024-06-07 16:39:16.155022] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:49.512 [2024-06-07 16:39:16.155040] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:49.512 [2024-06-07 16:39:16.155046] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:49.512 [2024-06-07 16:39:16.155051] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90
00:30:49.512 [2024-06-07 16:39:16.155065] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:49.512 qpair failed and we were unable to recover it.
00:30:49.512 [2024-06-07 16:39:16.165100] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:49.512 [2024-06-07 16:39:16.165192] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:49.512 [2024-06-07 16:39:16.165206] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:49.512 [2024-06-07 16:39:16.165211] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:49.512 [2024-06-07 16:39:16.165216] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90
00:30:49.512 [2024-06-07 16:39:16.165231] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:49.512 qpair failed and we were unable to recover it.
00:30:49.512 [2024-06-07 16:39:16.175048] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:49.512 [2024-06-07 16:39:16.175110] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:49.512 [2024-06-07 16:39:16.175129] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:49.512 [2024-06-07 16:39:16.175135] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:49.512 [2024-06-07 16:39:16.175140] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90
00:30:49.512 [2024-06-07 16:39:16.175153] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:49.512 qpair failed and we were unable to recover it.
00:30:49.512 [2024-06-07 16:39:16.185055] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:49.512 [2024-06-07 16:39:16.185121] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:49.512 [2024-06-07 16:39:16.185134] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:49.512 [2024-06-07 16:39:16.185140] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:49.512 [2024-06-07 16:39:16.185145] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90
00:30:49.512 [2024-06-07 16:39:16.185156] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:49.512 qpair failed and we were unable to recover it.
00:30:49.512 [2024-06-07 16:39:16.195074] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:49.512 [2024-06-07 16:39:16.195152] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:49.512 [2024-06-07 16:39:16.195171] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:49.512 [2024-06-07 16:39:16.195177] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:49.512 [2024-06-07 16:39:16.195182] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90
00:30:49.512 [2024-06-07 16:39:16.195196] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:49.512 qpair failed and we were unable to recover it.
00:30:49.512 [2024-06-07 16:39:16.205070] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:49.512 [2024-06-07 16:39:16.205131] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:49.512 [2024-06-07 16:39:16.205149] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:49.512 [2024-06-07 16:39:16.205156] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:49.512 [2024-06-07 16:39:16.205160] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90
00:30:49.512 [2024-06-07 16:39:16.205174] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:49.512 qpair failed and we were unable to recover it.
00:30:49.512 [2024-06-07 16:39:16.215143] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:49.513 [2024-06-07 16:39:16.215209] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:49.513 [2024-06-07 16:39:16.215225] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:49.513 [2024-06-07 16:39:16.215231] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:49.513 [2024-06-07 16:39:16.215236] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90
00:30:49.513 [2024-06-07 16:39:16.215247] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:49.513 qpair failed and we were unable to recover it.
00:30:49.513 [2024-06-07 16:39:16.225175] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:49.513 [2024-06-07 16:39:16.225238] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:49.513 [2024-06-07 16:39:16.225250] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:49.513 [2024-06-07 16:39:16.225255] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:49.513 [2024-06-07 16:39:16.225259] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90
00:30:49.513 [2024-06-07 16:39:16.225270] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:49.513 qpair failed and we were unable to recover it.
00:30:49.513 [2024-06-07 16:39:16.235140] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:49.513 [2024-06-07 16:39:16.235199] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:49.513 [2024-06-07 16:39:16.235218] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:49.513 [2024-06-07 16:39:16.235224] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:49.513 [2024-06-07 16:39:16.235229] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90
00:30:49.513 [2024-06-07 16:39:16.235243] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:49.513 qpair failed and we were unable to recover it.
00:30:49.513 [2024-06-07 16:39:16.245178] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:49.513 [2024-06-07 16:39:16.245235] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:49.513 [2024-06-07 16:39:16.245249] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:49.513 [2024-06-07 16:39:16.245254] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:49.513 [2024-06-07 16:39:16.245259] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90
00:30:49.513 [2024-06-07 16:39:16.245270] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:49.513 qpair failed and we were unable to recover it.
00:30:49.513 [2024-06-07 16:39:16.255142] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:49.513 [2024-06-07 16:39:16.255200] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:49.513 [2024-06-07 16:39:16.255212] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:49.513 [2024-06-07 16:39:16.255218] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:49.513 [2024-06-07 16:39:16.255226] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90
00:30:49.513 [2024-06-07 16:39:16.255237] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:49.513 qpair failed and we were unable to recover it.
00:30:49.513 [2024-06-07 16:39:16.265288] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:49.513 [2024-06-07 16:39:16.265351] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:49.513 [2024-06-07 16:39:16.265364] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:49.513 [2024-06-07 16:39:16.265371] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:49.513 [2024-06-07 16:39:16.265377] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90
00:30:49.513 [2024-06-07 16:39:16.265389] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:49.513 qpair failed and we were unable to recover it.
00:30:49.513 [2024-06-07 16:39:16.275271] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:49.513 [2024-06-07 16:39:16.275323] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:49.513 [2024-06-07 16:39:16.275336] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:49.513 [2024-06-07 16:39:16.275341] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:49.513 [2024-06-07 16:39:16.275345] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90
00:30:49.513 [2024-06-07 16:39:16.275356] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:49.513 qpair failed and we were unable to recover it.
00:30:49.513 [2024-06-07 16:39:16.285173] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:49.513 [2024-06-07 16:39:16.285225] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:49.513 [2024-06-07 16:39:16.285237] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:49.513 [2024-06-07 16:39:16.285242] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:49.513 [2024-06-07 16:39:16.285247] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90
00:30:49.513 [2024-06-07 16:39:16.285258] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:49.513 qpair failed and we were unable to recover it.
00:30:49.513 [2024-06-07 16:39:16.295363] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:49.513 [2024-06-07 16:39:16.295423] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:49.513 [2024-06-07 16:39:16.295435] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:49.513 [2024-06-07 16:39:16.295441] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:49.513 [2024-06-07 16:39:16.295445] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90
00:30:49.513 [2024-06-07 16:39:16.295456] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:49.513 qpair failed and we were unable to recover it.
00:30:49.513 [2024-06-07 16:39:16.305373] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:49.513 [2024-06-07 16:39:16.305442] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:49.513 [2024-06-07 16:39:16.305455] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:49.513 [2024-06-07 16:39:16.305460] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:49.513 [2024-06-07 16:39:16.305465] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90
00:30:49.513 [2024-06-07 16:39:16.305475] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:49.513 qpair failed and we were unable to recover it.
00:30:49.513 [2024-06-07 16:39:16.315259] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:49.513 [2024-06-07 16:39:16.315314] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:49.513 [2024-06-07 16:39:16.315326] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:49.513 [2024-06-07 16:39:16.315331] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:49.513 [2024-06-07 16:39:16.315335] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90
00:30:49.513 [2024-06-07 16:39:16.315346] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:49.513 qpair failed and we were unable to recover it.
00:30:49.513 [2024-06-07 16:39:16.325398] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:49.513 [2024-06-07 16:39:16.325496] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:49.513 [2024-06-07 16:39:16.325508] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:49.513 [2024-06-07 16:39:16.325514] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:49.513 [2024-06-07 16:39:16.325518] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90
00:30:49.513 [2024-06-07 16:39:16.325529] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:49.513 qpair failed and we were unable to recover it.
00:30:49.513 [2024-06-07 16:39:16.335481] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:49.513 [2024-06-07 16:39:16.335541] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:49.513 [2024-06-07 16:39:16.335554] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:49.513 [2024-06-07 16:39:16.335559] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:49.513 [2024-06-07 16:39:16.335563] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90
00:30:49.513 [2024-06-07 16:39:16.335574] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:49.513 qpair failed and we were unable to recover it.
00:30:49.513 [2024-06-07 16:39:16.345501] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:49.513 [2024-06-07 16:39:16.345564] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:49.513 [2024-06-07 16:39:16.345576] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:49.513 [2024-06-07 16:39:16.345584] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:49.514 [2024-06-07 16:39:16.345589] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90
00:30:49.514 [2024-06-07 16:39:16.345599] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:49.514 qpair failed and we were unable to recover it.
00:30:49.514 [2024-06-07 16:39:16.355492] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:49.514 [2024-06-07 16:39:16.355587] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:49.514 [2024-06-07 16:39:16.355599] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:49.514 [2024-06-07 16:39:16.355604] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:49.514 [2024-06-07 16:39:16.355609] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90
00:30:49.514 [2024-06-07 16:39:16.355620] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:49.514 qpair failed and we were unable to recover it.
00:30:49.776 [2024-06-07 16:39:16.365544] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:49.776 [2024-06-07 16:39:16.365639] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:49.776 [2024-06-07 16:39:16.365651] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:49.776 [2024-06-07 16:39:16.365657] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:49.776 [2024-06-07 16:39:16.365661] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90
00:30:49.776 [2024-06-07 16:39:16.365672] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:49.776 qpair failed and we were unable to recover it.
00:30:49.776 [2024-06-07 16:39:16.375592] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:49.776 [2024-06-07 16:39:16.375653] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:49.776 [2024-06-07 16:39:16.375666] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:49.776 [2024-06-07 16:39:16.375672] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:49.776 [2024-06-07 16:39:16.375677] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90
00:30:49.776 [2024-06-07 16:39:16.375691] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:49.777 qpair failed and we were unable to recover it.
00:30:49.777 [2024-06-07 16:39:16.385618] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:49.777 [2024-06-07 16:39:16.385679] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:49.777 [2024-06-07 16:39:16.385692] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:49.777 [2024-06-07 16:39:16.385697] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:49.777 [2024-06-07 16:39:16.385702] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90
00:30:49.777 [2024-06-07 16:39:16.385713] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:49.777 qpair failed and we were unable to recover it.
00:30:49.777 [2024-06-07 16:39:16.395598] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:49.777 [2024-06-07 16:39:16.395657] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:49.777 [2024-06-07 16:39:16.395670] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:49.777 [2024-06-07 16:39:16.395675] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:49.777 [2024-06-07 16:39:16.395679] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90
00:30:49.777 [2024-06-07 16:39:16.395690] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:49.777 qpair failed and we were unable to recover it.
00:30:49.777 [2024-06-07 16:39:16.405648] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:49.777 [2024-06-07 16:39:16.405700] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:49.777 [2024-06-07 16:39:16.405712] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:49.777 [2024-06-07 16:39:16.405717] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:49.777 [2024-06-07 16:39:16.405722] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90
00:30:49.777 [2024-06-07 16:39:16.405733] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:49.777 qpair failed and we were unable to recover it.
00:30:49.777 [2024-06-07 16:39:16.415715] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:49.777 [2024-06-07 16:39:16.415797] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:49.777 [2024-06-07 16:39:16.415810] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:49.777 [2024-06-07 16:39:16.415815] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:49.777 [2024-06-07 16:39:16.415819] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90
00:30:49.777 [2024-06-07 16:39:16.415829] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:49.777 qpair failed and we were unable to recover it.
00:30:49.777 [2024-06-07 16:39:16.425727] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:49.777 [2024-06-07 16:39:16.425789] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:49.777 [2024-06-07 16:39:16.425801] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:49.777 [2024-06-07 16:39:16.425806] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:49.777 [2024-06-07 16:39:16.425810] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90
00:30:49.777 [2024-06-07 16:39:16.425821] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:49.777 qpair failed and we were unable to recover it.
00:30:49.777 [2024-06-07 16:39:16.435695] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:49.777 [2024-06-07 16:39:16.435754] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:49.777 [2024-06-07 16:39:16.435766] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:49.777 [2024-06-07 16:39:16.435778] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:49.777 [2024-06-07 16:39:16.435782] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90
00:30:49.777 [2024-06-07 16:39:16.435793] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:49.777 qpair failed and we were unable to recover it.
00:30:49.777 [2024-06-07 16:39:16.445721] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:49.777 [2024-06-07 16:39:16.445781] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:49.777 [2024-06-07 16:39:16.445796] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:49.777 [2024-06-07 16:39:16.445802] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:49.777 [2024-06-07 16:39:16.445808] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90
00:30:49.777 [2024-06-07 16:39:16.445820] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:49.777 qpair failed and we were unable to recover it.
00:30:49.777 [2024-06-07 16:39:16.455794] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:49.777 [2024-06-07 16:39:16.455856] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:49.777 [2024-06-07 16:39:16.455869] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:49.777 [2024-06-07 16:39:16.455874] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:49.777 [2024-06-07 16:39:16.455878] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:49.777 [2024-06-07 16:39:16.455889] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:49.777 qpair failed and we were unable to recover it. 
00:30:49.777 [2024-06-07 16:39:16.465822] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:49.777 [2024-06-07 16:39:16.465918] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:49.777 [2024-06-07 16:39:16.465930] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:49.777 [2024-06-07 16:39:16.465935] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:49.777 [2024-06-07 16:39:16.465940] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:49.777 [2024-06-07 16:39:16.465951] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:49.777 qpair failed and we were unable to recover it. 
00:30:49.777 [2024-06-07 16:39:16.475773] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:49.777 [2024-06-07 16:39:16.475824] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:49.777 [2024-06-07 16:39:16.475836] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:49.777 [2024-06-07 16:39:16.475841] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:49.777 [2024-06-07 16:39:16.475845] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:49.777 [2024-06-07 16:39:16.475856] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:49.777 qpair failed and we were unable to recover it. 
00:30:49.777 [2024-06-07 16:39:16.485830] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:49.777 [2024-06-07 16:39:16.485890] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:49.777 [2024-06-07 16:39:16.485902] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:49.777 [2024-06-07 16:39:16.485907] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:49.777 [2024-06-07 16:39:16.485912] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:49.777 [2024-06-07 16:39:16.485922] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:49.777 qpair failed and we were unable to recover it. 
00:30:49.777 [2024-06-07 16:39:16.495894] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:49.777 [2024-06-07 16:39:16.495954] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:49.777 [2024-06-07 16:39:16.495966] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:49.777 [2024-06-07 16:39:16.495971] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:49.777 [2024-06-07 16:39:16.495975] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:49.777 [2024-06-07 16:39:16.495986] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:49.777 qpair failed and we were unable to recover it. 
00:30:49.777 [2024-06-07 16:39:16.505943] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:49.777 [2024-06-07 16:39:16.506007] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:49.777 [2024-06-07 16:39:16.506019] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:49.777 [2024-06-07 16:39:16.506024] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:49.777 [2024-06-07 16:39:16.506028] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:49.777 [2024-06-07 16:39:16.506039] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:49.777 qpair failed and we were unable to recover it. 
00:30:49.777 [2024-06-07 16:39:16.515911] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:49.778 [2024-06-07 16:39:16.515965] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:49.778 [2024-06-07 16:39:16.515977] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:49.778 [2024-06-07 16:39:16.515983] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:49.778 [2024-06-07 16:39:16.515987] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:49.778 [2024-06-07 16:39:16.515998] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:49.778 qpair failed and we were unable to recover it. 
00:30:49.778 [2024-06-07 16:39:16.525944] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:49.778 [2024-06-07 16:39:16.525996] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:49.778 [2024-06-07 16:39:16.526011] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:49.778 [2024-06-07 16:39:16.526016] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:49.778 [2024-06-07 16:39:16.526021] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:49.778 [2024-06-07 16:39:16.526031] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:49.778 qpair failed and we were unable to recover it. 
00:30:49.778 [2024-06-07 16:39:16.536005] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:49.778 [2024-06-07 16:39:16.536065] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:49.778 [2024-06-07 16:39:16.536077] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:49.778 [2024-06-07 16:39:16.536083] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:49.778 [2024-06-07 16:39:16.536087] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:49.778 [2024-06-07 16:39:16.536097] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:49.778 qpair failed and we were unable to recover it. 
00:30:49.778 [2024-06-07 16:39:16.546019] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:49.778 [2024-06-07 16:39:16.546087] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:49.778 [2024-06-07 16:39:16.546105] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:49.778 [2024-06-07 16:39:16.546111] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:49.778 [2024-06-07 16:39:16.546117] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:49.778 [2024-06-07 16:39:16.546130] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:49.778 qpair failed and we were unable to recover it. 
00:30:49.778 [2024-06-07 16:39:16.556030] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:49.778 [2024-06-07 16:39:16.556093] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:49.778 [2024-06-07 16:39:16.556112] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:49.778 [2024-06-07 16:39:16.556118] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:49.778 [2024-06-07 16:39:16.556123] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:49.778 [2024-06-07 16:39:16.556137] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:49.778 qpair failed and we were unable to recover it. 
00:30:49.778 [2024-06-07 16:39:16.566060] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:49.778 [2024-06-07 16:39:16.566119] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:49.778 [2024-06-07 16:39:16.566132] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:49.778 [2024-06-07 16:39:16.566138] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:49.778 [2024-06-07 16:39:16.566142] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:49.778 [2024-06-07 16:39:16.566157] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:49.778 qpair failed and we were unable to recover it. 
00:30:49.778 [2024-06-07 16:39:16.576124] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:49.778 [2024-06-07 16:39:16.576198] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:49.778 [2024-06-07 16:39:16.576217] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:49.778 [2024-06-07 16:39:16.576223] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:49.778 [2024-06-07 16:39:16.576228] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:49.778 [2024-06-07 16:39:16.576242] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:49.778 qpair failed and we were unable to recover it. 
00:30:49.778 [2024-06-07 16:39:16.586037] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:49.778 [2024-06-07 16:39:16.586105] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:49.778 [2024-06-07 16:39:16.586124] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:49.778 [2024-06-07 16:39:16.586131] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:49.778 [2024-06-07 16:39:16.586136] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:49.778 [2024-06-07 16:39:16.586150] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:49.778 qpair failed and we were unable to recover it. 
00:30:49.778 [2024-06-07 16:39:16.596015] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:49.778 [2024-06-07 16:39:16.596113] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:49.778 [2024-06-07 16:39:16.596128] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:49.778 [2024-06-07 16:39:16.596133] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:49.778 [2024-06-07 16:39:16.596138] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:49.778 [2024-06-07 16:39:16.596150] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:49.778 qpair failed and we were unable to recover it. 
00:30:49.778 [2024-06-07 16:39:16.606166] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:49.778 [2024-06-07 16:39:16.606217] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:49.778 [2024-06-07 16:39:16.606229] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:49.778 [2024-06-07 16:39:16.606235] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:49.778 [2024-06-07 16:39:16.606239] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:49.778 [2024-06-07 16:39:16.606250] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:49.778 qpair failed and we were unable to recover it. 
00:30:49.778 [2024-06-07 16:39:16.616185] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:49.778 [2024-06-07 16:39:16.616243] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:49.778 [2024-06-07 16:39:16.616259] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:49.778 [2024-06-07 16:39:16.616265] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:49.778 [2024-06-07 16:39:16.616269] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:49.778 [2024-06-07 16:39:16.616280] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:49.778 qpair failed and we were unable to recover it. 
00:30:49.778 [2024-06-07 16:39:16.626249] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:49.778 [2024-06-07 16:39:16.626311] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:49.778 [2024-06-07 16:39:16.626324] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:49.778 [2024-06-07 16:39:16.626329] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:49.778 [2024-06-07 16:39:16.626333] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:49.778 [2024-06-07 16:39:16.626345] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:49.778 qpair failed and we were unable to recover it. 
00:30:50.040 [2024-06-07 16:39:16.636224] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:50.040 [2024-06-07 16:39:16.636277] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:50.040 [2024-06-07 16:39:16.636290] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:50.040 [2024-06-07 16:39:16.636295] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:50.040 [2024-06-07 16:39:16.636299] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:50.040 [2024-06-07 16:39:16.636310] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:50.040 qpair failed and we were unable to recover it. 
00:30:50.040 [2024-06-07 16:39:16.646275] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:50.041 [2024-06-07 16:39:16.646332] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:50.041 [2024-06-07 16:39:16.646344] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:50.041 [2024-06-07 16:39:16.646350] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:50.041 [2024-06-07 16:39:16.646354] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:50.041 [2024-06-07 16:39:16.646365] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:50.041 qpair failed and we were unable to recover it. 
00:30:50.041 [2024-06-07 16:39:16.656346] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:50.041 [2024-06-07 16:39:16.656406] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:50.041 [2024-06-07 16:39:16.656419] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:50.041 [2024-06-07 16:39:16.656424] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:50.041 [2024-06-07 16:39:16.656432] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:50.041 [2024-06-07 16:39:16.656443] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:50.041 qpair failed and we were unable to recover it. 
00:30:50.041 [2024-06-07 16:39:16.666380] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:50.041 [2024-06-07 16:39:16.666448] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:50.041 [2024-06-07 16:39:16.666460] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:50.041 [2024-06-07 16:39:16.666466] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:50.041 [2024-06-07 16:39:16.666470] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:50.041 [2024-06-07 16:39:16.666481] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:50.041 qpair failed and we were unable to recover it. 
00:30:50.041 [2024-06-07 16:39:16.676370] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:50.041 [2024-06-07 16:39:16.676427] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:50.041 [2024-06-07 16:39:16.676440] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:50.041 [2024-06-07 16:39:16.676445] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:50.041 [2024-06-07 16:39:16.676449] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:50.041 [2024-06-07 16:39:16.676460] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:50.041 qpair failed and we were unable to recover it. 
00:30:50.041 [2024-06-07 16:39:16.686368] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:50.041 [2024-06-07 16:39:16.686431] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:50.041 [2024-06-07 16:39:16.686443] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:50.041 [2024-06-07 16:39:16.686448] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:50.041 [2024-06-07 16:39:16.686453] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:50.041 [2024-06-07 16:39:16.686464] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:50.041 qpair failed and we were unable to recover it. 
00:30:50.041 [2024-06-07 16:39:16.696461] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:50.041 [2024-06-07 16:39:16.696525] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:50.041 [2024-06-07 16:39:16.696537] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:50.041 [2024-06-07 16:39:16.696542] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:50.041 [2024-06-07 16:39:16.696546] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:50.041 [2024-06-07 16:39:16.696557] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:50.041 qpair failed and we were unable to recover it. 
00:30:50.041 [2024-06-07 16:39:16.706488] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:50.041 [2024-06-07 16:39:16.706561] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:50.041 [2024-06-07 16:39:16.706573] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:50.041 [2024-06-07 16:39:16.706578] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:50.041 [2024-06-07 16:39:16.706583] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:50.041 [2024-06-07 16:39:16.706593] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:50.041 qpair failed and we were unable to recover it. 
00:30:50.041 [2024-06-07 16:39:16.716463] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:50.041 [2024-06-07 16:39:16.716521] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:50.041 [2024-06-07 16:39:16.716533] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:50.041 [2024-06-07 16:39:16.716538] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:50.041 [2024-06-07 16:39:16.716543] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:50.041 [2024-06-07 16:39:16.716554] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:50.041 qpair failed and we were unable to recover it. 
00:30:50.041 [2024-06-07 16:39:16.726514] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:50.041 [2024-06-07 16:39:16.726570] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:50.041 [2024-06-07 16:39:16.726582] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:50.041 [2024-06-07 16:39:16.726587] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:50.041 [2024-06-07 16:39:16.726592] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:50.041 [2024-06-07 16:39:16.726603] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:50.041 qpair failed and we were unable to recover it. 
00:30:50.041 [2024-06-07 16:39:16.736558] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:50.041 [2024-06-07 16:39:16.736622] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:50.041 [2024-06-07 16:39:16.736634] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:50.041 [2024-06-07 16:39:16.736639] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:50.041 [2024-06-07 16:39:16.736644] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:50.041 [2024-06-07 16:39:16.736654] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:50.041 qpair failed and we were unable to recover it. 
00:30:50.041 [2024-06-07 16:39:16.746668] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:50.041 [2024-06-07 16:39:16.746738] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:50.041 [2024-06-07 16:39:16.746750] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:50.041 [2024-06-07 16:39:16.746755] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:50.041 [2024-06-07 16:39:16.746762] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:50.041 [2024-06-07 16:39:16.746773] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:50.041 qpair failed and we were unable to recover it. 
00:30:50.041 [2024-06-07 16:39:16.756596] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:50.041 [2024-06-07 16:39:16.756697] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:50.041 [2024-06-07 16:39:16.756710] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:50.041 [2024-06-07 16:39:16.756715] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:50.041 [2024-06-07 16:39:16.756720] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:50.042 [2024-06-07 16:39:16.756730] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:50.042 qpair failed and we were unable to recover it. 
00:30:50.042 [2024-06-07 16:39:16.766609] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:50.042 [2024-06-07 16:39:16.766666] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:50.042 [2024-06-07 16:39:16.766678] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:50.042 [2024-06-07 16:39:16.766683] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:50.042 [2024-06-07 16:39:16.766688] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:50.042 [2024-06-07 16:39:16.766698] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:50.042 qpair failed and we were unable to recover it. 
00:30:50.042 [2024-06-07 16:39:16.776696] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:50.042 [2024-06-07 16:39:16.776759] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:50.042 [2024-06-07 16:39:16.776771] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:50.042 [2024-06-07 16:39:16.776776] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:50.042 [2024-06-07 16:39:16.776781] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:50.042 [2024-06-07 16:39:16.776791] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:50.042 qpair failed and we were unable to recover it. 
00:30:50.042 [2024-06-07 16:39:16.786675] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:50.042 [2024-06-07 16:39:16.786741] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:50.042 [2024-06-07 16:39:16.786754] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:50.042 [2024-06-07 16:39:16.786759] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:50.042 [2024-06-07 16:39:16.786764] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:50.042 [2024-06-07 16:39:16.786774] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:50.042 qpair failed and we were unable to recover it. 
00:30:50.042 [2024-06-07 16:39:16.796712] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:50.042 [2024-06-07 16:39:16.796849] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:50.042 [2024-06-07 16:39:16.796862] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:50.042 [2024-06-07 16:39:16.796867] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:50.042 [2024-06-07 16:39:16.796872] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:50.042 [2024-06-07 16:39:16.796882] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:50.042 qpair failed and we were unable to recover it. 
00:30:50.042 [2024-06-07 16:39:16.806726] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:50.042 [2024-06-07 16:39:16.806779] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:50.042 [2024-06-07 16:39:16.806791] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:50.042 [2024-06-07 16:39:16.806796] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:50.042 [2024-06-07 16:39:16.806801] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:50.042 [2024-06-07 16:39:16.806812] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:50.042 qpair failed and we were unable to recover it. 
00:30:50.042 [2024-06-07 16:39:16.816887] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:50.042 [2024-06-07 16:39:16.816960] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:50.042 [2024-06-07 16:39:16.816972] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:50.042 [2024-06-07 16:39:16.816977] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:50.042 [2024-06-07 16:39:16.816982] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:50.042 [2024-06-07 16:39:16.816992] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:50.042 qpair failed and we were unable to recover it. 
00:30:50.042 [2024-06-07 16:39:16.826803] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:50.042 [2024-06-07 16:39:16.826873] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:50.042 [2024-06-07 16:39:16.826885] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:50.042 [2024-06-07 16:39:16.826890] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:50.042 [2024-06-07 16:39:16.826895] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:50.042 [2024-06-07 16:39:16.826905] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:50.042 qpair failed and we were unable to recover it. 
00:30:50.042 [2024-06-07 16:39:16.836814] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:50.042 [2024-06-07 16:39:16.836867] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:50.042 [2024-06-07 16:39:16.836879] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:50.042 [2024-06-07 16:39:16.836887] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:50.042 [2024-06-07 16:39:16.836891] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:50.042 [2024-06-07 16:39:16.836902] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:50.042 qpair failed and we were unable to recover it. 
00:30:50.042 [2024-06-07 16:39:16.846822] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:50.042 [2024-06-07 16:39:16.846878] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:50.042 [2024-06-07 16:39:16.846890] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:50.042 [2024-06-07 16:39:16.846895] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:50.042 [2024-06-07 16:39:16.846899] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:50.042 [2024-06-07 16:39:16.846910] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:50.042 qpair failed and we were unable to recover it. 
00:30:50.042 [2024-06-07 16:39:16.856922] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:50.042 [2024-06-07 16:39:16.856987] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:50.042 [2024-06-07 16:39:16.856999] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:50.042 [2024-06-07 16:39:16.857004] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:50.042 [2024-06-07 16:39:16.857009] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:50.042 [2024-06-07 16:39:16.857020] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:50.042 qpair failed and we were unable to recover it. 
00:30:50.042 [2024-06-07 16:39:16.867010] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:50.042 [2024-06-07 16:39:16.867074] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:50.042 [2024-06-07 16:39:16.867086] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:50.042 [2024-06-07 16:39:16.867090] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:50.042 [2024-06-07 16:39:16.867095] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:50.043 [2024-06-07 16:39:16.867106] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:50.043 qpair failed and we were unable to recover it. 
00:30:50.043 [2024-06-07 16:39:16.876885] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:50.043 [2024-06-07 16:39:16.876943] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:50.043 [2024-06-07 16:39:16.876961] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:50.043 [2024-06-07 16:39:16.876967] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:50.043 [2024-06-07 16:39:16.876972] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:50.043 [2024-06-07 16:39:16.876986] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:50.043 qpair failed and we were unable to recover it. 
00:30:50.043 [2024-06-07 16:39:16.886960] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:50.043 [2024-06-07 16:39:16.887040] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:50.043 [2024-06-07 16:39:16.887054] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:50.043 [2024-06-07 16:39:16.887060] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:50.043 [2024-06-07 16:39:16.887064] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:50.043 [2024-06-07 16:39:16.887076] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:50.043 qpair failed and we were unable to recover it. 
00:30:50.305 [2024-06-07 16:39:16.897001] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:50.305 [2024-06-07 16:39:16.897101] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:50.305 [2024-06-07 16:39:16.897114] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:50.305 [2024-06-07 16:39:16.897119] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:50.305 [2024-06-07 16:39:16.897124] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:50.305 [2024-06-07 16:39:16.897134] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:50.305 qpair failed and we were unable to recover it. 
00:30:50.305 [2024-06-07 16:39:16.906900] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:50.305 [2024-06-07 16:39:16.906965] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:50.305 [2024-06-07 16:39:16.906977] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:50.305 [2024-06-07 16:39:16.906983] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:50.305 [2024-06-07 16:39:16.906987] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:50.305 [2024-06-07 16:39:16.906998] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:50.305 qpair failed and we were unable to recover it. 
00:30:50.305 [2024-06-07 16:39:16.916987] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:50.305 [2024-06-07 16:39:16.917059] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:50.305 [2024-06-07 16:39:16.917071] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:50.305 [2024-06-07 16:39:16.917076] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:50.305 [2024-06-07 16:39:16.917081] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:50.305 [2024-06-07 16:39:16.917091] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:50.305 qpair failed and we were unable to recover it. 
00:30:50.305 [2024-06-07 16:39:16.927041] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:50.306 [2024-06-07 16:39:16.927106] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:50.306 [2024-06-07 16:39:16.927129] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:50.306 [2024-06-07 16:39:16.927135] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:50.306 [2024-06-07 16:39:16.927140] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:50.306 [2024-06-07 16:39:16.927154] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:50.306 qpair failed and we were unable to recover it. 
00:30:50.306 [2024-06-07 16:39:16.937122] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:50.306 [2024-06-07 16:39:16.937186] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:50.306 [2024-06-07 16:39:16.937205] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:50.306 [2024-06-07 16:39:16.937211] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:50.306 [2024-06-07 16:39:16.937216] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:50.306 [2024-06-07 16:39:16.937231] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:50.306 qpair failed and we were unable to recover it. 
00:30:50.306 [2024-06-07 16:39:16.947126] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:50.306 [2024-06-07 16:39:16.947189] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:50.306 [2024-06-07 16:39:16.947203] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:50.306 [2024-06-07 16:39:16.947209] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:50.306 [2024-06-07 16:39:16.947214] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:50.306 [2024-06-07 16:39:16.947226] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:50.306 qpair failed and we were unable to recover it. 
00:30:50.306 [2024-06-07 16:39:16.957128] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:50.306 [2024-06-07 16:39:16.957187] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:50.306 [2024-06-07 16:39:16.957206] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:50.306 [2024-06-07 16:39:16.957212] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:50.306 [2024-06-07 16:39:16.957217] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:50.306 [2024-06-07 16:39:16.957230] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:50.306 qpair failed and we were unable to recover it. 
00:30:50.306 [2024-06-07 16:39:16.967138] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:50.306 [2024-06-07 16:39:16.967191] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:50.306 [2024-06-07 16:39:16.967204] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:50.306 [2024-06-07 16:39:16.967210] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:50.306 [2024-06-07 16:39:16.967214] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:50.306 [2024-06-07 16:39:16.967231] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:50.306 qpair failed and we were unable to recover it. 
00:30:50.306 [2024-06-07 16:39:16.977200] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:50.306 [2024-06-07 16:39:16.977265] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:50.306 [2024-06-07 16:39:16.977278] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:50.306 [2024-06-07 16:39:16.977283] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:50.306 [2024-06-07 16:39:16.977287] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:50.306 [2024-06-07 16:39:16.977298] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:50.306 qpair failed and we were unable to recover it. 
00:30:50.306 [2024-06-07 16:39:16.987237] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:50.306 [2024-06-07 16:39:16.987308] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:50.306 [2024-06-07 16:39:16.987321] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:50.306 [2024-06-07 16:39:16.987326] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:50.306 [2024-06-07 16:39:16.987330] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:50.306 [2024-06-07 16:39:16.987342] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:50.306 qpair failed and we were unable to recover it. 
00:30:50.306 [2024-06-07 16:39:16.997213] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:50.306 [2024-06-07 16:39:16.997267] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:50.306 [2024-06-07 16:39:16.997280] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:50.306 [2024-06-07 16:39:16.997285] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:50.306 [2024-06-07 16:39:16.997290] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:50.306 [2024-06-07 16:39:16.997300] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:50.306 qpair failed and we were unable to recover it. 
00:30:50.306 [2024-06-07 16:39:17.007253] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:50.306 [2024-06-07 16:39:17.007308] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:50.306 [2024-06-07 16:39:17.007320] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:50.306 [2024-06-07 16:39:17.007325] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:50.306 [2024-06-07 16:39:17.007330] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:50.306 [2024-06-07 16:39:17.007340] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:50.306 qpair failed and we were unable to recover it. 
00:30:50.306 [2024-06-07 16:39:17.017304] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:50.306 [2024-06-07 16:39:17.017362] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:50.306 [2024-06-07 16:39:17.017377] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:50.306 [2024-06-07 16:39:17.017382] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:50.306 [2024-06-07 16:39:17.017387] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:50.306 [2024-06-07 16:39:17.017398] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:50.306 qpair failed and we were unable to recover it. 
00:30:50.306 [2024-06-07 16:39:17.027342] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:50.306 [2024-06-07 16:39:17.027409] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:50.306 [2024-06-07 16:39:17.027421] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:50.306 [2024-06-07 16:39:17.027426] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:50.306 [2024-06-07 16:39:17.027431] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:50.306 [2024-06-07 16:39:17.027441] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:50.306 qpair failed and we were unable to recover it. 
00:30:50.306 [2024-06-07 16:39:17.037381] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:50.306 [2024-06-07 16:39:17.037439] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:50.306 [2024-06-07 16:39:17.037452] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:50.306 [2024-06-07 16:39:17.037457] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:50.306 [2024-06-07 16:39:17.037462] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:50.306 [2024-06-07 16:39:17.037472] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:50.306 qpair failed and we were unable to recover it. 
00:30:50.306 [2024-06-07 16:39:17.047369] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:50.306 [2024-06-07 16:39:17.047428] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:50.306 [2024-06-07 16:39:17.047441] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:50.306 [2024-06-07 16:39:17.047446] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:50.306 [2024-06-07 16:39:17.047451] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:50.306 [2024-06-07 16:39:17.047461] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:50.306 qpair failed and we were unable to recover it. 
00:30:50.306 [2024-06-07 16:39:17.057415] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:50.306 [2024-06-07 16:39:17.057476] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:50.306 [2024-06-07 16:39:17.057488] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:50.307 [2024-06-07 16:39:17.057493] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:50.307 [2024-06-07 16:39:17.057498] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:50.307 [2024-06-07 16:39:17.057512] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:50.307 qpair failed and we were unable to recover it. 
00:30:50.307 [2024-06-07 16:39:17.067550] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:50.307 [2024-06-07 16:39:17.067625] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:50.307 [2024-06-07 16:39:17.067637] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:50.307 [2024-06-07 16:39:17.067642] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:50.307 [2024-06-07 16:39:17.067647] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:50.307 [2024-06-07 16:39:17.067658] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:50.307 qpair failed and we were unable to recover it. 
00:30:50.307 [2024-06-07 16:39:17.077437] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:50.307 [2024-06-07 16:39:17.077497] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:50.307 [2024-06-07 16:39:17.077509] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:50.307 [2024-06-07 16:39:17.077515] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:50.307 [2024-06-07 16:39:17.077519] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90
00:30:50.307 [2024-06-07 16:39:17.077530] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:50.307 qpair failed and we were unable to recover it.
00:30:50.307 [2024-06-07 16:39:17.087462] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:50.307 [2024-06-07 16:39:17.087515] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:50.307 [2024-06-07 16:39:17.087527] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:50.307 [2024-06-07 16:39:17.087533] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:50.307 [2024-06-07 16:39:17.087537] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90
00:30:50.307 [2024-06-07 16:39:17.087548] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:50.307 qpair failed and we were unable to recover it.
00:30:50.307 [2024-06-07 16:39:17.097555] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:50.307 [2024-06-07 16:39:17.097618] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:50.307 [2024-06-07 16:39:17.097630] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:50.307 [2024-06-07 16:39:17.097635] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:50.307 [2024-06-07 16:39:17.097639] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90
00:30:50.307 [2024-06-07 16:39:17.097650] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:50.307 qpair failed and we were unable to recover it.
00:30:50.307 [2024-06-07 16:39:17.107478] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:50.307 [2024-06-07 16:39:17.107552] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:50.307 [2024-06-07 16:39:17.107564] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:50.307 [2024-06-07 16:39:17.107569] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:50.307 [2024-06-07 16:39:17.107574] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90
00:30:50.307 [2024-06-07 16:39:17.107584] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:50.307 qpair failed and we were unable to recover it.
00:30:50.307 [2024-06-07 16:39:17.117570] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:50.307 [2024-06-07 16:39:17.117635] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:50.307 [2024-06-07 16:39:17.117647] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:50.307 [2024-06-07 16:39:17.117652] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:50.307 [2024-06-07 16:39:17.117657] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90
00:30:50.307 [2024-06-07 16:39:17.117667] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:50.307 qpair failed and we were unable to recover it.
00:30:50.307 [2024-06-07 16:39:17.127640] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:50.307 [2024-06-07 16:39:17.127703] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:50.307 [2024-06-07 16:39:17.127714] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:50.307 [2024-06-07 16:39:17.127719] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:50.307 [2024-06-07 16:39:17.127724] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90
00:30:50.307 [2024-06-07 16:39:17.127734] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:50.307 qpair failed and we were unable to recover it.
00:30:50.307 [2024-06-07 16:39:17.137645] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:50.307 [2024-06-07 16:39:17.137702] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:50.307 [2024-06-07 16:39:17.137713] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:50.307 [2024-06-07 16:39:17.137718] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:50.307 [2024-06-07 16:39:17.137723] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90
00:30:50.307 [2024-06-07 16:39:17.137733] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:50.307 qpair failed and we were unable to recover it.
00:30:50.307 [2024-06-07 16:39:17.147680] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:50.307 [2024-06-07 16:39:17.147745] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:50.307 [2024-06-07 16:39:17.147757] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:50.307 [2024-06-07 16:39:17.147762] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:50.307 [2024-06-07 16:39:17.147769] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90
00:30:50.307 [2024-06-07 16:39:17.147780] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:50.307 qpair failed and we were unable to recover it.
00:30:50.570 [2024-06-07 16:39:17.157661] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:50.570 [2024-06-07 16:39:17.157762] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:50.570 [2024-06-07 16:39:17.157776] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:50.570 [2024-06-07 16:39:17.157782] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:50.570 [2024-06-07 16:39:17.157789] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90
00:30:50.570 [2024-06-07 16:39:17.157801] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:50.570 qpair failed and we were unable to recover it.
00:30:50.570 [2024-06-07 16:39:17.167615] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:50.570 [2024-06-07 16:39:17.167669] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:50.570 [2024-06-07 16:39:17.167681] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:50.570 [2024-06-07 16:39:17.167687] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:50.570 [2024-06-07 16:39:17.167691] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90
00:30:50.570 [2024-06-07 16:39:17.167702] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:50.570 qpair failed and we were unable to recover it.
00:30:50.570 [2024-06-07 16:39:17.177771] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:50.570 [2024-06-07 16:39:17.177849] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:50.570 [2024-06-07 16:39:17.177862] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:50.570 [2024-06-07 16:39:17.177867] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:50.570 [2024-06-07 16:39:17.177873] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90
00:30:50.570 [2024-06-07 16:39:17.177884] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:50.570 qpair failed and we were unable to recover it.
00:30:50.570 [2024-06-07 16:39:17.187772] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:50.570 [2024-06-07 16:39:17.187836] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:50.570 [2024-06-07 16:39:17.187848] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:50.570 [2024-06-07 16:39:17.187853] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:50.570 [2024-06-07 16:39:17.187858] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90
00:30:50.570 [2024-06-07 16:39:17.187869] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:50.570 qpair failed and we were unable to recover it.
00:30:50.570 [2024-06-07 16:39:17.197635] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:50.570 [2024-06-07 16:39:17.197688] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:50.570 [2024-06-07 16:39:17.197700] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:50.570 [2024-06-07 16:39:17.197705] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:50.570 [2024-06-07 16:39:17.197710] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90
00:30:50.570 [2024-06-07 16:39:17.197720] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:50.570 qpair failed and we were unable to recover it.
00:30:50.570 [2024-06-07 16:39:17.207668] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:50.570 [2024-06-07 16:39:17.207725] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:50.570 [2024-06-07 16:39:17.207737] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:50.570 [2024-06-07 16:39:17.207742] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:50.570 [2024-06-07 16:39:17.207746] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90
00:30:50.570 [2024-06-07 16:39:17.207757] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:50.570 qpair failed and we were unable to recover it.
00:30:50.570 [2024-06-07 16:39:17.217856] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:50.570 [2024-06-07 16:39:17.217916] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:50.570 [2024-06-07 16:39:17.217929] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:50.570 [2024-06-07 16:39:17.217934] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:50.570 [2024-06-07 16:39:17.217939] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90
00:30:50.570 [2024-06-07 16:39:17.217949] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:50.570 qpair failed and we were unable to recover it.
00:30:50.570 [2024-06-07 16:39:17.227865] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:50.570 [2024-06-07 16:39:17.227928] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:50.570 [2024-06-07 16:39:17.227940] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:50.570 [2024-06-07 16:39:17.227945] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:50.570 [2024-06-07 16:39:17.227949] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90
00:30:50.570 [2024-06-07 16:39:17.227960] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:50.570 qpair failed and we were unable to recover it.
00:30:50.570 [2024-06-07 16:39:17.237869] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:50.570 [2024-06-07 16:39:17.237925] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:50.570 [2024-06-07 16:39:17.237936] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:50.570 [2024-06-07 16:39:17.237945] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:50.570 [2024-06-07 16:39:17.237949] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90
00:30:50.570 [2024-06-07 16:39:17.237960] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:50.570 qpair failed and we were unable to recover it.
00:30:50.570 [2024-06-07 16:39:17.247934] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:50.570 [2024-06-07 16:39:17.247992] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:50.570 [2024-06-07 16:39:17.248004] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:50.570 [2024-06-07 16:39:17.248009] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:50.570 [2024-06-07 16:39:17.248013] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90
00:30:50.570 [2024-06-07 16:39:17.248024] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:50.570 qpair failed and we were unable to recover it.
00:30:50.570 [2024-06-07 16:39:17.257992] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:50.570 [2024-06-07 16:39:17.258055] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:50.570 [2024-06-07 16:39:17.258066] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:50.570 [2024-06-07 16:39:17.258071] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:50.570 [2024-06-07 16:39:17.258076] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90
00:30:50.570 [2024-06-07 16:39:17.258086] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:50.570 qpair failed and we were unable to recover it.
00:30:50.570 [2024-06-07 16:39:17.267997] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:50.570 [2024-06-07 16:39:17.268060] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:50.570 [2024-06-07 16:39:17.268072] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:50.570 [2024-06-07 16:39:17.268077] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:50.570 [2024-06-07 16:39:17.268082] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90
00:30:50.570 [2024-06-07 16:39:17.268092] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:50.570 qpair failed and we were unable to recover it.
00:30:50.570 [2024-06-07 16:39:17.277983] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:50.570 [2024-06-07 16:39:17.278040] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:50.570 [2024-06-07 16:39:17.278058] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:50.570 [2024-06-07 16:39:17.278065] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:50.570 [2024-06-07 16:39:17.278070] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90
00:30:50.570 [2024-06-07 16:39:17.278083] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:50.570 qpair failed and we were unable to recover it.
00:30:50.571 [2024-06-07 16:39:17.288019] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:50.571 [2024-06-07 16:39:17.288079] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:50.571 [2024-06-07 16:39:17.288097] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:50.571 [2024-06-07 16:39:17.288104] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:50.571 [2024-06-07 16:39:17.288109] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90
00:30:50.571 [2024-06-07 16:39:17.288122] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:50.571 qpair failed and we were unable to recover it.
00:30:50.571 [2024-06-07 16:39:17.298126] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:50.571 [2024-06-07 16:39:17.298235] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:50.571 [2024-06-07 16:39:17.298249] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:50.571 [2024-06-07 16:39:17.298255] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:50.571 [2024-06-07 16:39:17.298260] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90
00:30:50.571 [2024-06-07 16:39:17.298271] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:50.571 qpair failed and we were unable to recover it.
00:30:50.571 [2024-06-07 16:39:17.308108] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:50.571 [2024-06-07 16:39:17.308175] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:50.571 [2024-06-07 16:39:17.308193] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:50.571 [2024-06-07 16:39:17.308199] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:50.571 [2024-06-07 16:39:17.308205] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90
00:30:50.571 [2024-06-07 16:39:17.308218] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:50.571 qpair failed and we were unable to recover it.
00:30:50.571 [2024-06-07 16:39:17.318102] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:50.571 [2024-06-07 16:39:17.318158] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:50.571 [2024-06-07 16:39:17.318172] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:50.571 [2024-06-07 16:39:17.318177] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:50.571 [2024-06-07 16:39:17.318182] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90
00:30:50.571 [2024-06-07 16:39:17.318193] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:50.571 qpair failed and we were unable to recover it.
00:30:50.571 [2024-06-07 16:39:17.328113] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:50.571 [2024-06-07 16:39:17.328175] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:50.571 [2024-06-07 16:39:17.328197] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:50.571 [2024-06-07 16:39:17.328203] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:50.571 [2024-06-07 16:39:17.328208] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90
00:30:50.571 [2024-06-07 16:39:17.328222] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:50.571 qpair failed and we were unable to recover it.
00:30:50.571 [2024-06-07 16:39:17.338192] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:50.571 [2024-06-07 16:39:17.338297] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:50.571 [2024-06-07 16:39:17.338311] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:50.571 [2024-06-07 16:39:17.338316] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:50.571 [2024-06-07 16:39:17.338321] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90
00:30:50.571 [2024-06-07 16:39:17.338332] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:50.571 qpair failed and we were unable to recover it.
00:30:50.571 [2024-06-07 16:39:17.348208] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:50.571 [2024-06-07 16:39:17.348275] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:50.571 [2024-06-07 16:39:17.348289] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:50.571 [2024-06-07 16:39:17.348295] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:50.571 [2024-06-07 16:39:17.348299] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90
00:30:50.571 [2024-06-07 16:39:17.348312] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:50.571 qpair failed and we were unable to recover it.
00:30:50.571 [2024-06-07 16:39:17.358216] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:50.571 [2024-06-07 16:39:17.358267] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:50.571 [2024-06-07 16:39:17.358280] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:50.571 [2024-06-07 16:39:17.358285] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:50.571 [2024-06-07 16:39:17.358290] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90
00:30:50.571 [2024-06-07 16:39:17.358301] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:50.571 qpair failed and we were unable to recover it.
00:30:50.571 [2024-06-07 16:39:17.368215] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:50.571 [2024-06-07 16:39:17.368268] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:50.571 [2024-06-07 16:39:17.368280] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:50.571 [2024-06-07 16:39:17.368285] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:50.571 [2024-06-07 16:39:17.368290] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90
00:30:50.571 [2024-06-07 16:39:17.368304] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:50.571 qpair failed and we were unable to recover it.
00:30:50.571 [2024-06-07 16:39:17.378222] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:50.571 [2024-06-07 16:39:17.378284] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:50.571 [2024-06-07 16:39:17.378297] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:50.571 [2024-06-07 16:39:17.378302] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:50.571 [2024-06-07 16:39:17.378306] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90
00:30:50.571 [2024-06-07 16:39:17.378317] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:50.571 qpair failed and we were unable to recover it.
00:30:50.571 [2024-06-07 16:39:17.388280] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:50.571 [2024-06-07 16:39:17.388343] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:50.571 [2024-06-07 16:39:17.388355] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:50.571 [2024-06-07 16:39:17.388361] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:50.571 [2024-06-07 16:39:17.388365] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90
00:30:50.571 [2024-06-07 16:39:17.388376] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:50.571 qpair failed and we were unable to recover it.
00:30:50.571 [2024-06-07 16:39:17.398320] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:50.571 [2024-06-07 16:39:17.398371] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:50.571 [2024-06-07 16:39:17.398383] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:50.571 [2024-06-07 16:39:17.398388] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:50.571 [2024-06-07 16:39:17.398393] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90
00:30:50.571 [2024-06-07 16:39:17.398407] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:50.571 qpair failed and we were unable to recover it.
00:30:50.571 [2024-06-07 16:39:17.408317] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:50.571 [2024-06-07 16:39:17.408369] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:50.571 [2024-06-07 16:39:17.408380] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:50.571 [2024-06-07 16:39:17.408386] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:50.571 [2024-06-07 16:39:17.408391] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90
00:30:50.571 [2024-06-07 16:39:17.408405] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:50.571 qpair failed and we were unable to recover it.
00:30:50.571 [2024-06-07 16:39:17.418293] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:50.571 [2024-06-07 16:39:17.418355] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:50.571 [2024-06-07 16:39:17.418369] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:50.572 [2024-06-07 16:39:17.418375] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:50.572 [2024-06-07 16:39:17.418379] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90
00:30:50.572 [2024-06-07 16:39:17.418390] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:50.572 qpair failed and we were unable to recover it.
00:30:50.834 [2024-06-07 16:39:17.428498] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:50.834 [2024-06-07 16:39:17.428602] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:50.834 [2024-06-07 16:39:17.428615] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:50.834 [2024-06-07 16:39:17.428620] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:50.834 [2024-06-07 16:39:17.428625] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90
00:30:50.834 [2024-06-07 16:39:17.428636] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:50.834 qpair failed and we were unable to recover it.
00:30:50.834 [2024-06-07 16:39:17.438308] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:50.834 [2024-06-07 16:39:17.438362] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:50.834 [2024-06-07 16:39:17.438374] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:50.834 [2024-06-07 16:39:17.438379] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:50.834 [2024-06-07 16:39:17.438384] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:50.834 [2024-06-07 16:39:17.438394] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:50.834 qpair failed and we were unable to recover it. 
00:30:50.834 [2024-06-07 16:39:17.448444] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:50.834 [2024-06-07 16:39:17.448504] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:50.834 [2024-06-07 16:39:17.448516] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:50.834 [2024-06-07 16:39:17.448521] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:50.834 [2024-06-07 16:39:17.448526] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:50.834 [2024-06-07 16:39:17.448536] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:50.834 qpair failed and we were unable to recover it. 
00:30:50.834 [2024-06-07 16:39:17.458540] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:50.834 [2024-06-07 16:39:17.458604] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:50.834 [2024-06-07 16:39:17.458615] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:50.834 [2024-06-07 16:39:17.458620] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:50.834 [2024-06-07 16:39:17.458625] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:50.834 [2024-06-07 16:39:17.458639] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:50.834 qpair failed and we were unable to recover it. 
00:30:50.834 [2024-06-07 16:39:17.468443] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:50.834 [2024-06-07 16:39:17.468510] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:50.834 [2024-06-07 16:39:17.468522] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:50.834 [2024-06-07 16:39:17.468527] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:50.834 [2024-06-07 16:39:17.468532] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:50.834 [2024-06-07 16:39:17.468542] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:50.834 qpair failed and we were unable to recover it. 
00:30:50.834 [2024-06-07 16:39:17.478540] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:50.834 [2024-06-07 16:39:17.478590] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:50.834 [2024-06-07 16:39:17.478602] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:50.834 [2024-06-07 16:39:17.478607] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:50.834 [2024-06-07 16:39:17.478612] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:50.834 [2024-06-07 16:39:17.478622] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:50.834 qpair failed and we were unable to recover it. 
00:30:50.834 [2024-06-07 16:39:17.488519] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:50.834 [2024-06-07 16:39:17.488576] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:50.834 [2024-06-07 16:39:17.488588] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:50.834 [2024-06-07 16:39:17.488594] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:50.834 [2024-06-07 16:39:17.488599] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:50.834 [2024-06-07 16:39:17.488609] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:50.834 qpair failed and we were unable to recover it. 
00:30:50.834 [2024-06-07 16:39:17.498661] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:50.834 [2024-06-07 16:39:17.498721] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:50.834 [2024-06-07 16:39:17.498733] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:50.834 [2024-06-07 16:39:17.498739] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:50.834 [2024-06-07 16:39:17.498743] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:50.834 [2024-06-07 16:39:17.498754] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:50.834 qpair failed and we were unable to recover it. 
00:30:50.834 [2024-06-07 16:39:17.508715] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:50.834 [2024-06-07 16:39:17.508824] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:50.834 [2024-06-07 16:39:17.508837] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:50.834 [2024-06-07 16:39:17.508842] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:50.834 [2024-06-07 16:39:17.508847] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:50.834 [2024-06-07 16:39:17.508858] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:50.834 qpair failed and we were unable to recover it. 
00:30:50.834 [2024-06-07 16:39:17.518640] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:50.834 [2024-06-07 16:39:17.518694] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:50.834 [2024-06-07 16:39:17.518706] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:50.834 [2024-06-07 16:39:17.518711] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:50.834 [2024-06-07 16:39:17.518716] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:50.834 [2024-06-07 16:39:17.518726] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:50.834 qpair failed and we were unable to recover it. 
00:30:50.834 [2024-06-07 16:39:17.528643] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:50.835 [2024-06-07 16:39:17.528695] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:50.835 [2024-06-07 16:39:17.528707] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:50.835 [2024-06-07 16:39:17.528713] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:50.835 [2024-06-07 16:39:17.528717] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:50.835 [2024-06-07 16:39:17.528728] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:50.835 qpair failed and we were unable to recover it. 
00:30:50.835 [2024-06-07 16:39:17.538743] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:50.835 [2024-06-07 16:39:17.538807] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:50.835 [2024-06-07 16:39:17.538819] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:50.835 [2024-06-07 16:39:17.538824] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:50.835 [2024-06-07 16:39:17.538829] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:50.835 [2024-06-07 16:39:17.538839] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:50.835 qpair failed and we were unable to recover it. 
00:30:50.835 [2024-06-07 16:39:17.548763] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:50.835 [2024-06-07 16:39:17.548825] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:50.835 [2024-06-07 16:39:17.548837] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:50.835 [2024-06-07 16:39:17.548842] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:50.835 [2024-06-07 16:39:17.548849] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:50.835 [2024-06-07 16:39:17.548860] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:50.835 qpair failed and we were unable to recover it. 
00:30:50.835 [2024-06-07 16:39:17.558750] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:50.835 [2024-06-07 16:39:17.558803] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:50.835 [2024-06-07 16:39:17.558815] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:50.835 [2024-06-07 16:39:17.558820] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:50.835 [2024-06-07 16:39:17.558825] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:50.835 [2024-06-07 16:39:17.558835] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:50.835 qpair failed and we were unable to recover it. 
00:30:50.835 [2024-06-07 16:39:17.568769] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:50.835 [2024-06-07 16:39:17.568828] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:50.835 [2024-06-07 16:39:17.568840] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:50.835 [2024-06-07 16:39:17.568845] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:50.835 [2024-06-07 16:39:17.568850] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:50.835 [2024-06-07 16:39:17.568860] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:50.835 qpair failed and we were unable to recover it. 
00:30:50.835 [2024-06-07 16:39:17.578848] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:50.835 [2024-06-07 16:39:17.578906] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:50.835 [2024-06-07 16:39:17.578918] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:50.835 [2024-06-07 16:39:17.578923] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:50.835 [2024-06-07 16:39:17.578928] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:50.835 [2024-06-07 16:39:17.578938] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:50.835 qpair failed and we were unable to recover it. 
00:30:50.835 [2024-06-07 16:39:17.588918] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:50.835 [2024-06-07 16:39:17.588984] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:50.835 [2024-06-07 16:39:17.588996] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:50.835 [2024-06-07 16:39:17.589002] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:50.835 [2024-06-07 16:39:17.589006] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:50.835 [2024-06-07 16:39:17.589017] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:50.835 qpair failed and we were unable to recover it. 
00:30:50.835 [2024-06-07 16:39:17.598845] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:50.835 [2024-06-07 16:39:17.598909] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:50.835 [2024-06-07 16:39:17.598921] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:50.835 [2024-06-07 16:39:17.598926] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:50.835 [2024-06-07 16:39:17.598931] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:50.835 [2024-06-07 16:39:17.598941] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:50.835 qpair failed and we were unable to recover it. 
00:30:50.835 [2024-06-07 16:39:17.608887] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:50.835 [2024-06-07 16:39:17.608939] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:50.835 [2024-06-07 16:39:17.608951] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:50.835 [2024-06-07 16:39:17.608956] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:50.835 [2024-06-07 16:39:17.608961] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:50.835 [2024-06-07 16:39:17.608971] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:50.835 qpair failed and we were unable to recover it. 
00:30:50.835 [2024-06-07 16:39:17.618954] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:50.835 [2024-06-07 16:39:17.619015] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:50.835 [2024-06-07 16:39:17.619026] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:50.835 [2024-06-07 16:39:17.619032] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:50.835 [2024-06-07 16:39:17.619036] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:50.835 [2024-06-07 16:39:17.619046] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:50.835 qpair failed and we were unable to recover it. 
00:30:50.835 [2024-06-07 16:39:17.628977] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:50.835 [2024-06-07 16:39:17.629041] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:50.835 [2024-06-07 16:39:17.629053] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:50.835 [2024-06-07 16:39:17.629058] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:50.835 [2024-06-07 16:39:17.629063] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:50.835 [2024-06-07 16:39:17.629073] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:50.835 qpair failed and we were unable to recover it. 
00:30:50.835 [2024-06-07 16:39:17.638976] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:50.835 [2024-06-07 16:39:17.639032] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:50.835 [2024-06-07 16:39:17.639044] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:50.835 [2024-06-07 16:39:17.639052] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:50.835 [2024-06-07 16:39:17.639056] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:50.835 [2024-06-07 16:39:17.639067] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:50.835 qpair failed and we were unable to recover it. 
00:30:50.835 [2024-06-07 16:39:17.648999] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:50.835 [2024-06-07 16:39:17.649056] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:50.835 [2024-06-07 16:39:17.649068] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:50.835 [2024-06-07 16:39:17.649073] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:50.835 [2024-06-07 16:39:17.649078] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:50.835 [2024-06-07 16:39:17.649088] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:50.835 qpair failed and we were unable to recover it. 
00:30:50.835 [2024-06-07 16:39:17.659034] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:50.835 [2024-06-07 16:39:17.659102] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:50.835 [2024-06-07 16:39:17.659121] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:50.835 [2024-06-07 16:39:17.659127] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:50.836 [2024-06-07 16:39:17.659132] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:50.836 [2024-06-07 16:39:17.659146] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:50.836 qpair failed and we were unable to recover it. 
00:30:50.836 [2024-06-07 16:39:17.669082] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:50.836 [2024-06-07 16:39:17.669149] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:50.836 [2024-06-07 16:39:17.669168] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:50.836 [2024-06-07 16:39:17.669174] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:50.836 [2024-06-07 16:39:17.669179] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:50.836 [2024-06-07 16:39:17.669192] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:50.836 qpair failed and we were unable to recover it. 
00:30:50.836 [2024-06-07 16:39:17.679076] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:50.836 [2024-06-07 16:39:17.679134] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:50.836 [2024-06-07 16:39:17.679152] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:50.836 [2024-06-07 16:39:17.679159] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:50.836 [2024-06-07 16:39:17.679163] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:50.836 [2024-06-07 16:39:17.679177] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:50.836 qpair failed and we were unable to recover it. 
00:30:51.098 [2024-06-07 16:39:17.689113] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:51.098 [2024-06-07 16:39:17.689175] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:51.098 [2024-06-07 16:39:17.689194] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:51.098 [2024-06-07 16:39:17.689200] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:51.098 [2024-06-07 16:39:17.689205] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:51.098 [2024-06-07 16:39:17.689219] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:51.098 qpair failed and we were unable to recover it. 
00:30:51.098 [2024-06-07 16:39:17.699178] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:51.098 [2024-06-07 16:39:17.699242] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:51.098 [2024-06-07 16:39:17.699260] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:51.098 [2024-06-07 16:39:17.699267] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:51.098 [2024-06-07 16:39:17.699272] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:51.098 [2024-06-07 16:39:17.699285] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:51.098 qpair failed and we were unable to recover it. 
00:30:51.098 [2024-06-07 16:39:17.709098] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:51.098 [2024-06-07 16:39:17.709167] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:51.098 [2024-06-07 16:39:17.709180] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:51.098 [2024-06-07 16:39:17.709186] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:51.098 [2024-06-07 16:39:17.709190] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:51.098 [2024-06-07 16:39:17.709202] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:51.098 qpair failed and we were unable to recover it. 
00:30:51.098 [2024-06-07 16:39:17.719074] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:51.098 [2024-06-07 16:39:17.719182] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:51.098 [2024-06-07 16:39:17.719195] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:51.098 [2024-06-07 16:39:17.719201] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:51.098 [2024-06-07 16:39:17.719205] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:51.098 [2024-06-07 16:39:17.719217] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:51.098 qpair failed and we were unable to recover it. 
00:30:51.098 [2024-06-07 16:39:17.729098] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:51.099 [2024-06-07 16:39:17.729152] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:51.099 [2024-06-07 16:39:17.729164] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:51.099 [2024-06-07 16:39:17.729173] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:51.099 [2024-06-07 16:39:17.729179] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:51.099 [2024-06-07 16:39:17.729190] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:51.099 qpair failed and we were unable to recover it. 
00:30:51.099 [2024-06-07 16:39:17.739351] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:51.099 [2024-06-07 16:39:17.739420] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:51.099 [2024-06-07 16:39:17.739432] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:51.099 [2024-06-07 16:39:17.739438] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:51.099 [2024-06-07 16:39:17.739442] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:51.099 [2024-06-07 16:39:17.739453] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:51.099 qpair failed and we were unable to recover it. 
00:30:51.099 [2024-06-07 16:39:17.749223] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:51.099 [2024-06-07 16:39:17.749322] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:51.099 [2024-06-07 16:39:17.749334] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:51.099 [2024-06-07 16:39:17.749340] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:51.099 [2024-06-07 16:39:17.749344] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:51.099 [2024-06-07 16:39:17.749356] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:51.099 qpair failed and we were unable to recover it. 
00:30:51.099 [2024-06-07 16:39:17.759289] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:51.099 [2024-06-07 16:39:17.759341] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:51.099 [2024-06-07 16:39:17.759353] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:51.099 [2024-06-07 16:39:17.759358] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:51.099 [2024-06-07 16:39:17.759363] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:51.099 [2024-06-07 16:39:17.759373] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:51.099 qpair failed and we were unable to recover it. 
00:30:51.099 [2024-06-07 16:39:17.769324] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:51.099 [2024-06-07 16:39:17.769381] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:51.099 [2024-06-07 16:39:17.769392] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:51.099 [2024-06-07 16:39:17.769397] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:51.099 [2024-06-07 16:39:17.769405] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:51.099 [2024-06-07 16:39:17.769416] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:51.099 qpair failed and we were unable to recover it. 
00:30:51.099 [2024-06-07 16:39:17.779397] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:51.099 [2024-06-07 16:39:17.779462] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:51.099 [2024-06-07 16:39:17.779474] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:51.099 [2024-06-07 16:39:17.779480] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:51.099 [2024-06-07 16:39:17.779484] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:51.099 [2024-06-07 16:39:17.779495] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:51.099 qpair failed and we were unable to recover it. 
00:30:51.099 [2024-06-07 16:39:17.789419] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:51.099 [2024-06-07 16:39:17.789482] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:51.099 [2024-06-07 16:39:17.789494] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:51.099 [2024-06-07 16:39:17.789499] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:51.099 [2024-06-07 16:39:17.789504] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:51.099 [2024-06-07 16:39:17.789515] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:51.099 qpair failed and we were unable to recover it. 
00:30:51.099 [2024-06-07 16:39:17.799398] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:51.099 [2024-06-07 16:39:17.799492] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:51.099 [2024-06-07 16:39:17.799504] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:51.099 [2024-06-07 16:39:17.799509] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:51.099 [2024-06-07 16:39:17.799513] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:51.099 [2024-06-07 16:39:17.799524] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:51.099 qpair failed and we were unable to recover it. 
00:30:51.099 [2024-06-07 16:39:17.809424] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:51.099 [2024-06-07 16:39:17.809484] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:51.099 [2024-06-07 16:39:17.809496] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:51.099 [2024-06-07 16:39:17.809502] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:51.099 [2024-06-07 16:39:17.809506] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:51.099 [2024-06-07 16:39:17.809517] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:51.099 qpair failed and we were unable to recover it. 
00:30:51.099 [2024-06-07 16:39:17.819457] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:51.099 [2024-06-07 16:39:17.819518] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:51.099 [2024-06-07 16:39:17.819533] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:51.099 [2024-06-07 16:39:17.819538] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:51.099 [2024-06-07 16:39:17.819543] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:51.099 [2024-06-07 16:39:17.819553] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:51.099 qpair failed and we were unable to recover it. 
00:30:51.099 [2024-06-07 16:39:17.829509] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:51.099 [2024-06-07 16:39:17.829571] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:51.099 [2024-06-07 16:39:17.829583] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:51.099 [2024-06-07 16:39:17.829588] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:51.099 [2024-06-07 16:39:17.829593] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:51.099 [2024-06-07 16:39:17.829603] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:51.099 qpair failed and we were unable to recover it. 
00:30:51.099 [2024-06-07 16:39:17.839537] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:51.099 [2024-06-07 16:39:17.839589] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:51.099 [2024-06-07 16:39:17.839601] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:51.099 [2024-06-07 16:39:17.839606] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:51.099 [2024-06-07 16:39:17.839610] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:51.099 [2024-06-07 16:39:17.839621] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:51.099 qpair failed and we were unable to recover it. 
00:30:51.099 [2024-06-07 16:39:17.849544] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:51.099 [2024-06-07 16:39:17.849599] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:51.099 [2024-06-07 16:39:17.849611] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:51.099 [2024-06-07 16:39:17.849617] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:51.099 [2024-06-07 16:39:17.849621] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:51.099 [2024-06-07 16:39:17.849632] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:51.099 qpair failed and we were unable to recover it. 
00:30:51.099 [2024-06-07 16:39:17.859605] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:51.099 [2024-06-07 16:39:17.859666] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:51.099 [2024-06-07 16:39:17.859678] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:51.099 [2024-06-07 16:39:17.859683] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:51.100 [2024-06-07 16:39:17.859688] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:51.100 [2024-06-07 16:39:17.859702] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:51.100 qpair failed and we were unable to recover it. 
00:30:51.100 [2024-06-07 16:39:17.869646] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:51.100 [2024-06-07 16:39:17.869718] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:51.100 [2024-06-07 16:39:17.869730] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:51.100 [2024-06-07 16:39:17.869735] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:51.100 [2024-06-07 16:39:17.869740] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:51.100 [2024-06-07 16:39:17.869750] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:51.100 qpair failed and we were unable to recover it. 
00:30:51.100 [2024-06-07 16:39:17.879635] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:51.100 [2024-06-07 16:39:17.879689] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:51.100 [2024-06-07 16:39:17.879701] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:51.100 [2024-06-07 16:39:17.879706] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:51.100 [2024-06-07 16:39:17.879711] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:51.100 [2024-06-07 16:39:17.879723] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:51.100 qpair failed and we were unable to recover it. 
00:30:51.100 [2024-06-07 16:39:17.889705] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:51.100 [2024-06-07 16:39:17.889762] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:51.100 [2024-06-07 16:39:17.889775] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:51.100 [2024-06-07 16:39:17.889781] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:51.100 [2024-06-07 16:39:17.889785] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:51.100 [2024-06-07 16:39:17.889796] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:51.100 qpair failed and we were unable to recover it. 
00:30:51.100 [2024-06-07 16:39:17.899714] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:51.100 [2024-06-07 16:39:17.899815] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:51.100 [2024-06-07 16:39:17.899828] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:51.100 [2024-06-07 16:39:17.899833] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:51.100 [2024-06-07 16:39:17.899838] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:51.100 [2024-06-07 16:39:17.899848] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:51.100 qpair failed and we were unable to recover it. 
00:30:51.100 [2024-06-07 16:39:17.909744] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:51.100 [2024-06-07 16:39:17.909830] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:51.100 [2024-06-07 16:39:17.909847] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:51.100 [2024-06-07 16:39:17.909853] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:51.100 [2024-06-07 16:39:17.909858] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:51.100 [2024-06-07 16:39:17.909869] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:51.100 qpair failed and we were unable to recover it. 
00:30:51.100 [2024-06-07 16:39:17.919746] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:51.100 [2024-06-07 16:39:17.919800] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:51.100 [2024-06-07 16:39:17.919812] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:51.100 [2024-06-07 16:39:17.919817] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:51.100 [2024-06-07 16:39:17.919821] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:51.100 [2024-06-07 16:39:17.919832] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:51.100 qpair failed and we were unable to recover it. 
00:30:51.100 [2024-06-07 16:39:17.929763] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:51.100 [2024-06-07 16:39:17.929821] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:51.100 [2024-06-07 16:39:17.929833] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:51.100 [2024-06-07 16:39:17.929838] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:51.100 [2024-06-07 16:39:17.929843] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:51.100 [2024-06-07 16:39:17.929853] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:51.100 qpair failed and we were unable to recover it. 
00:30:51.100 [2024-06-07 16:39:17.939888] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:51.100 [2024-06-07 16:39:17.939945] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:51.100 [2024-06-07 16:39:17.939957] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:51.100 [2024-06-07 16:39:17.939963] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:51.100 [2024-06-07 16:39:17.939967] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:51.100 [2024-06-07 16:39:17.939977] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:51.100 qpair failed and we were unable to recover it. 
00:30:51.100 [2024-06-07 16:39:17.949865] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:51.100 [2024-06-07 16:39:17.949931] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:51.100 [2024-06-07 16:39:17.949943] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:51.100 [2024-06-07 16:39:17.949948] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:51.100 [2024-06-07 16:39:17.949959] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:51.100 [2024-06-07 16:39:17.949970] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:51.100 qpair failed and we were unable to recover it. 
00:30:51.362 [2024-06-07 16:39:17.959847] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:51.362 [2024-06-07 16:39:17.959902] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:51.362 [2024-06-07 16:39:17.959913] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:51.362 [2024-06-07 16:39:17.959918] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:51.362 [2024-06-07 16:39:17.959923] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:51.362 [2024-06-07 16:39:17.959933] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:51.362 qpair failed and we were unable to recover it. 
00:30:51.362 [2024-06-07 16:39:17.969892] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:51.362 [2024-06-07 16:39:17.969988] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:51.362 [2024-06-07 16:39:17.970001] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:51.362 [2024-06-07 16:39:17.970006] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:51.362 [2024-06-07 16:39:17.970010] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:51.362 [2024-06-07 16:39:17.970021] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:51.362 qpair failed and we were unable to recover it. 
00:30:51.362 [2024-06-07 16:39:17.980012] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:51.362 [2024-06-07 16:39:17.980083] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:51.362 [2024-06-07 16:39:17.980101] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:51.362 [2024-06-07 16:39:17.980108] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:51.362 [2024-06-07 16:39:17.980113] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:51.362 [2024-06-07 16:39:17.980126] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:51.362 qpair failed and we were unable to recover it. 
00:30:51.362 [2024-06-07 16:39:17.989923] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:51.362 [2024-06-07 16:39:17.989990] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:51.362 [2024-06-07 16:39:17.990003] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:51.363 [2024-06-07 16:39:17.990009] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:51.363 [2024-06-07 16:39:17.990014] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:51.363 [2024-06-07 16:39:17.990026] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:51.363 qpair failed and we were unable to recover it. 
00:30:51.363 [2024-06-07 16:39:17.999944] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:51.363 [2024-06-07 16:39:18.000005] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:51.363 [2024-06-07 16:39:18.000019] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:51.363 [2024-06-07 16:39:18.000025] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:51.363 [2024-06-07 16:39:18.000030] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:51.363 [2024-06-07 16:39:18.000041] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:51.363 qpair failed and we were unable to recover it. 
00:30:51.363 [2024-06-07 16:39:18.010061] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:51.363 [2024-06-07 16:39:18.010136] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:51.363 [2024-06-07 16:39:18.010150] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:51.363 [2024-06-07 16:39:18.010155] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:51.363 [2024-06-07 16:39:18.010160] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:51.363 [2024-06-07 16:39:18.010171] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:51.363 qpair failed and we were unable to recover it. 
00:30:51.363 [2024-06-07 16:39:18.020046] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:51.363 [2024-06-07 16:39:18.020106] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:51.363 [2024-06-07 16:39:18.020119] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:51.363 [2024-06-07 16:39:18.020124] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:51.363 [2024-06-07 16:39:18.020128] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:51.363 [2024-06-07 16:39:18.020139] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:51.363 qpair failed and we were unable to recover it. 
00:30:51.363 [2024-06-07 16:39:18.030076] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:51.363 [2024-06-07 16:39:18.030148] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:51.363 [2024-06-07 16:39:18.030166] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:51.363 [2024-06-07 16:39:18.030172] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:51.363 [2024-06-07 16:39:18.030177] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:51.363 [2024-06-07 16:39:18.030191] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:51.363 qpair failed and we were unable to recover it. 
00:30:51.363 [2024-06-07 16:39:18.040058] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:51.363 [2024-06-07 16:39:18.040115] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:51.363 [2024-06-07 16:39:18.040134] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:51.363 [2024-06-07 16:39:18.040143] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:51.363 [2024-06-07 16:39:18.040148] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:51.363 [2024-06-07 16:39:18.040162] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:51.363 qpair failed and we were unable to recover it. 
00:30:51.363 [... identical CONNECT failure cycle (Unknown controller ID 0x1 -> Connect command failed, rc -5 -> sct 1, sc 130 -> Failed to connect tqpair=0x7f61d8000b90 -> CQ transport error -6 on qpair id 1 -> "qpair failed and we were unable to recover it.") repeated every ~10 ms from [2024-06-07 16:39:18.049981] through [2024-06-07 16:39:18.391159]; 35 further repetitions elided ...]
00:30:51.628 [2024-06-07 16:39:18.401040] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:51.628 [2024-06-07 16:39:18.401121] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:51.628 [2024-06-07 16:39:18.401133] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:51.628 [2024-06-07 16:39:18.401138] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:51.628 [2024-06-07 16:39:18.401143] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:51.628 [2024-06-07 16:39:18.401154] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:51.628 qpair failed and we were unable to recover it. 
00:30:51.628 [2024-06-07 16:39:18.411076] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:51.628 [2024-06-07 16:39:18.411152] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:51.628 [2024-06-07 16:39:18.411164] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:51.628 [2024-06-07 16:39:18.411169] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:51.628 [2024-06-07 16:39:18.411173] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:51.628 [2024-06-07 16:39:18.411184] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:51.628 qpair failed and we were unable to recover it. 
00:30:51.628 [2024-06-07 16:39:18.421113] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:51.628 [2024-06-07 16:39:18.421167] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:51.628 [2024-06-07 16:39:18.421178] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:51.628 [2024-06-07 16:39:18.421183] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:51.628 [2024-06-07 16:39:18.421188] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61d8000b90 00:30:51.628 [2024-06-07 16:39:18.421199] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:51.628 qpair failed and we were unable to recover it. 
00:30:51.628 [2024-06-07 16:39:18.431276] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:51.628 [2024-06-07 16:39:18.431464] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:51.628 [2024-06-07 16:39:18.431528] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:51.629 [2024-06-07 16:39:18.431554] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:51.629 [2024-06-07 16:39:18.431574] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61e0000b90 00:30:51.629 [2024-06-07 16:39:18.431628] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:51.629 qpair failed and we were unable to recover it. 
00:30:51.629 [2024-06-07 16:39:18.441176] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:51.629 [2024-06-07 16:39:18.441299] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:51.629 [2024-06-07 16:39:18.441335] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:51.629 [2024-06-07 16:39:18.441351] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:51.629 [2024-06-07 16:39:18.441375] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f61e0000b90 00:30:51.629 [2024-06-07 16:39:18.441420] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:51.629 qpair failed and we were unable to recover it. 00:30:51.629 [2024-06-07 16:39:18.441575] nvme_ctrlr.c:4341:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Submitting Keep Alive failed 00:30:51.629 A controller has encountered a failure and is being reset. 00:30:51.629 [2024-06-07 16:39:18.441690] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x192ce30 (9): Bad file descriptor 00:30:51.629 Controller properly reset. 
00:30:51.629 Initializing NVMe Controllers 00:30:51.629 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:51.629 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:51.629 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:30:51.629 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:30:51.629 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:30:51.629 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:30:51.629 Initialization complete. Launching workers. 00:30:51.629 Starting thread on core 1 00:30:51.629 Starting thread on core 2 00:30:51.629 Starting thread on core 3 00:30:51.629 Starting thread on core 0 00:30:51.890 16:39:18 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:30:51.890 00:30:51.890 real 0m11.334s 00:30:51.890 user 0m20.979s 00:30:51.890 sys 0m3.804s 00:30:51.890 16:39:18 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1125 -- # xtrace_disable 00:30:51.890 16:39:18 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:51.890 ************************************ 00:30:51.890 END TEST nvmf_target_disconnect_tc2 00:30:51.890 ************************************ 00:30:51.890 16:39:18 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:30:51.890 16:39:18 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:30:51.890 16:39:18 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:30:51.890 16:39:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:30:51.890 16:39:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@117 -- # sync 00:30:51.890 16:39:18 
nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:51.890 16:39:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@120 -- # set +e 00:30:51.890 16:39:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:51.890 16:39:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:51.890 rmmod nvme_tcp 00:30:51.890 rmmod nvme_fabrics 00:30:51.890 rmmod nvme_keyring 00:30:51.890 16:39:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:51.890 16:39:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set -e 00:30:51.890 16:39:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@125 -- # return 0 00:30:51.890 16:39:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@489 -- # '[' -n 3301626 ']' 00:30:51.890 16:39:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@490 -- # killprocess 3301626 00:30:51.890 16:39:18 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@949 -- # '[' -z 3301626 ']' 00:30:51.890 16:39:18 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@953 -- # kill -0 3301626 00:30:51.890 16:39:18 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # uname 00:30:51.890 16:39:18 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:30:51.890 16:39:18 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3301626 00:30:51.890 16:39:18 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@955 -- # process_name=reactor_4 00:30:51.890 16:39:18 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # '[' reactor_4 = sudo ']' 00:30:51.890 16:39:18 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3301626' 00:30:51.890 killing process with pid 3301626 00:30:51.890 16:39:18 nvmf_tcp.nvmf_target_disconnect -- 
common/autotest_common.sh@968 -- # kill 3301626 00:30:51.890 16:39:18 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@973 -- # wait 3301626 00:30:52.151 16:39:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:30:52.151 16:39:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:30:52.151 16:39:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:30:52.151 16:39:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:52.151 16:39:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:52.151 16:39:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:52.151 16:39:18 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:52.151 16:39:18 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:54.066 16:39:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:30:54.066 00:30:54.066 real 0m21.217s 00:30:54.066 user 0m48.567s 00:30:54.066 sys 0m9.479s 00:30:54.066 16:39:20 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1125 -- # xtrace_disable 00:30:54.066 16:39:20 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:30:54.066 ************************************ 00:30:54.066 END TEST nvmf_target_disconnect 00:30:54.066 ************************************ 00:30:54.066 16:39:20 nvmf_tcp -- nvmf/nvmf.sh@127 -- # timing_exit host 00:30:54.066 16:39:20 nvmf_tcp -- common/autotest_common.sh@729 -- # xtrace_disable 00:30:54.066 16:39:20 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:54.327 16:39:20 nvmf_tcp -- nvmf/nvmf.sh@129 -- # trap - SIGINT SIGTERM EXIT 00:30:54.327 00:30:54.327 real 23m5.990s 00:30:54.327 user 48m13.780s 00:30:54.327 sys 7m20.748s 
00:30:54.327 16:39:20 nvmf_tcp -- common/autotest_common.sh@1125 -- # xtrace_disable 00:30:54.327 16:39:20 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:54.327 ************************************ 00:30:54.327 END TEST nvmf_tcp 00:30:54.327 ************************************ 00:30:54.327 16:39:20 -- spdk/autotest.sh@288 -- # [[ 0 -eq 0 ]] 00:30:54.327 16:39:20 -- spdk/autotest.sh@289 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:30:54.327 16:39:20 -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:30:54.327 16:39:20 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:30:54.327 16:39:20 -- common/autotest_common.sh@10 -- # set +x 00:30:54.327 ************************************ 00:30:54.327 START TEST spdkcli_nvmf_tcp 00:30:54.327 ************************************ 00:30:54.327 16:39:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:30:54.327 * Looking for test storage... 
00:30:54.327 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:30:54.327 16:39:21 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:30:54.327 16:39:21 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:30:54.327 16:39:21 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:30:54.327 16:39:21 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:54.327 16:39:21 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:30:54.327 16:39:21 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:54.327 16:39:21 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:54.327 16:39:21 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:54.327 16:39:21 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:54.327 16:39:21 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:54.327 16:39:21 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:54.327 16:39:21 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:54.327 16:39:21 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:54.327 16:39:21 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:54.327 16:39:21 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:54.327 16:39:21 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:54.327 16:39:21 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:54.327 16:39:21 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:54.327 16:39:21 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:54.327 16:39:21 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:54.327 16:39:21 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:54.327 16:39:21 spdkcli_nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:54.327 16:39:21 spdkcli_nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:54.327 16:39:21 spdkcli_nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:54.327 16:39:21 spdkcli_nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:54.327 16:39:21 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:54.327 16:39:21 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:54.327 16:39:21 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:54.327 16:39:21 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:30:54.327 16:39:21 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:54.327 16:39:21 spdkcli_nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:30:54.327 16:39:21 spdkcli_nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:54.327 16:39:21 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:54.327 16:39:21 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:54.327 16:39:21 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:54.327 16:39:21 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:54.327 16:39:21 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:54.327 16:39:21 spdkcli_nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:54.327 16:39:21 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:54.327 16:39:21 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:30:54.327 16:39:21 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:30:54.327 16:39:21 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 
00:30:54.327 16:39:21 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:30:54.328 16:39:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@723 -- # xtrace_disable 00:30:54.328 16:39:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:54.328 16:39:21 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:30:54.328 16:39:21 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=3303451 00:30:54.328 16:39:21 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 3303451 00:30:54.328 16:39:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@830 -- # '[' -z 3303451 ']' 00:30:54.328 16:39:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:54.328 16:39:21 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:30:54.328 16:39:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@835 -- # local max_retries=100 00:30:54.328 16:39:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:54.328 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:54.328 16:39:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@839 -- # xtrace_disable 00:30:54.328 16:39:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:54.588 [2024-06-07 16:39:21.209532] Starting SPDK v24.09-pre git sha1 5a57befde / DPDK 24.03.0 initialization... 
00:30:54.588 [2024-06-07 16:39:21.209577] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3303451 ] 00:30:54.588 EAL: No free 2048 kB hugepages reported on node 1 00:30:54.588 [2024-06-07 16:39:21.260293] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:30:54.588 [2024-06-07 16:39:21.325785] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:30:54.588 [2024-06-07 16:39:21.325788] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:30:55.158 16:39:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:30:55.158 16:39:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@863 -- # return 0 00:30:55.158 16:39:21 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:30:55.158 16:39:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@729 -- # xtrace_disable 00:30:55.158 16:39:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:55.419 16:39:22 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:30:55.419 16:39:22 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:30:55.419 16:39:22 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:30:55.419 16:39:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@723 -- # xtrace_disable 00:30:55.419 16:39:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:55.419 16:39:22 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:30:55.419 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:30:55.419 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:30:55.419 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 
00:30:55.419 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:30:55.419 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:30:55.419 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:30:55.419 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:30:55.419 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:30:55.419 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:30:55.419 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:30:55.419 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:30:55.419 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:30:55.419 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:30:55.419 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:30:55.419 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:30:55.419 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:30:55.419 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:30:55.419 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:30:55.419 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create 
nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:30:55.419 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:30:55.419 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:30:55.419 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:30:55.419 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:30:55.419 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:30:55.419 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:30:55.419 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:30:55.419 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:30:55.419 ' 00:30:57.979 [2024-06-07 16:39:24.351093] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:58.921 [2024-06-07 16:39:25.514761] tcp.c: 982:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:31:00.834 [2024-06-07 16:39:27.653032] tcp.c: 982:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:31:02.748 [2024-06-07 16:39:29.486508] tcp.c: 982:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:31:04.132 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:31:04.132 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:31:04.132 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:31:04.132 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:31:04.132 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 
00:31:04.132 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:31:04.132 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:31:04.132 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:31:04.132 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:31:04.132 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:31:04.132 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:31:04.132 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:31:04.132 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:31:04.132 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:31:04.132 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:31:04.132 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:31:04.132 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:31:04.132 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:31:04.132 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 
00:31:04.132 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:31:04.132 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:31:04.132 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:31:04.133 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:31:04.133 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:31:04.133 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:31:04.133 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:31:04.133 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:31:04.133 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:31:04.392 16:39:31 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:31:04.392 16:39:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@729 -- # xtrace_disable 00:31:04.392 16:39:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:04.392 16:39:31 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:31:04.392 16:39:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@723 -- # xtrace_disable 00:31:04.392 16:39:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:04.392 16:39:31 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:31:04.392 16:39:31 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:31:04.652 
16:39:31 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:31:04.652 16:39:31 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:31:04.652 16:39:31 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:31:04.652 16:39:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@729 -- # xtrace_disable 00:31:04.652 16:39:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:04.652 16:39:31 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:31:04.652 16:39:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@723 -- # xtrace_disable 00:31:04.652 16:39:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:04.652 16:39:31 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:31:04.652 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:31:04.652 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:31:04.652 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:31:04.652 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:31:04.652 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:31:04.652 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:31:04.652 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:31:04.652 
'\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:31:04.652 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:31:04.652 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:31:04.652 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:31:04.652 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:31:04.652 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:31:04.652 ' 00:31:09.939 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:31:09.939 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:31:09.939 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:31:09.939 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:31:09.939 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:31:09.939 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:31:09.939 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:31:09.939 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:31:09.939 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:31:09.939 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:31:09.939 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:31:09.939 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:31:09.939 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:31:09.939 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:31:09.939 16:39:36 spdkcli_nvmf_tcp -- 
spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:31:09.939 16:39:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@729 -- # xtrace_disable 00:31:09.939 16:39:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:09.939 16:39:36 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 3303451 00:31:09.939 16:39:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@949 -- # '[' -z 3303451 ']' 00:31:09.939 16:39:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # kill -0 3303451 00:31:09.939 16:39:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # uname 00:31:09.939 16:39:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:31:09.939 16:39:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3303451 00:31:09.939 16:39:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:31:09.939 16:39:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:31:09.939 16:39:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3303451' 00:31:09.939 killing process with pid 3303451 00:31:09.939 16:39:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@968 -- # kill 3303451 00:31:09.939 16:39:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@973 -- # wait 3303451 00:31:09.939 16:39:36 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:31:09.939 16:39:36 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:31:09.939 16:39:36 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 3303451 ']' 00:31:09.939 16:39:36 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 3303451 00:31:09.939 16:39:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@949 -- # '[' -z 3303451 ']' 00:31:09.939 16:39:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # kill -0 3303451 00:31:09.939 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 953: kill: (3303451) - No such 
process 00:31:09.939 16:39:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@976 -- # echo 'Process with pid 3303451 is not found' 00:31:09.939 Process with pid 3303451 is not found 00:31:09.939 16:39:36 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:31:09.939 16:39:36 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:31:09.939 16:39:36 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:31:09.939 00:31:09.939 real 0m15.546s 00:31:09.939 user 0m32.031s 00:31:09.939 sys 0m0.694s 00:31:09.939 16:39:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@1125 -- # xtrace_disable 00:31:09.940 16:39:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:09.940 ************************************ 00:31:09.940 END TEST spdkcli_nvmf_tcp 00:31:09.940 ************************************ 00:31:09.940 16:39:36 -- spdk/autotest.sh@290 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:31:09.940 16:39:36 -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:31:09.940 16:39:36 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:31:09.940 16:39:36 -- common/autotest_common.sh@10 -- # set +x 00:31:09.940 ************************************ 00:31:09.940 START TEST nvmf_identify_passthru 00:31:09.940 ************************************ 00:31:09.940 16:39:36 nvmf_identify_passthru -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:31:09.940 * Looking for test storage... 
00:31:09.940 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:09.940 16:39:36 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:09.940 16:39:36 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:31:09.940 16:39:36 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:09.940 16:39:36 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:09.940 16:39:36 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:09.940 16:39:36 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:09.940 16:39:36 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:09.940 16:39:36 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:09.940 16:39:36 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:09.940 16:39:36 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:09.940 16:39:36 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:09.940 16:39:36 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:09.940 16:39:36 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:31:09.940 16:39:36 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:31:09.940 16:39:36 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:09.940 16:39:36 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:09.940 16:39:36 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:09.940 16:39:36 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:09.940 
16:39:36 nvmf_identify_passthru -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:09.940 16:39:36 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:09.940 16:39:36 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:09.940 16:39:36 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:09.940 16:39:36 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:09.940 16:39:36 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:09.940 16:39:36 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:09.940 16:39:36 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:31:09.940 16:39:36 nvmf_identify_passthru -- 
paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:09.940 16:39:36 nvmf_identify_passthru -- nvmf/common.sh@47 -- # : 0 00:31:09.940 16:39:36 nvmf_identify_passthru -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:09.940 16:39:36 nvmf_identify_passthru -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:09.940 16:39:36 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:09.940 16:39:36 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:09.940 16:39:36 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:09.940 16:39:36 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:09.940 16:39:36 nvmf_identify_passthru -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:09.940 16:39:36 nvmf_identify_passthru -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:09.940 16:39:36 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:09.940 16:39:36 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:09.940 16:39:36 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:09.940 16:39:36 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:09.940 16:39:36 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:09.940 16:39:36 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:09.940 16:39:36 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:09.940 16:39:36 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:31:09.940 16:39:36 nvmf_identify_passthru -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:09.940 16:39:36 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:31:09.940 16:39:36 nvmf_identify_passthru -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:31:09.940 16:39:36 nvmf_identify_passthru -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:09.940 16:39:36 nvmf_identify_passthru -- nvmf/common.sh@448 -- # prepare_net_devs 00:31:09.940 16:39:36 nvmf_identify_passthru -- nvmf/common.sh@410 -- # local -g is_hw=no 00:31:09.940 16:39:36 nvmf_identify_passthru -- nvmf/common.sh@412 -- # remove_spdk_ns 00:31:09.940 16:39:36 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:09.940 16:39:36 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:31:09.940 16:39:36 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:09.940 16:39:36 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:31:09.940 16:39:36 nvmf_identify_passthru -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:31:09.940 16:39:36 nvmf_identify_passthru -- nvmf/common.sh@285 -- # xtrace_disable 00:31:09.940 16:39:36 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:16.540 16:39:43 nvmf_identify_passthru -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:16.540 16:39:43 nvmf_identify_passthru -- nvmf/common.sh@291 -- # pci_devs=() 00:31:16.540 16:39:43 nvmf_identify_passthru -- nvmf/common.sh@291 
-- # local -a pci_devs 00:31:16.540 16:39:43 nvmf_identify_passthru -- nvmf/common.sh@292 -- # pci_net_devs=() 00:31:16.540 16:39:43 nvmf_identify_passthru -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:31:16.540 16:39:43 nvmf_identify_passthru -- nvmf/common.sh@293 -- # pci_drivers=() 00:31:16.540 16:39:43 nvmf_identify_passthru -- nvmf/common.sh@293 -- # local -A pci_drivers 00:31:16.540 16:39:43 nvmf_identify_passthru -- nvmf/common.sh@295 -- # net_devs=() 00:31:16.540 16:39:43 nvmf_identify_passthru -- nvmf/common.sh@295 -- # local -ga net_devs 00:31:16.540 16:39:43 nvmf_identify_passthru -- nvmf/common.sh@296 -- # e810=() 00:31:16.540 16:39:43 nvmf_identify_passthru -- nvmf/common.sh@296 -- # local -ga e810 00:31:16.540 16:39:43 nvmf_identify_passthru -- nvmf/common.sh@297 -- # x722=() 00:31:16.540 16:39:43 nvmf_identify_passthru -- nvmf/common.sh@297 -- # local -ga x722 00:31:16.540 16:39:43 nvmf_identify_passthru -- nvmf/common.sh@298 -- # mlx=() 00:31:16.540 16:39:43 nvmf_identify_passthru -- nvmf/common.sh@298 -- # local -ga mlx 00:31:16.540 16:39:43 nvmf_identify_passthru -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:16.540 16:39:43 nvmf_identify_passthru -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:16.540 16:39:43 nvmf_identify_passthru -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:16.540 16:39:43 nvmf_identify_passthru -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:16.540 16:39:43 nvmf_identify_passthru -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:16.540 16:39:43 nvmf_identify_passthru -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:16.540 16:39:43 nvmf_identify_passthru -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:16.540 16:39:43 nvmf_identify_passthru -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:16.540 
16:39:43 nvmf_identify_passthru -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:16.540 16:39:43 nvmf_identify_passthru -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:16.540 16:39:43 nvmf_identify_passthru -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:16.540 16:39:43 nvmf_identify_passthru -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:31:16.540 16:39:43 nvmf_identify_passthru -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:31:16.540 16:39:43 nvmf_identify_passthru -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:31:16.540 16:39:43 nvmf_identify_passthru -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:31:16.540 16:39:43 nvmf_identify_passthru -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:31:16.540 16:39:43 nvmf_identify_passthru -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:31:16.540 16:39:43 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:16.540 16:39:43 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:31:16.540 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:31:16.540 16:39:43 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:16.540 16:39:43 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:16.540 16:39:43 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:16.540 16:39:43 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:16.540 16:39:43 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:16.540 16:39:43 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:16.540 16:39:43 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:31:16.540 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:31:16.540 16:39:43 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 
00:31:16.540 16:39:43 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:16.540 16:39:43 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:16.540 16:39:43 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:16.540 16:39:43 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:16.540 16:39:43 nvmf_identify_passthru -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:31:16.540 16:39:43 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:31:16.540 16:39:43 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:31:16.540 16:39:43 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:16.540 16:39:43 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:16.540 16:39:43 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:16.540 16:39:43 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:16.540 16:39:43 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:16.540 16:39:43 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:16.540 16:39:43 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:16.540 16:39:43 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:31:16.540 Found net devices under 0000:4b:00.0: cvl_0_0 00:31:16.540 16:39:43 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:16.540 16:39:43 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:16.540 16:39:43 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:16.540 16:39:43 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:16.540 16:39:43 nvmf_identify_passthru -- 
nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:16.540 16:39:43 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:16.540 16:39:43 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:16.540 16:39:43 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:16.540 16:39:43 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:31:16.540 Found net devices under 0000:4b:00.1: cvl_0_1 00:31:16.540 16:39:43 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:16.540 16:39:43 nvmf_identify_passthru -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:31:16.540 16:39:43 nvmf_identify_passthru -- nvmf/common.sh@414 -- # is_hw=yes 00:31:16.540 16:39:43 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:31:16.540 16:39:43 nvmf_identify_passthru -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:31:16.540 16:39:43 nvmf_identify_passthru -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:31:16.540 16:39:43 nvmf_identify_passthru -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:16.540 16:39:43 nvmf_identify_passthru -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:16.540 16:39:43 nvmf_identify_passthru -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:16.540 16:39:43 nvmf_identify_passthru -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:31:16.540 16:39:43 nvmf_identify_passthru -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:16.540 16:39:43 nvmf_identify_passthru -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:16.540 16:39:43 nvmf_identify_passthru -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:31:16.540 16:39:43 nvmf_identify_passthru -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:16.540 16:39:43 nvmf_identify_passthru -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:31:16.540 16:39:43 nvmf_identify_passthru -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:31:16.540 16:39:43 nvmf_identify_passthru -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:31:16.540 16:39:43 nvmf_identify_passthru -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:31:16.540 16:39:43 nvmf_identify_passthru -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:16.801 16:39:43 nvmf_identify_passthru -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:16.801 16:39:43 nvmf_identify_passthru -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:16.801 16:39:43 nvmf_identify_passthru -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:31:16.801 16:39:43 nvmf_identify_passthru -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:16.801 16:39:43 nvmf_identify_passthru -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:16.801 16:39:43 nvmf_identify_passthru -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:16.801 16:39:43 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:31:16.801 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:16.801 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.584 ms 00:31:16.801 00:31:16.801 --- 10.0.0.2 ping statistics --- 00:31:16.801 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:16.801 rtt min/avg/max/mdev = 0.584/0.584/0.584/0.000 ms 00:31:16.801 16:39:43 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:16.801 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:16.801 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.370 ms 00:31:16.801 00:31:16.801 --- 10.0.0.1 ping statistics --- 00:31:16.801 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:16.801 rtt min/avg/max/mdev = 0.370/0.370/0.370/0.000 ms 00:31:16.801 16:39:43 nvmf_identify_passthru -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:16.801 16:39:43 nvmf_identify_passthru -- nvmf/common.sh@422 -- # return 0 00:31:16.801 16:39:43 nvmf_identify_passthru -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:31:16.801 16:39:43 nvmf_identify_passthru -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:16.801 16:39:43 nvmf_identify_passthru -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:31:16.801 16:39:43 nvmf_identify_passthru -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:31:16.801 16:39:43 nvmf_identify_passthru -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:16.801 16:39:43 nvmf_identify_passthru -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:31:16.801 16:39:43 nvmf_identify_passthru -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:31:17.060 16:39:43 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:31:17.060 16:39:43 nvmf_identify_passthru -- common/autotest_common.sh@723 -- # xtrace_disable 00:31:17.060 16:39:43 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:17.060 16:39:43 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:31:17.060 16:39:43 nvmf_identify_passthru -- common/autotest_common.sh@1523 -- # bdfs=() 00:31:17.060 16:39:43 nvmf_identify_passthru -- common/autotest_common.sh@1523 -- # local bdfs 00:31:17.060 16:39:43 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # bdfs=($(get_nvme_bdfs)) 00:31:17.060 16:39:43 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # get_nvme_bdfs 00:31:17.060 16:39:43 nvmf_identify_passthru -- 
common/autotest_common.sh@1512 -- # bdfs=() 00:31:17.060 16:39:43 nvmf_identify_passthru -- common/autotest_common.sh@1512 -- # local bdfs 00:31:17.060 16:39:43 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:31:17.060 16:39:43 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:31:17.060 16:39:43 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # jq -r '.config[].params.traddr' 00:31:17.060 16:39:43 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # (( 1 == 0 )) 00:31:17.060 16:39:43 nvmf_identify_passthru -- common/autotest_common.sh@1518 -- # printf '%s\n' 0000:65:00.0 00:31:17.060 16:39:43 nvmf_identify_passthru -- common/autotest_common.sh@1526 -- # echo 0000:65:00.0 00:31:17.060 16:39:43 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:65:00.0 00:31:17.060 16:39:43 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:65:00.0 ']' 00:31:17.060 16:39:43 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 00:31:17.060 16:39:43 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:31:17.060 16:39:43 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:31:17.060 EAL: No free 2048 kB hugepages reported on node 1 00:31:17.630 16:39:44 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=S64GNE0R605487 00:31:17.630 16:39:44 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 00:31:17.630 16:39:44 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 
00:31:17.630 16:39:44 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:31:17.630 EAL: No free 2048 kB hugepages reported on node 1 00:31:17.891 16:39:44 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=SAMSUNG 00:31:17.891 16:39:44 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:31:17.891 16:39:44 nvmf_identify_passthru -- common/autotest_common.sh@729 -- # xtrace_disable 00:31:17.891 16:39:44 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:18.151 16:39:44 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:31:18.152 16:39:44 nvmf_identify_passthru -- common/autotest_common.sh@723 -- # xtrace_disable 00:31:18.152 16:39:44 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:18.152 16:39:44 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=3310195 00:31:18.152 16:39:44 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:18.152 16:39:44 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:31:18.152 16:39:44 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 3310195 00:31:18.152 16:39:44 nvmf_identify_passthru -- common/autotest_common.sh@830 -- # '[' -z 3310195 ']' 00:31:18.152 16:39:44 nvmf_identify_passthru -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:18.152 16:39:44 nvmf_identify_passthru -- common/autotest_common.sh@835 -- # local max_retries=100 00:31:18.152 16:39:44 nvmf_identify_passthru -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:31:18.152 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:18.152 16:39:44 nvmf_identify_passthru -- common/autotest_common.sh@839 -- # xtrace_disable 00:31:18.152 16:39:44 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:18.152 [2024-06-07 16:39:44.815143] Starting SPDK v24.09-pre git sha1 5a57befde / DPDK 24.03.0 initialization... 00:31:18.152 [2024-06-07 16:39:44.815194] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:18.152 EAL: No free 2048 kB hugepages reported on node 1 00:31:18.152 [2024-06-07 16:39:44.880916] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:18.152 [2024-06-07 16:39:44.948537] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:18.152 [2024-06-07 16:39:44.948572] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:18.152 [2024-06-07 16:39:44.948580] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:18.152 [2024-06-07 16:39:44.948586] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:18.152 [2024-06-07 16:39:44.948592] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:31:18.152 [2024-06-07 16:39:44.948726] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:31:18.152 [2024-06-07 16:39:44.948846] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 2 00:31:18.152 [2024-06-07 16:39:44.949002] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:31:18.152 [2024-06-07 16:39:44.949003] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 3 00:31:18.723 16:39:45 nvmf_identify_passthru -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:31:18.723 16:39:45 nvmf_identify_passthru -- common/autotest_common.sh@863 -- # return 0 00:31:18.723 16:39:45 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:31:18.723 16:39:45 nvmf_identify_passthru -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:18.723 16:39:45 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:18.723 INFO: Log level set to 20 00:31:18.723 INFO: Requests: 00:31:18.723 { 00:31:18.723 "jsonrpc": "2.0", 00:31:18.723 "method": "nvmf_set_config", 00:31:18.723 "id": 1, 00:31:18.723 "params": { 00:31:18.723 "admin_cmd_passthru": { 00:31:18.723 "identify_ctrlr": true 00:31:18.723 } 00:31:18.723 } 00:31:18.723 } 00:31:18.723 00:31:18.983 INFO: response: 00:31:18.983 { 00:31:18.983 "jsonrpc": "2.0", 00:31:18.983 "id": 1, 00:31:18.983 "result": true 00:31:18.983 } 00:31:18.983 00:31:18.983 16:39:45 nvmf_identify_passthru -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:18.983 16:39:45 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:31:18.983 16:39:45 nvmf_identify_passthru -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:18.983 16:39:45 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:18.983 INFO: Setting log level to 20 00:31:18.983 INFO: Setting log level to 20 00:31:18.983 INFO: Log level set to 20 00:31:18.983 INFO: Log level set to 20 00:31:18.983 
INFO: Requests: 00:31:18.983 { 00:31:18.983 "jsonrpc": "2.0", 00:31:18.983 "method": "framework_start_init", 00:31:18.983 "id": 1 00:31:18.983 } 00:31:18.983 00:31:18.983 INFO: Requests: 00:31:18.983 { 00:31:18.983 "jsonrpc": "2.0", 00:31:18.983 "method": "framework_start_init", 00:31:18.983 "id": 1 00:31:18.983 } 00:31:18.983 00:31:18.983 [2024-06-07 16:39:45.649824] nvmf_tgt.c: 451:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:31:18.983 INFO: response: 00:31:18.983 { 00:31:18.983 "jsonrpc": "2.0", 00:31:18.983 "id": 1, 00:31:18.983 "result": true 00:31:18.983 } 00:31:18.983 00:31:18.983 INFO: response: 00:31:18.983 { 00:31:18.983 "jsonrpc": "2.0", 00:31:18.983 "id": 1, 00:31:18.983 "result": true 00:31:18.983 } 00:31:18.983 00:31:18.983 16:39:45 nvmf_identify_passthru -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:18.983 16:39:45 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:18.983 16:39:45 nvmf_identify_passthru -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:18.984 16:39:45 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:18.984 INFO: Setting log level to 40 00:31:18.984 INFO: Setting log level to 40 00:31:18.984 INFO: Setting log level to 40 00:31:18.984 [2024-06-07 16:39:45.663058] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:18.984 16:39:45 nvmf_identify_passthru -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:18.984 16:39:45 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:31:18.984 16:39:45 nvmf_identify_passthru -- common/autotest_common.sh@729 -- # xtrace_disable 00:31:18.984 16:39:45 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:18.984 16:39:45 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:65:00.0 00:31:18.984 16:39:45 
nvmf_identify_passthru -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:18.984 16:39:45 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:19.244 Nvme0n1 00:31:19.244 16:39:46 nvmf_identify_passthru -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:19.244 16:39:46 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:31:19.244 16:39:46 nvmf_identify_passthru -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:19.244 16:39:46 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:19.244 16:39:46 nvmf_identify_passthru -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:19.244 16:39:46 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:31:19.244 16:39:46 nvmf_identify_passthru -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:19.244 16:39:46 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:19.244 16:39:46 nvmf_identify_passthru -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:19.244 16:39:46 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:19.244 16:39:46 nvmf_identify_passthru -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:19.244 16:39:46 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:19.244 [2024-06-07 16:39:46.045706] tcp.c: 982:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:19.244 16:39:46 nvmf_identify_passthru -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:19.244 16:39:46 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:31:19.244 16:39:46 nvmf_identify_passthru -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:19.244 16:39:46 
nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:19.244 [ 00:31:19.244 { 00:31:19.244 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:31:19.244 "subtype": "Discovery", 00:31:19.244 "listen_addresses": [], 00:31:19.244 "allow_any_host": true, 00:31:19.244 "hosts": [] 00:31:19.244 }, 00:31:19.244 { 00:31:19.244 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:31:19.244 "subtype": "NVMe", 00:31:19.244 "listen_addresses": [ 00:31:19.244 { 00:31:19.244 "trtype": "TCP", 00:31:19.244 "adrfam": "IPv4", 00:31:19.244 "traddr": "10.0.0.2", 00:31:19.244 "trsvcid": "4420" 00:31:19.244 } 00:31:19.244 ], 00:31:19.244 "allow_any_host": true, 00:31:19.244 "hosts": [], 00:31:19.244 "serial_number": "SPDK00000000000001", 00:31:19.244 "model_number": "SPDK bdev Controller", 00:31:19.244 "max_namespaces": 1, 00:31:19.244 "min_cntlid": 1, 00:31:19.244 "max_cntlid": 65519, 00:31:19.244 "namespaces": [ 00:31:19.244 { 00:31:19.244 "nsid": 1, 00:31:19.244 "bdev_name": "Nvme0n1", 00:31:19.244 "name": "Nvme0n1", 00:31:19.244 "nguid": "3634473052605487002538450000003E", 00:31:19.244 "uuid": "36344730-5260-5487-0025-38450000003e" 00:31:19.244 } 00:31:19.244 ] 00:31:19.244 } 00:31:19.244 ] 00:31:19.244 16:39:46 nvmf_identify_passthru -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:19.244 16:39:46 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:31:19.244 16:39:46 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:31:19.244 16:39:46 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:31:19.505 EAL: No free 2048 kB hugepages reported on node 1 00:31:19.505 16:39:46 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=S64GNE0R605487 00:31:19.505 16:39:46 nvmf_identify_passthru -- 
target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:31:19.505 16:39:46 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:31:19.505 16:39:46 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:31:19.505 EAL: No free 2048 kB hugepages reported on node 1 00:31:19.767 16:39:46 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=SAMSUNG 00:31:19.767 16:39:46 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' S64GNE0R605487 '!=' S64GNE0R605487 ']' 00:31:19.767 16:39:46 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' SAMSUNG '!=' SAMSUNG ']' 00:31:19.767 16:39:46 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:19.767 16:39:46 nvmf_identify_passthru -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:19.767 16:39:46 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:19.767 16:39:46 nvmf_identify_passthru -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:19.767 16:39:46 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:31:19.767 16:39:46 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:31:19.767 16:39:46 nvmf_identify_passthru -- nvmf/common.sh@488 -- # nvmfcleanup 00:31:19.767 16:39:46 nvmf_identify_passthru -- nvmf/common.sh@117 -- # sync 00:31:19.767 16:39:46 nvmf_identify_passthru -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:31:19.767 16:39:46 nvmf_identify_passthru -- nvmf/common.sh@120 -- # set +e 00:31:19.767 16:39:46 nvmf_identify_passthru -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:19.767 16:39:46 nvmf_identify_passthru -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:31:19.767 rmmod 
nvme_tcp 00:31:19.767 rmmod nvme_fabrics 00:31:19.767 rmmod nvme_keyring 00:31:19.767 16:39:46 nvmf_identify_passthru -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:31:19.767 16:39:46 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set -e 00:31:19.767 16:39:46 nvmf_identify_passthru -- nvmf/common.sh@125 -- # return 0 00:31:19.767 16:39:46 nvmf_identify_passthru -- nvmf/common.sh@489 -- # '[' -n 3310195 ']' 00:31:19.767 16:39:46 nvmf_identify_passthru -- nvmf/common.sh@490 -- # killprocess 3310195 00:31:19.767 16:39:46 nvmf_identify_passthru -- common/autotest_common.sh@949 -- # '[' -z 3310195 ']' 00:31:19.767 16:39:46 nvmf_identify_passthru -- common/autotest_common.sh@953 -- # kill -0 3310195 00:31:19.767 16:39:46 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # uname 00:31:19.767 16:39:46 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:31:19.767 16:39:46 nvmf_identify_passthru -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3310195 00:31:20.028 16:39:46 nvmf_identify_passthru -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:31:20.028 16:39:46 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:31:20.028 16:39:46 nvmf_identify_passthru -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3310195' 00:31:20.028 killing process with pid 3310195 00:31:20.028 16:39:46 nvmf_identify_passthru -- common/autotest_common.sh@968 -- # kill 3310195 00:31:20.028 16:39:46 nvmf_identify_passthru -- common/autotest_common.sh@973 -- # wait 3310195 00:31:20.289 16:39:46 nvmf_identify_passthru -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:31:20.289 16:39:46 nvmf_identify_passthru -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:31:20.289 16:39:46 nvmf_identify_passthru -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:31:20.289 16:39:46 nvmf_identify_passthru -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 
00:31:20.289 16:39:46 nvmf_identify_passthru -- nvmf/common.sh@278 -- # remove_spdk_ns 00:31:20.289 16:39:46 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:20.289 16:39:46 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:31:20.289 16:39:46 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:22.203 16:39:48 nvmf_identify_passthru -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:31:22.203 00:31:22.203 real 0m12.345s 00:31:22.203 user 0m10.008s 00:31:22.203 sys 0m5.865s 00:31:22.204 16:39:48 nvmf_identify_passthru -- common/autotest_common.sh@1125 -- # xtrace_disable 00:31:22.204 16:39:48 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:22.204 ************************************ 00:31:22.204 END TEST nvmf_identify_passthru 00:31:22.204 ************************************ 00:31:22.204 16:39:49 -- spdk/autotest.sh@292 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:31:22.204 16:39:49 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:31:22.204 16:39:49 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:31:22.204 16:39:49 -- common/autotest_common.sh@10 -- # set +x 00:31:22.468 ************************************ 00:31:22.468 START TEST nvmf_dif 00:31:22.468 ************************************ 00:31:22.468 16:39:49 nvmf_dif -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:31:22.469 * Looking for test storage... 
00:31:22.469 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:22.469 16:39:49 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:22.469 16:39:49 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:31:22.469 16:39:49 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:22.469 16:39:49 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:22.469 16:39:49 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:22.469 16:39:49 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:22.469 16:39:49 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:22.469 16:39:49 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:22.469 16:39:49 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:22.469 16:39:49 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:22.469 16:39:49 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:22.469 16:39:49 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:22.469 16:39:49 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:31:22.469 16:39:49 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:31:22.469 16:39:49 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:22.469 16:39:49 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:22.469 16:39:49 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:22.469 16:39:49 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:22.469 16:39:49 nvmf_dif -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:22.469 16:39:49 nvmf_dif -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:22.469 16:39:49 nvmf_dif -- scripts/common.sh@516 -- # 
[[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:22.469 16:39:49 nvmf_dif -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:22.469 16:39:49 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:22.469 16:39:49 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:22.469 16:39:49 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:22.469 16:39:49 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:31:22.469 16:39:49 nvmf_dif -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:22.469 16:39:49 nvmf_dif -- nvmf/common.sh@47 -- # : 0 00:31:22.469 16:39:49 nvmf_dif -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:22.469 16:39:49 nvmf_dif -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:22.469 16:39:49 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:22.469 16:39:49 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:22.469 16:39:49 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:22.469 16:39:49 nvmf_dif -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:22.469 16:39:49 nvmf_dif -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:22.469 16:39:49 nvmf_dif -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:22.469 16:39:49 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:31:22.469 16:39:49 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:31:22.469 16:39:49 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:31:22.469 16:39:49 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:31:22.469 16:39:49 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:31:22.469 16:39:49 nvmf_dif -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:31:22.469 16:39:49 nvmf_dif -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:22.469 16:39:49 nvmf_dif -- nvmf/common.sh@448 -- # prepare_net_devs 00:31:22.469 16:39:49 nvmf_dif -- nvmf/common.sh@410 -- # local -g is_hw=no 00:31:22.469 16:39:49 nvmf_dif -- nvmf/common.sh@412 -- # remove_spdk_ns 00:31:22.469 16:39:49 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:22.469 16:39:49 nvmf_dif -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:31:22.469 16:39:49 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:22.469 16:39:49 nvmf_dif -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:31:22.469 16:39:49 nvmf_dif -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:31:22.469 16:39:49 nvmf_dif -- nvmf/common.sh@285 -- # xtrace_disable 00:31:22.469 16:39:49 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:31:30.676 16:39:55 nvmf_dif -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:30.676 16:39:55 nvmf_dif -- nvmf/common.sh@291 -- # pci_devs=() 00:31:30.676 16:39:55 nvmf_dif -- nvmf/common.sh@291 -- # local -a pci_devs 00:31:30.676 16:39:55 nvmf_dif -- nvmf/common.sh@292 -- # pci_net_devs=() 00:31:30.676 16:39:55 nvmf_dif -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:31:30.676 16:39:55 nvmf_dif -- nvmf/common.sh@293 -- # pci_drivers=() 00:31:30.676 16:39:55 nvmf_dif -- nvmf/common.sh@293 -- # local -A pci_drivers 00:31:30.676 16:39:55 nvmf_dif -- nvmf/common.sh@295 -- # net_devs=() 00:31:30.676 16:39:55 nvmf_dif -- nvmf/common.sh@295 -- # local -ga net_devs 00:31:30.676 16:39:55 nvmf_dif -- nvmf/common.sh@296 -- # e810=() 00:31:30.676 16:39:55 nvmf_dif -- nvmf/common.sh@296 -- # local -ga e810 00:31:30.676 16:39:55 nvmf_dif -- nvmf/common.sh@297 -- # x722=() 00:31:30.676 16:39:55 nvmf_dif -- nvmf/common.sh@297 -- # local -ga x722 00:31:30.676 16:39:55 nvmf_dif -- nvmf/common.sh@298 -- # mlx=() 00:31:30.676 16:39:55 nvmf_dif -- nvmf/common.sh@298 -- # local -ga mlx 00:31:30.676 16:39:55 nvmf_dif -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:30.676 16:39:55 nvmf_dif -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:30.676 16:39:55 nvmf_dif -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:30.676 16:39:55 nvmf_dif -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 
00:31:30.676 16:39:55 nvmf_dif -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:30.676 16:39:55 nvmf_dif -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:30.676 16:39:56 nvmf_dif -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:30.676 16:39:56 nvmf_dif -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:30.676 16:39:56 nvmf_dif -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:30.676 16:39:56 nvmf_dif -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:30.676 16:39:56 nvmf_dif -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:30.676 16:39:56 nvmf_dif -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:31:30.676 16:39:56 nvmf_dif -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:31:30.676 16:39:56 nvmf_dif -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:31:30.676 16:39:56 nvmf_dif -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:31:30.676 16:39:56 nvmf_dif -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:31:30.676 16:39:56 nvmf_dif -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:31:30.676 16:39:56 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:30.676 16:39:56 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:31:30.676 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:31:30.676 16:39:56 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:30.676 16:39:56 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:30.676 16:39:56 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:30.676 16:39:56 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:30.676 16:39:56 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:30.676 16:39:56 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:30.676 16:39:56 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 
(0x8086 - 0x159b)' 00:31:30.676 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:31:30.676 16:39:56 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:30.676 16:39:56 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:30.676 16:39:56 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:30.676 16:39:56 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:30.676 16:39:56 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:30.676 16:39:56 nvmf_dif -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:31:30.676 16:39:56 nvmf_dif -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:31:30.676 16:39:56 nvmf_dif -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:31:30.676 16:39:56 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:30.676 16:39:56 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:30.676 16:39:56 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:30.676 16:39:56 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:30.676 16:39:56 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:30.676 16:39:56 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:30.676 16:39:56 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:30.676 16:39:56 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:31:30.676 Found net devices under 0000:4b:00.0: cvl_0_0 00:31:30.676 16:39:56 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:30.676 16:39:56 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:30.676 16:39:56 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:30.676 16:39:56 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:30.676 16:39:56 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:30.676 16:39:56 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up 
]] 00:31:30.676 16:39:56 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:30.676 16:39:56 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:30.676 16:39:56 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:31:30.676 Found net devices under 0000:4b:00.1: cvl_0_1 00:31:30.676 16:39:56 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:30.676 16:39:56 nvmf_dif -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:31:30.676 16:39:56 nvmf_dif -- nvmf/common.sh@414 -- # is_hw=yes 00:31:30.676 16:39:56 nvmf_dif -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:31:30.676 16:39:56 nvmf_dif -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:31:30.676 16:39:56 nvmf_dif -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:31:30.676 16:39:56 nvmf_dif -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:30.676 16:39:56 nvmf_dif -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:30.676 16:39:56 nvmf_dif -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:30.676 16:39:56 nvmf_dif -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:31:30.676 16:39:56 nvmf_dif -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:30.676 16:39:56 nvmf_dif -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:30.676 16:39:56 nvmf_dif -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:31:30.676 16:39:56 nvmf_dif -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:30.676 16:39:56 nvmf_dif -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:30.676 16:39:56 nvmf_dif -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:31:30.676 16:39:56 nvmf_dif -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:31:30.676 16:39:56 nvmf_dif -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:31:30.676 16:39:56 nvmf_dif -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:30.676 16:39:56 nvmf_dif -- 
nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:30.676 16:39:56 nvmf_dif -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:30.676 16:39:56 nvmf_dif -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:31:30.676 16:39:56 nvmf_dif -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:30.676 16:39:56 nvmf_dif -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:30.676 16:39:56 nvmf_dif -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:30.676 16:39:56 nvmf_dif -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:31:30.676 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:30.676 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.589 ms 00:31:30.676 00:31:30.676 --- 10.0.0.2 ping statistics --- 00:31:30.676 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:30.676 rtt min/avg/max/mdev = 0.589/0.589/0.589/0.000 ms 00:31:30.676 16:39:56 nvmf_dif -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:30.676 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:30.676 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.385 ms 00:31:30.676 00:31:30.676 --- 10.0.0.1 ping statistics --- 00:31:30.676 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:30.676 rtt min/avg/max/mdev = 0.385/0.385/0.385/0.000 ms 00:31:30.676 16:39:56 nvmf_dif -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:30.676 16:39:56 nvmf_dif -- nvmf/common.sh@422 -- # return 0 00:31:30.676 16:39:56 nvmf_dif -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:31:30.676 16:39:56 nvmf_dif -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:31:33.224 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:31:33.224 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:31:33.224 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:31:33.224 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:31:33.224 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:31:33.224 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:31:33.224 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:31:33.224 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:31:33.224 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:31:33.224 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:31:33.224 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:31:33.224 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:31:33.224 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:31:33.224 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:31:33.224 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:31:33.224 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:31:33.224 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:31:33.224 16:39:59 nvmf_dif -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:33.224 16:39:59 
nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:31:33.224 16:39:59 nvmf_dif -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:31:33.224 16:39:59 nvmf_dif -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:33.224 16:39:59 nvmf_dif -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:31:33.224 16:39:59 nvmf_dif -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:31:33.224 16:39:59 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:31:33.224 16:39:59 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:31:33.224 16:39:59 nvmf_dif -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:31:33.224 16:39:59 nvmf_dif -- common/autotest_common.sh@723 -- # xtrace_disable 00:31:33.224 16:39:59 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:31:33.224 16:39:59 nvmf_dif -- nvmf/common.sh@481 -- # nvmfpid=3316162 00:31:33.224 16:39:59 nvmf_dif -- nvmf/common.sh@482 -- # waitforlisten 3316162 00:31:33.224 16:39:59 nvmf_dif -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:31:33.224 16:39:59 nvmf_dif -- common/autotest_common.sh@830 -- # '[' -z 3316162 ']' 00:31:33.224 16:39:59 nvmf_dif -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:33.224 16:39:59 nvmf_dif -- common/autotest_common.sh@835 -- # local max_retries=100 00:31:33.224 16:39:59 nvmf_dif -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:33.224 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:33.224 16:39:59 nvmf_dif -- common/autotest_common.sh@839 -- # xtrace_disable 00:31:33.224 16:39:59 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:31:33.224 [2024-06-07 16:39:59.930573] Starting SPDK v24.09-pre git sha1 5a57befde / DPDK 24.03.0 initialization... 
00:31:33.224 [2024-06-07 16:39:59.930634] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:33.224 EAL: No free 2048 kB hugepages reported on node 1 00:31:33.224 [2024-06-07 16:40:00.003788] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:33.485 [2024-06-07 16:40:00.082567] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:33.485 [2024-06-07 16:40:00.082607] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:33.485 [2024-06-07 16:40:00.082615] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:33.485 [2024-06-07 16:40:00.082622] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:33.485 [2024-06-07 16:40:00.082627] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:31:33.485 [2024-06-07 16:40:00.082649] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:31:34.058 16:40:00 nvmf_dif -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:31:34.058 16:40:00 nvmf_dif -- common/autotest_common.sh@863 -- # return 0 00:31:34.058 16:40:00 nvmf_dif -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:31:34.058 16:40:00 nvmf_dif -- common/autotest_common.sh@729 -- # xtrace_disable 00:31:34.058 16:40:00 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:31:34.058 16:40:00 nvmf_dif -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:34.058 16:40:00 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:31:34.058 16:40:00 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:31:34.058 16:40:00 nvmf_dif -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:34.058 16:40:00 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:31:34.058 [2024-06-07 16:40:00.758238] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:34.058 16:40:00 nvmf_dif -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:34.058 16:40:00 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:31:34.058 16:40:00 nvmf_dif -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:31:34.058 16:40:00 nvmf_dif -- common/autotest_common.sh@1106 -- # xtrace_disable 00:31:34.058 16:40:00 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:31:34.058 ************************************ 00:31:34.058 START TEST fio_dif_1_default 00:31:34.058 ************************************ 00:31:34.058 16:40:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1124 -- # fio_dif_1 00:31:34.058 16:40:00 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:31:34.058 16:40:00 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:31:34.058 16:40:00 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@30 -- # for sub in "$@" 00:31:34.058 16:40:00 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:31:34.058 16:40:00 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:31:34.058 16:40:00 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:31:34.058 16:40:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:34.058 16:40:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:31:34.058 bdev_null0 00:31:34.058 16:40:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:34.058 16:40:00 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:31:34.058 16:40:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:34.058 16:40:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:31:34.058 16:40:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:34.058 16:40:00 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:31:34.058 16:40:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:34.058 16:40:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:31:34.058 16:40:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:34.058 16:40:00 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:34.058 16:40:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:34.058 16:40:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:31:34.058 [2024-06-07 16:40:00.826533] tcp.c: 
982:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:34.058 16:40:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:34.058 16:40:00 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:31:34.058 16:40:00 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:34.058 16:40:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1355 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:34.058 16:40:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1336 -- # local fio_dir=/usr/src/fio 00:31:34.058 16:40:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1338 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:34.058 16:40:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1338 -- # local sanitizers 00:31:34.058 16:40:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:34.058 16:40:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # shift 00:31:34.058 16:40:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1342 -- # local asan_lib= 00:31:34.058 16:40:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:31:34.058 16:40:00 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:31:34.058 16:40:00 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:31:34.058 16:40:00 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:31:34.058 16:40:00 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # config=() 00:31:34.058 16:40:00 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:31:34.058 16:40:00 nvmf_dif.fio_dif_1_default -- 
nvmf/common.sh@532 -- # local subsystem config 00:31:34.058 16:40:00 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:31:34.058 16:40:00 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:34.058 16:40:00 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:34.058 { 00:31:34.058 "params": { 00:31:34.058 "name": "Nvme$subsystem", 00:31:34.058 "trtype": "$TEST_TRANSPORT", 00:31:34.058 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:34.058 "adrfam": "ipv4", 00:31:34.058 "trsvcid": "$NVMF_PORT", 00:31:34.058 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:34.058 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:34.058 "hdgst": ${hdgst:-false}, 00:31:34.058 "ddgst": ${ddgst:-false} 00:31:34.058 }, 00:31:34.058 "method": "bdev_nvme_attach_controller" 00:31:34.058 } 00:31:34.058 EOF 00:31:34.058 )") 00:31:34.058 16:40:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:34.058 16:40:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # grep libasan 00:31:34.058 16:40:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:31:34.058 16:40:00 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # cat 00:31:34.058 16:40:00 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:31:34.058 16:40:00 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:31:34.058 16:40:00 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # jq . 
00:31:34.058 16:40:00 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@557 -- # IFS=, 00:31:34.058 16:40:00 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:31:34.058 "params": { 00:31:34.058 "name": "Nvme0", 00:31:34.058 "trtype": "tcp", 00:31:34.058 "traddr": "10.0.0.2", 00:31:34.058 "adrfam": "ipv4", 00:31:34.058 "trsvcid": "4420", 00:31:34.059 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:34.059 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:34.059 "hdgst": false, 00:31:34.059 "ddgst": false 00:31:34.059 }, 00:31:34.059 "method": "bdev_nvme_attach_controller" 00:31:34.059 }' 00:31:34.059 16:40:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # asan_lib= 00:31:34.059 16:40:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:31:34.059 16:40:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:31:34.059 16:40:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:34.059 16:40:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # grep libclang_rt.asan 00:31:34.059 16:40:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:31:34.059 16:40:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # asan_lib= 00:31:34.059 16:40:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:31:34.059 16:40:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1351 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:31:34.059 16:40:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1351 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:34.651 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:31:34.651 fio-3.35 
00:31:34.651 Starting 1 thread 00:31:34.651 EAL: No free 2048 kB hugepages reported on node 1 00:31:46.871 00:31:46.871 filename0: (groupid=0, jobs=1): err= 0: pid=3316674: Fri Jun 7 16:40:11 2024 00:31:46.871 read: IOPS=185, BW=743KiB/s (760kB/s)(7440KiB/10019msec) 00:31:46.871 slat (nsec): min=5622, max=31272, avg=6385.20, stdev=1367.52 00:31:46.871 clat (usec): min=1049, max=42913, avg=21528.77, stdev=20224.19 00:31:46.871 lat (usec): min=1055, max=42944, avg=21535.16, stdev=20224.18 00:31:46.871 clat percentiles (usec): 00:31:46.871 | 1.00th=[ 1139], 5.00th=[ 1205], 10.00th=[ 1221], 20.00th=[ 1254], 00:31:46.871 | 30.00th=[ 1287], 40.00th=[ 1303], 50.00th=[41681], 60.00th=[41681], 00:31:46.871 | 70.00th=[41681], 80.00th=[41681], 90.00th=[41681], 95.00th=[41681], 00:31:46.871 | 99.00th=[41681], 99.50th=[41681], 99.90th=[42730], 99.95th=[42730], 00:31:46.871 | 99.99th=[42730] 00:31:46.871 bw ( KiB/s): min= 702, max= 768, per=99.92%, avg=742.30, stdev=30.58, samples=20 00:31:46.871 iops : min= 175, max= 192, avg=185.55, stdev= 7.68, samples=20 00:31:46.871 lat (msec) : 2=49.89%, 50=50.11% 00:31:46.871 cpu : usr=95.48%, sys=4.33%, ctx=12, majf=0, minf=221 00:31:46.871 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:46.871 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:46.871 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:46.871 issued rwts: total=1860,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:46.871 latency : target=0, window=0, percentile=100.00%, depth=4 00:31:46.871 00:31:46.871 Run status group 0 (all jobs): 00:31:46.871 READ: bw=743KiB/s (760kB/s), 743KiB/s-743KiB/s (760kB/s-760kB/s), io=7440KiB (7619kB), run=10019-10019msec 00:31:46.871 16:40:11 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:31:46.871 16:40:11 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:31:46.871 16:40:11 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@45 -- # for sub in "$@" 00:31:46.871 16:40:11 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:31:46.871 16:40:11 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:31:46.871 16:40:11 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:46.871 16:40:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:46.871 16:40:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:31:46.871 16:40:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:46.871 16:40:11 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:31:46.871 16:40:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:46.871 16:40:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:31:46.871 16:40:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:46.871 00:31:46.871 real 0m11.115s 00:31:46.871 user 0m23.749s 00:31:46.871 sys 0m0.737s 00:31:46.871 16:40:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1125 -- # xtrace_disable 00:31:46.871 16:40:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:31:46.871 ************************************ 00:31:46.871 END TEST fio_dif_1_default 00:31:46.871 ************************************ 00:31:46.871 16:40:11 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:31:46.871 16:40:11 nvmf_dif -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:31:46.871 16:40:11 nvmf_dif -- common/autotest_common.sh@1106 -- # xtrace_disable 00:31:46.871 16:40:11 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:31:46.871 ************************************ 00:31:46.871 START TEST fio_dif_1_multi_subsystems 00:31:46.871 ************************************ 
00:31:46.871 16:40:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1124 -- # fio_dif_1_multi_subsystems 00:31:46.871 16:40:11 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:31:46.871 16:40:11 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:31:46.871 16:40:11 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:31:46.871 16:40:11 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:31:46.871 16:40:11 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:31:46.871 16:40:11 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:31:46.871 16:40:11 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:31:46.871 16:40:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:46.871 16:40:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:46.871 bdev_null0 00:31:46.871 16:40:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:46.871 16:40:11 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:31:46.871 16:40:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:46.871 16:40:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:46.871 16:40:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:46.871 16:40:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:31:46.871 16:40:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:46.871 
16:40:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:46.871 16:40:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:46.871 16:40:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:46.871 16:40:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:46.871 16:40:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:46.871 [2024-06-07 16:40:12.019296] tcp.c: 982:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:46.871 16:40:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:46.871 16:40:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:31:46.871 16:40:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:31:46.871 16:40:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:31:46.872 16:40:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:31:46.872 16:40:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:46.872 16:40:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:46.872 bdev_null1 00:31:46.872 16:40:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:46.872 16:40:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:31:46.872 16:40:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:46.872 16:40:12 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@10 -- # set +x 00:31:46.872 16:40:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:46.872 16:40:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:31:46.872 16:40:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:46.872 16:40:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:46.872 16:40:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:46.872 16:40:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:46.872 16:40:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:46.872 16:40:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:46.872 16:40:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:46.872 16:40:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:31:46.872 16:40:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:46.872 16:40:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1355 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:46.872 16:40:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1336 -- # local fio_dir=/usr/src/fio 00:31:46.872 16:40:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1338 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:46.872 16:40:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1338 -- # local sanitizers 00:31:46.872 
16:40:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:46.872 16:40:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # shift 00:31:46.872 16:40:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1342 -- # local asan_lib= 00:31:46.872 16:40:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:31:46.872 16:40:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:31:46.872 16:40:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:31:46.872 16:40:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:31:46.872 16:40:12 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # config=() 00:31:46.872 16:40:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:31:46.872 16:40:12 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # local subsystem config 00:31:46.872 16:40:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:31:46.872 16:40:12 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:46.872 16:40:12 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:46.872 { 00:31:46.872 "params": { 00:31:46.872 "name": "Nvme$subsystem", 00:31:46.872 "trtype": "$TEST_TRANSPORT", 00:31:46.872 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:46.872 "adrfam": "ipv4", 00:31:46.872 "trsvcid": "$NVMF_PORT", 00:31:46.872 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:46.872 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:46.872 "hdgst": ${hdgst:-false}, 00:31:46.872 "ddgst": ${ddgst:-false} 00:31:46.872 }, 00:31:46.872 "method": "bdev_nvme_attach_controller" 00:31:46.872 } 00:31:46.872 EOF 00:31:46.872 )") 
00:31:46.872 16:40:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:46.872 16:40:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # grep libasan 00:31:46.872 16:40:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:31:46.872 16:40:12 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:31:46.872 16:40:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:31:46.872 16:40:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:31:46.872 16:40:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:31:46.872 16:40:12 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:46.872 16:40:12 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:46.872 { 00:31:46.872 "params": { 00:31:46.872 "name": "Nvme$subsystem", 00:31:46.872 "trtype": "$TEST_TRANSPORT", 00:31:46.872 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:46.872 "adrfam": "ipv4", 00:31:46.872 "trsvcid": "$NVMF_PORT", 00:31:46.872 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:46.872 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:46.872 "hdgst": ${hdgst:-false}, 00:31:46.872 "ddgst": ${ddgst:-false} 00:31:46.872 }, 00:31:46.872 "method": "bdev_nvme_attach_controller" 00:31:46.872 } 00:31:46.872 EOF 00:31:46.872 )") 00:31:46.872 16:40:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:31:46.872 16:40:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:31:46.872 16:40:12 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:31:46.872 16:40:12 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # jq . 
00:31:46.872 16:40:12 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@557 -- # IFS=, 00:31:46.872 16:40:12 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:31:46.872 "params": { 00:31:46.872 "name": "Nvme0", 00:31:46.872 "trtype": "tcp", 00:31:46.872 "traddr": "10.0.0.2", 00:31:46.872 "adrfam": "ipv4", 00:31:46.872 "trsvcid": "4420", 00:31:46.872 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:46.872 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:46.872 "hdgst": false, 00:31:46.872 "ddgst": false 00:31:46.872 }, 00:31:46.872 "method": "bdev_nvme_attach_controller" 00:31:46.872 },{ 00:31:46.872 "params": { 00:31:46.872 "name": "Nvme1", 00:31:46.872 "trtype": "tcp", 00:31:46.872 "traddr": "10.0.0.2", 00:31:46.872 "adrfam": "ipv4", 00:31:46.872 "trsvcid": "4420", 00:31:46.872 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:46.872 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:46.872 "hdgst": false, 00:31:46.872 "ddgst": false 00:31:46.872 }, 00:31:46.872 "method": "bdev_nvme_attach_controller" 00:31:46.872 }' 00:31:46.872 16:40:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # asan_lib= 00:31:46.872 16:40:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:31:46.872 16:40:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:31:46.872 16:40:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:46.872 16:40:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # grep libclang_rt.asan 00:31:46.872 16:40:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:31:46.872 16:40:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # asan_lib= 00:31:46.872 16:40:12 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:31:46.872 16:40:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1351 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:31:46.872 16:40:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1351 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:46.872 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:31:46.872 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:31:46.872 fio-3.35 00:31:46.872 Starting 2 threads 00:31:46.872 EAL: No free 2048 kB hugepages reported on node 1 00:31:56.860 00:31:56.860 filename0: (groupid=0, jobs=1): err= 0: pid=3319095: Fri Jun 7 16:40:23 2024 00:31:56.860 read: IOPS=95, BW=381KiB/s (390kB/s)(3824KiB/10042msec) 00:31:56.860 slat (nsec): min=5612, max=32360, avg=6721.14, stdev=1628.46 00:31:56.860 clat (usec): min=41847, max=44602, avg=41997.13, stdev=182.14 00:31:56.860 lat (usec): min=41854, max=44635, avg=42003.85, stdev=182.73 00:31:56.860 clat percentiles (usec): 00:31:56.860 | 1.00th=[41681], 5.00th=[41681], 10.00th=[42206], 20.00th=[42206], 00:31:56.860 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:31:56.860 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:31:56.860 | 99.00th=[42206], 99.50th=[42730], 99.90th=[44827], 99.95th=[44827], 00:31:56.860 | 99.99th=[44827] 00:31:56.860 bw ( KiB/s): min= 352, max= 384, per=33.93%, avg=380.80, stdev= 9.85, samples=20 00:31:56.860 iops : min= 88, max= 96, avg=95.20, stdev= 2.46, samples=20 00:31:56.860 lat (msec) : 50=100.00% 00:31:56.860 cpu : usr=97.11%, sys=2.67%, ctx=20, majf=0, minf=161 00:31:56.860 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:56.860 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.0%, >=64=0.0% 00:31:56.860 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:56.860 issued rwts: total=956,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:56.860 latency : target=0, window=0, percentile=100.00%, depth=4 00:31:56.860 filename1: (groupid=0, jobs=1): err= 0: pid=3319096: Fri Jun 7 16:40:23 2024 00:31:56.860 read: IOPS=185, BW=741KiB/s (759kB/s)(7424KiB/10019msec) 00:31:56.860 slat (nsec): min=5605, max=34163, avg=6768.12, stdev=1468.03 00:31:56.860 clat (usec): min=948, max=43522, avg=21573.30, stdev=20250.54 00:31:56.860 lat (usec): min=953, max=43556, avg=21580.07, stdev=20250.51 00:31:56.860 clat percentiles (usec): 00:31:56.860 | 1.00th=[ 1156], 5.00th=[ 1188], 10.00th=[ 1205], 20.00th=[ 1221], 00:31:56.860 | 30.00th=[ 1254], 40.00th=[ 1270], 50.00th=[41157], 60.00th=[41681], 00:31:56.860 | 70.00th=[41681], 80.00th=[41681], 90.00th=[41681], 95.00th=[41681], 00:31:56.860 | 99.00th=[41681], 99.50th=[42206], 99.90th=[43254], 99.95th=[43779], 00:31:56.860 | 99.99th=[43779] 00:31:56.860 bw ( KiB/s): min= 672, max= 768, per=66.07%, avg=740.80, stdev=33.28, samples=20 00:31:56.860 iops : min= 168, max= 192, avg=185.20, stdev= 8.32, samples=20 00:31:56.860 lat (usec) : 1000=0.22% 00:31:56.860 lat (msec) : 2=49.57%, 50=50.22% 00:31:56.860 cpu : usr=97.13%, sys=2.66%, ctx=69, majf=0, minf=87 00:31:56.860 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:56.860 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:56.860 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:56.860 issued rwts: total=1856,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:56.860 latency : target=0, window=0, percentile=100.00%, depth=4 00:31:56.860 00:31:56.860 Run status group 0 (all jobs): 00:31:56.861 READ: bw=1120KiB/s (1147kB/s), 381KiB/s-741KiB/s (390kB/s-759kB/s), io=11.0MiB (11.5MB), run=10019-10042msec 00:31:56.861 16:40:23 nvmf_dif.fio_dif_1_multi_subsystems -- 
target/dif.sh@96 -- # destroy_subsystems 0 1 00:31:56.861 16:40:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:31:56.861 16:40:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:31:56.861 16:40:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:31:56.861 16:40:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:31:56.861 16:40:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:56.861 16:40:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:56.861 16:40:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:56.861 16:40:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:56.861 16:40:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:31:56.861 16:40:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:56.861 16:40:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:56.861 16:40:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:56.861 16:40:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:31:56.861 16:40:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:31:56.861 16:40:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:31:56.861 16:40:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:56.861 16:40:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:56.861 16:40:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 
00:31:56.861 16:40:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:56.861 16:40:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:31:56.861 16:40:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:56.861 16:40:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:56.861 16:40:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:56.861 00:31:56.861 real 0m11.538s 00:31:56.861 user 0m34.971s 00:31:56.861 sys 0m0.799s 00:31:56.861 16:40:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1125 -- # xtrace_disable 00:31:56.861 16:40:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:56.861 ************************************ 00:31:56.861 END TEST fio_dif_1_multi_subsystems 00:31:56.861 ************************************ 00:31:56.861 16:40:23 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:31:56.861 16:40:23 nvmf_dif -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:31:56.861 16:40:23 nvmf_dif -- common/autotest_common.sh@1106 -- # xtrace_disable 00:31:56.861 16:40:23 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:31:56.861 ************************************ 00:31:56.861 START TEST fio_dif_rand_params 00:31:56.861 ************************************ 00:31:56.861 16:40:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1124 -- # fio_dif_rand_params 00:31:56.861 16:40:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:31:56.861 16:40:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:31:56.861 16:40:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:31:56.861 16:40:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 
00:31:56.861 16:40:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:31:56.861 16:40:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:31:56.861 16:40:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:31:56.861 16:40:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:31:56.861 16:40:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:31:56.861 16:40:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:31:56.861 16:40:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:31:56.861 16:40:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:31:56.861 16:40:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:31:56.861 16:40:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:56.861 16:40:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:56.861 bdev_null0 00:31:56.861 16:40:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:56.861 16:40:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:31:56.861 16:40:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:56.861 16:40:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:56.861 16:40:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:56.861 16:40:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:31:56.861 16:40:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:56.861 16:40:23 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@10 -- # set +x 00:31:56.861 16:40:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:56.861 16:40:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:56.861 16:40:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:56.861 16:40:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:56.861 [2024-06-07 16:40:23.648788] tcp.c: 982:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:56.861 16:40:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:56.861 16:40:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:31:56.861 16:40:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:31:56.861 16:40:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:31:56.861 16:40:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:31:56.861 16:40:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:56.861 16:40:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:31:56.861 16:40:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:56.861 16:40:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1355 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:56.861 16:40:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:56.861 { 00:31:56.861 "params": { 00:31:56.861 "name": "Nvme$subsystem", 00:31:56.861 "trtype": "$TEST_TRANSPORT", 00:31:56.861 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:56.861 "adrfam": 
"ipv4", 00:31:56.861 "trsvcid": "$NVMF_PORT", 00:31:56.861 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:56.861 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:56.861 "hdgst": ${hdgst:-false}, 00:31:56.861 "ddgst": ${ddgst:-false} 00:31:56.861 }, 00:31:56.861 "method": "bdev_nvme_attach_controller" 00:31:56.861 } 00:31:56.861 EOF 00:31:56.861 )") 00:31:56.861 16:40:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:31:56.861 16:40:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1336 -- # local fio_dir=/usr/src/fio 00:31:56.861 16:40:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1338 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:56.861 16:40:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:31:56.861 16:40:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1338 -- # local sanitizers 00:31:56.861 16:40:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:31:56.861 16:40:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:56.861 16:40:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # shift 00:31:56.861 16:40:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # local asan_lib= 00:31:56.861 16:40:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:31:56.861 16:40:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:31:56.861 16:40:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:56.861 16:40:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:31:56.861 16:40:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # grep libasan 00:31:56.861 16:40:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( 
file <= files )) 00:31:56.861 16:40:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:31:56.861 16:40:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:31:56.861 16:40:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:31:56.861 16:40:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:31:56.861 "params": { 00:31:56.861 "name": "Nvme0", 00:31:56.861 "trtype": "tcp", 00:31:56.861 "traddr": "10.0.0.2", 00:31:56.861 "adrfam": "ipv4", 00:31:56.861 "trsvcid": "4420", 00:31:56.861 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:56.861 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:56.861 "hdgst": false, 00:31:56.861 "ddgst": false 00:31:56.861 }, 00:31:56.861 "method": "bdev_nvme_attach_controller" 00:31:56.861 }' 00:31:56.861 16:40:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # asan_lib= 00:31:56.861 16:40:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:31:56.861 16:40:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:31:56.861 16:40:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:56.861 16:40:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # grep libclang_rt.asan 00:31:56.861 16:40:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:31:57.164 16:40:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # asan_lib= 00:31:57.165 16:40:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:31:57.165 16:40:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1351 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:31:57.165 16:40:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1351 -- # 
/usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:57.432 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:31:57.432 ... 00:31:57.432 fio-3.35 00:31:57.432 Starting 3 threads 00:31:57.432 EAL: No free 2048 kB hugepages reported on node 1 00:32:04.040 00:32:04.040 filename0: (groupid=0, jobs=1): err= 0: pid=3321300: Fri Jun 7 16:40:29 2024 00:32:04.040 read: IOPS=134, BW=16.9MiB/s (17.7MB/s)(84.4MiB/5004msec) 00:32:04.040 slat (nsec): min=5640, max=33401, avg=6334.64, stdev=1392.95 00:32:04.040 clat (usec): min=6895, max=93511, avg=22227.08, stdev=19078.47 00:32:04.040 lat (usec): min=6900, max=93518, avg=22233.42, stdev=19078.57 00:32:04.040 clat percentiles (usec): 00:32:04.040 | 1.00th=[ 7242], 5.00th=[ 7963], 10.00th=[ 8717], 20.00th=[ 9634], 00:32:04.040 | 30.00th=[10290], 40.00th=[11076], 50.00th=[11994], 60.00th=[13304], 00:32:04.040 | 70.00th=[15401], 80.00th=[50070], 90.00th=[52691], 95.00th=[54264], 00:32:04.040 | 99.00th=[57934], 99.50th=[91751], 99.90th=[93848], 99.95th=[93848], 00:32:04.040 | 99.99th=[93848] 00:32:04.040 bw ( KiB/s): min=13312, max=25344, per=24.96%, avg=17228.80, stdev=4222.15, samples=10 00:32:04.040 iops : min= 104, max= 198, avg=134.60, stdev=32.99, samples=10 00:32:04.040 lat (msec) : 10=25.19%, 20=48.59%, 50=4.89%, 100=21.33% 00:32:04.040 cpu : usr=96.26%, sys=3.52%, ctx=6, majf=0, minf=91 00:32:04.040 IO depths : 1=2.8%, 2=97.2%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:04.040 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:04.040 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:04.040 issued rwts: total=675,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:04.040 latency : target=0, window=0, percentile=100.00%, depth=3 00:32:04.040 filename0: (groupid=0, jobs=1): err= 0: pid=3321301: Fri Jun 7 16:40:29 2024 00:32:04.040 read: IOPS=242, BW=30.4MiB/s 
(31.8MB/s)(153MiB/5024msec) 00:32:04.040 slat (nsec): min=5642, max=31414, avg=8350.21, stdev=1835.08 00:32:04.040 clat (usec): min=5101, max=93124, avg=12336.59, stdev=12044.40 00:32:04.040 lat (usec): min=5108, max=93134, avg=12344.94, stdev=12044.59 00:32:04.040 clat percentiles (usec): 00:32:04.040 | 1.00th=[ 5604], 5.00th=[ 6128], 10.00th=[ 6587], 20.00th=[ 7242], 00:32:04.040 | 30.00th=[ 7898], 40.00th=[ 8455], 50.00th=[ 8979], 60.00th=[ 9372], 00:32:04.040 | 70.00th=[10159], 80.00th=[11338], 90.00th=[13042], 95.00th=[49546], 00:32:04.040 | 99.00th=[52691], 99.50th=[53216], 99.90th=[91751], 99.95th=[92799], 00:32:04.040 | 99.99th=[92799] 00:32:04.040 bw ( KiB/s): min=22272, max=38400, per=45.14%, avg=31155.20, stdev=5210.30, samples=10 00:32:04.040 iops : min= 174, max= 300, avg=243.40, stdev=40.71, samples=10 00:32:04.040 lat (msec) : 10=67.95%, 20=24.02%, 50=4.02%, 100=4.02% 00:32:04.040 cpu : usr=95.48%, sys=4.26%, ctx=12, majf=0, minf=104 00:32:04.040 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:04.040 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:04.040 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:04.040 issued rwts: total=1220,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:04.040 latency : target=0, window=0, percentile=100.00%, depth=3 00:32:04.040 filename0: (groupid=0, jobs=1): err= 0: pid=3321302: Fri Jun 7 16:40:29 2024 00:32:04.040 read: IOPS=162, BW=20.3MiB/s (21.3MB/s)(102MiB/5033msec) 00:32:04.040 slat (nsec): min=5659, max=35551, avg=8502.01, stdev=1646.81 00:32:04.040 clat (usec): min=6134, max=92468, avg=18416.00, stdev=17175.38 00:32:04.040 lat (usec): min=6142, max=92477, avg=18424.50, stdev=17175.35 00:32:04.040 clat percentiles (usec): 00:32:04.040 | 1.00th=[ 7373], 5.00th=[ 8225], 10.00th=[ 8717], 20.00th=[ 9634], 00:32:04.040 | 30.00th=[10028], 40.00th=[10683], 50.00th=[11207], 60.00th=[11863], 00:32:04.040 | 70.00th=[12911], 
80.00th=[14353], 90.00th=[51119], 95.00th=[52167], 00:32:04.040 | 99.00th=[91751], 99.50th=[91751], 99.90th=[92799], 99.95th=[92799], 00:32:04.040 | 99.99th=[92799] 00:32:04.040 bw ( KiB/s): min=14336, max=27392, per=30.26%, avg=20889.60, stdev=4869.54, samples=10 00:32:04.040 iops : min= 112, max= 214, avg=163.20, stdev=38.04, samples=10 00:32:04.040 lat (msec) : 10=28.45%, 20=54.21%, 50=4.03%, 100=13.31% 00:32:04.040 cpu : usr=96.09%, sys=3.70%, ctx=11, majf=0, minf=85 00:32:04.040 IO depths : 1=3.3%, 2=96.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:04.040 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:04.040 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:04.040 issued rwts: total=819,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:04.040 latency : target=0, window=0, percentile=100.00%, depth=3 00:32:04.040 00:32:04.040 Run status group 0 (all jobs): 00:32:04.040 READ: bw=67.4MiB/s (70.7MB/s), 16.9MiB/s-30.4MiB/s (17.7MB/s-31.8MB/s), io=339MiB (356MB), run=5004-5033msec 00:32:04.040 16:40:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:32:04.040 16:40:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:32:04.040 16:40:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:32:04.040 16:40:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:32:04.040 16:40:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:32:04.040 16:40:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:32:04.040 16:40:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:04.040 16:40:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:04.040 16:40:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:04.040 16:40:29 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:32:04.040 16:40:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:04.040 16:40:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:04.040 16:40:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:04.040 16:40:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:32:04.040 16:40:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:32:04.040 16:40:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:32:04.040 16:40:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:32:04.040 16:40:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:32:04.040 16:40:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:32:04.040 16:40:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:32:04.040 16:40:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:32:04.040 16:40:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:32:04.040 16:40:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:32:04.040 16:40:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:32:04.040 16:40:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:32:04.040 16:40:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:04.040 16:40:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:04.040 bdev_null0 00:32:04.041 16:40:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:04.041 16:40:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 
53313233-0 --allow-any-host 00:32:04.041 16:40:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:04.041 16:40:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:04.041 16:40:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:04.041 16:40:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:32:04.041 16:40:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:04.041 16:40:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:04.041 16:40:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:04.041 16:40:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:04.041 16:40:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:04.041 16:40:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:04.041 [2024-06-07 16:40:29.854488] tcp.c: 982:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:04.041 16:40:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:04.041 16:40:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:32:04.041 16:40:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:32:04.041 16:40:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:32:04.041 16:40:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:32:04.041 16:40:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:04.041 16:40:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 
00:32:04.041 bdev_null1 00:32:04.041 16:40:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:04.041 16:40:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:32:04.041 16:40:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:04.041 16:40:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:04.041 16:40:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:04.041 16:40:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:32:04.041 16:40:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:04.041 16:40:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:04.041 16:40:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:04.041 16:40:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:04.041 16:40:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:04.041 16:40:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:04.041 16:40:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:04.041 16:40:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:32:04.041 16:40:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:32:04.041 16:40:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:32:04.041 16:40:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:32:04.041 16:40:29 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@560 -- # xtrace_disable 00:32:04.041 16:40:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:04.041 bdev_null2 00:32:04.041 16:40:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:04.041 16:40:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:32:04.041 16:40:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:04.041 16:40:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:04.041 16:40:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:04.041 16:40:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:32:04.041 16:40:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:04.041 16:40:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:04.041 16:40:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:04.041 16:40:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:32:04.041 16:40:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:04.041 16:40:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:04.041 16:40:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:04.041 16:40:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:32:04.041 16:40:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:32:04.041 16:40:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:32:04.041 16:40:29 
nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:32:04.041 16:40:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:04.041 16:40:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:32:04.041 16:40:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:32:04.041 16:40:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1355 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:04.041 16:40:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:32:04.041 { 00:32:04.041 "params": { 00:32:04.041 "name": "Nvme$subsystem", 00:32:04.041 "trtype": "$TEST_TRANSPORT", 00:32:04.041 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:04.041 "adrfam": "ipv4", 00:32:04.041 "trsvcid": "$NVMF_PORT", 00:32:04.041 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:04.041 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:04.041 "hdgst": ${hdgst:-false}, 00:32:04.041 "ddgst": ${ddgst:-false} 00:32:04.041 }, 00:32:04.041 "method": "bdev_nvme_attach_controller" 00:32:04.041 } 00:32:04.041 EOF 00:32:04.041 )") 00:32:04.041 16:40:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:32:04.041 16:40:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1336 -- # local fio_dir=/usr/src/fio 00:32:04.041 16:40:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1338 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:04.041 16:40:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:32:04.041 16:40:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1338 -- # local sanitizers 00:32:04.041 16:40:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:32:04.041 16:40:29 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1339 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:04.041 16:40:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # shift 00:32:04.041 16:40:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # local asan_lib= 00:32:04.041 16:40:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:32:04.041 16:40:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:32:04.041 16:40:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:04.041 16:40:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:32:04.041 16:40:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # grep libasan 00:32:04.041 16:40:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:32:04.041 16:40:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:32:04.041 16:40:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:32:04.041 16:40:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:32:04.041 16:40:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:32:04.041 { 00:32:04.041 "params": { 00:32:04.041 "name": "Nvme$subsystem", 00:32:04.041 "trtype": "$TEST_TRANSPORT", 00:32:04.041 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:04.041 "adrfam": "ipv4", 00:32:04.041 "trsvcid": "$NVMF_PORT", 00:32:04.041 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:04.041 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:04.041 "hdgst": ${hdgst:-false}, 00:32:04.041 "ddgst": ${ddgst:-false} 00:32:04.041 }, 00:32:04.041 "method": "bdev_nvme_attach_controller" 00:32:04.041 } 00:32:04.041 EOF 00:32:04.041 )") 00:32:04.041 16:40:29 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@72 -- # (( file++ )) 00:32:04.041 16:40:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:32:04.041 16:40:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:32:04.041 16:40:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:32:04.041 16:40:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:32:04.041 16:40:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:32:04.041 16:40:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:32:04.041 16:40:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:32:04.041 { 00:32:04.041 "params": { 00:32:04.041 "name": "Nvme$subsystem", 00:32:04.041 "trtype": "$TEST_TRANSPORT", 00:32:04.041 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:04.041 "adrfam": "ipv4", 00:32:04.041 "trsvcid": "$NVMF_PORT", 00:32:04.041 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:04.041 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:04.041 "hdgst": ${hdgst:-false}, 00:32:04.041 "ddgst": ${ddgst:-false} 00:32:04.041 }, 00:32:04.041 "method": "bdev_nvme_attach_controller" 00:32:04.041 } 00:32:04.041 EOF 00:32:04.041 )") 00:32:04.042 16:40:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:32:04.042 16:40:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
00:32:04.042 16:40:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:32:04.042 16:40:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:32:04.042 "params": { 00:32:04.042 "name": "Nvme0", 00:32:04.042 "trtype": "tcp", 00:32:04.042 "traddr": "10.0.0.2", 00:32:04.042 "adrfam": "ipv4", 00:32:04.042 "trsvcid": "4420", 00:32:04.042 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:04.042 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:04.042 "hdgst": false, 00:32:04.042 "ddgst": false 00:32:04.042 }, 00:32:04.042 "method": "bdev_nvme_attach_controller" 00:32:04.042 },{ 00:32:04.042 "params": { 00:32:04.042 "name": "Nvme1", 00:32:04.042 "trtype": "tcp", 00:32:04.042 "traddr": "10.0.0.2", 00:32:04.042 "adrfam": "ipv4", 00:32:04.042 "trsvcid": "4420", 00:32:04.042 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:04.042 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:04.042 "hdgst": false, 00:32:04.042 "ddgst": false 00:32:04.042 }, 00:32:04.042 "method": "bdev_nvme_attach_controller" 00:32:04.042 },{ 00:32:04.042 "params": { 00:32:04.042 "name": "Nvme2", 00:32:04.042 "trtype": "tcp", 00:32:04.042 "traddr": "10.0.0.2", 00:32:04.042 "adrfam": "ipv4", 00:32:04.042 "trsvcid": "4420", 00:32:04.042 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:32:04.042 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:32:04.042 "hdgst": false, 00:32:04.042 "ddgst": false 00:32:04.042 }, 00:32:04.042 "method": "bdev_nvme_attach_controller" 00:32:04.042 }' 00:32:04.042 16:40:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # asan_lib= 00:32:04.042 16:40:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:32:04.042 16:40:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:32:04.042 16:40:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:04.042 16:40:30 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # grep libclang_rt.asan 00:32:04.042 16:40:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:32:04.042 16:40:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # asan_lib= 00:32:04.042 16:40:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:32:04.042 16:40:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1351 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:32:04.042 16:40:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1351 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:04.042 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:32:04.042 ... 00:32:04.042 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:32:04.042 ... 00:32:04.042 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:32:04.042 ... 
00:32:04.042 fio-3.35 00:32:04.042 Starting 24 threads 00:32:04.042 EAL: No free 2048 kB hugepages reported on node 1 00:32:16.272 00:32:16.273 filename0: (groupid=0, jobs=1): err= 0: pid=3322793: Fri Jun 7 16:40:41 2024 00:32:16.273 read: IOPS=496, BW=1986KiB/s (2034kB/s)(19.5MiB/10059msec) 00:32:16.273 slat (nsec): min=5805, max=84826, avg=20239.34, stdev=14258.07 00:32:16.273 clat (usec): min=16251, max=80802, avg=31974.40, stdev=4070.14 00:32:16.273 lat (usec): min=16257, max=80808, avg=31994.64, stdev=4070.60 00:32:16.273 clat percentiles (usec): 00:32:16.273 | 1.00th=[20317], 5.00th=[25822], 10.00th=[31065], 20.00th=[31589], 00:32:16.273 | 30.00th=[31589], 40.00th=[31851], 50.00th=[31851], 60.00th=[32113], 00:32:16.273 | 70.00th=[32375], 80.00th=[32900], 90.00th=[33424], 95.00th=[33817], 00:32:16.273 | 99.00th=[49021], 99.50th=[55313], 99.90th=[81265], 99.95th=[81265], 00:32:16.273 | 99.99th=[81265] 00:32:16.273 bw ( KiB/s): min= 1792, max= 2416, per=4.21%, avg=1991.00, stdev=130.98, samples=20 00:32:16.273 iops : min= 448, max= 604, avg=497.75, stdev=32.74, samples=20 00:32:16.273 lat (msec) : 20=0.60%, 50=98.44%, 100=0.96% 00:32:16.273 cpu : usr=99.07%, sys=0.60%, ctx=62, majf=0, minf=48 00:32:16.273 IO depths : 1=5.3%, 2=10.8%, 4=22.8%, 8=53.7%, 16=7.4%, 32=0.0%, >=64=0.0% 00:32:16.273 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:16.273 complete : 0=0.0%, 4=93.5%, 8=0.9%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:16.273 issued rwts: total=4994,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:16.273 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:16.273 filename0: (groupid=0, jobs=1): err= 0: pid=3322794: Fri Jun 7 16:40:41 2024 00:32:16.273 read: IOPS=493, BW=1976KiB/s (2023kB/s)(19.5MiB/10081msec) 00:32:16.273 slat (nsec): min=5820, max=64000, avg=13731.20, stdev=8272.38 00:32:16.273 clat (usec): min=16329, max=99009, avg=32275.18, stdev=5067.86 00:32:16.273 lat (usec): min=16336, max=99036, avg=32288.91, 
stdev=5068.29 00:32:16.273 clat percentiles (usec): 00:32:16.273 | 1.00th=[19792], 5.00th=[25822], 10.00th=[31065], 20.00th=[31589], 00:32:16.273 | 30.00th=[31851], 40.00th=[32113], 50.00th=[32113], 60.00th=[32375], 00:32:16.273 | 70.00th=[32637], 80.00th=[32900], 90.00th=[33424], 95.00th=[34341], 00:32:16.273 | 99.00th=[47973], 99.50th=[52167], 99.90th=[99091], 99.95th=[99091], 00:32:16.273 | 99.99th=[99091] 00:32:16.273 bw ( KiB/s): min= 1920, max= 2128, per=4.20%, avg=1985.60, stdev=66.05, samples=20 00:32:16.273 iops : min= 480, max= 532, avg=496.40, stdev=16.51, samples=20 00:32:16.273 lat (msec) : 20=1.10%, 50=98.25%, 100=0.64% 00:32:16.273 cpu : usr=97.61%, sys=1.39%, ctx=68, majf=0, minf=55 00:32:16.273 IO depths : 1=3.5%, 2=7.7%, 4=21.9%, 8=57.7%, 16=9.1%, 32=0.0%, >=64=0.0% 00:32:16.273 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:16.273 complete : 0=0.0%, 4=93.8%, 8=0.5%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:16.273 issued rwts: total=4980,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:16.273 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:16.273 filename0: (groupid=0, jobs=1): err= 0: pid=3322795: Fri Jun 7 16:40:41 2024 00:32:16.273 read: IOPS=482, BW=1928KiB/s (1974kB/s)(19.0MiB/10068msec) 00:32:16.273 slat (nsec): min=5805, max=63131, avg=13435.58, stdev=9044.73 00:32:16.273 clat (usec): min=14485, max=99433, avg=33103.61, stdev=6680.51 00:32:16.273 lat (usec): min=14511, max=99440, avg=33117.05, stdev=6680.23 00:32:16.273 clat percentiles (usec): 00:32:16.273 | 1.00th=[18744], 5.00th=[22676], 10.00th=[28443], 20.00th=[31589], 00:32:16.273 | 30.00th=[31851], 40.00th=[32113], 50.00th=[32113], 60.00th=[32637], 00:32:16.273 | 70.00th=[32900], 80.00th=[33817], 90.00th=[39584], 95.00th=[45351], 00:32:16.273 | 99.00th=[54264], 99.50th=[58983], 99.90th=[99091], 99.95th=[99091], 00:32:16.273 | 99.99th=[99091] 00:32:16.273 bw ( KiB/s): min= 1718, max= 2032, per=4.09%, avg=1932.50, stdev=77.19, samples=20 
00:32:16.273 iops : min= 429, max= 508, avg=483.10, stdev=19.37, samples=20 00:32:16.273 lat (msec) : 20=2.78%, 50=95.28%, 100=1.94% 00:32:16.273 cpu : usr=97.70%, sys=1.34%, ctx=42, majf=0, minf=129 00:32:16.273 IO depths : 1=0.6%, 2=1.5%, 4=9.3%, 8=73.5%, 16=15.1%, 32=0.0%, >=64=0.0% 00:32:16.273 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:16.273 complete : 0=0.0%, 4=91.0%, 8=6.4%, 16=2.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:16.273 issued rwts: total=4853,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:16.273 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:16.273 filename0: (groupid=0, jobs=1): err= 0: pid=3322796: Fri Jun 7 16:40:41 2024 00:32:16.273 read: IOPS=489, BW=1959KiB/s (2006kB/s)(19.2MiB/10056msec) 00:32:16.273 slat (nsec): min=5796, max=84475, avg=20886.11, stdev=14940.07 00:32:16.273 clat (usec): min=24229, max=98443, avg=32496.69, stdev=3279.26 00:32:16.273 lat (usec): min=24265, max=98450, avg=32517.58, stdev=3277.85 00:32:16.273 clat percentiles (usec): 00:32:16.273 | 1.00th=[30540], 5.00th=[31065], 10.00th=[31327], 20.00th=[31589], 00:32:16.273 | 30.00th=[31851], 40.00th=[31851], 50.00th=[32113], 60.00th=[32113], 00:32:16.273 | 70.00th=[32637], 80.00th=[32900], 90.00th=[33424], 95.00th=[33817], 00:32:16.273 | 99.00th=[40109], 99.50th=[56361], 99.90th=[76022], 99.95th=[76022], 00:32:16.273 | 99.99th=[98042] 00:32:16.273 bw ( KiB/s): min= 1779, max= 2048, per=4.15%, avg=1963.60, stdev=80.14, samples=20 00:32:16.273 iops : min= 444, max= 512, avg=490.85, stdev=20.16, samples=20 00:32:16.273 lat (msec) : 50=99.35%, 100=0.65% 00:32:16.273 cpu : usr=99.12%, sys=0.60%, ctx=13, majf=0, minf=49 00:32:16.273 IO depths : 1=0.1%, 2=6.2%, 4=24.9%, 8=56.3%, 16=12.5%, 32=0.0%, >=64=0.0% 00:32:16.273 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:16.273 complete : 0=0.0%, 4=94.4%, 8=0.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:16.273 issued rwts: total=4926,0,0,0 short=0,0,0,0 
dropped=0,0,0,0 00:32:16.273 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:16.273 filename0: (groupid=0, jobs=1): err= 0: pid=3322797: Fri Jun 7 16:40:41 2024 00:32:16.273 read: IOPS=527, BW=2109KiB/s (2159kB/s)(20.6MiB/10016msec) 00:32:16.273 slat (nsec): min=5841, max=65305, avg=9677.38, stdev=5418.97 00:32:16.273 clat (usec): min=3759, max=35942, avg=30268.54, stdev=5157.25 00:32:16.273 lat (usec): min=3771, max=35949, avg=30278.21, stdev=5156.88 00:32:16.273 clat percentiles (usec): 00:32:16.273 | 1.00th=[ 5342], 5.00th=[20317], 10.00th=[22676], 20.00th=[31065], 00:32:16.273 | 30.00th=[31589], 40.00th=[31851], 50.00th=[32113], 60.00th=[32113], 00:32:16.273 | 70.00th=[32375], 80.00th=[32637], 90.00th=[33162], 95.00th=[33424], 00:32:16.273 | 99.00th=[34341], 99.50th=[35390], 99.90th=[35914], 99.95th=[35914], 00:32:16.273 | 99.99th=[35914] 00:32:16.273 bw ( KiB/s): min= 1920, max= 2688, per=4.45%, avg=2105.60, stdev=196.88, samples=20 00:32:16.273 iops : min= 480, max= 672, avg=526.40, stdev=49.22, samples=20 00:32:16.273 lat (msec) : 4=0.13%, 10=1.69%, 20=2.63%, 50=95.55% 00:32:16.273 cpu : usr=99.14%, sys=0.55%, ctx=50, majf=0, minf=60 00:32:16.273 IO depths : 1=6.1%, 2=12.3%, 4=24.8%, 8=50.3%, 16=6.4%, 32=0.0%, >=64=0.0% 00:32:16.273 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:16.273 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:16.273 issued rwts: total=5280,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:16.273 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:16.273 filename0: (groupid=0, jobs=1): err= 0: pid=3322798: Fri Jun 7 16:40:41 2024 00:32:16.273 read: IOPS=502, BW=2009KiB/s (2057kB/s)(19.6MiB/10014msec) 00:32:16.273 slat (nsec): min=5855, max=72589, avg=13704.07, stdev=10079.52 00:32:16.273 clat (usec): min=5132, max=35955, avg=31746.02, stdev=3271.67 00:32:16.273 lat (usec): min=5146, max=35968, avg=31759.73, stdev=3271.26 00:32:16.273 clat percentiles 
(usec): 00:32:16.273 | 1.00th=[11600], 5.00th=[30540], 10.00th=[31327], 20.00th=[31589], 00:32:16.273 | 30.00th=[31851], 40.00th=[32113], 50.00th=[32113], 60.00th=[32375], 00:32:16.273 | 70.00th=[32637], 80.00th=[32637], 90.00th=[33162], 95.00th=[33817], 00:32:16.273 | 99.00th=[34341], 99.50th=[35914], 99.90th=[35914], 99.95th=[35914], 00:32:16.273 | 99.99th=[35914] 00:32:16.273 bw ( KiB/s): min= 1920, max= 2480, per=4.24%, avg=2005.60, stdev=128.66, samples=20 00:32:16.273 iops : min= 480, max= 620, avg=501.40, stdev=32.16, samples=20 00:32:16.273 lat (msec) : 10=0.95%, 20=0.99%, 50=98.05% 00:32:16.273 cpu : usr=99.26%, sys=0.45%, ctx=13, majf=0, minf=61 00:32:16.273 IO depths : 1=6.1%, 2=12.3%, 4=24.7%, 8=50.5%, 16=6.4%, 32=0.0%, >=64=0.0% 00:32:16.273 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:16.273 complete : 0=0.0%, 4=94.0%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:16.273 issued rwts: total=5030,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:16.273 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:16.273 filename0: (groupid=0, jobs=1): err= 0: pid=3322799: Fri Jun 7 16:40:41 2024 00:32:16.273 read: IOPS=493, BW=1974KiB/s (2021kB/s)(19.4MiB/10085msec) 00:32:16.273 slat (nsec): min=5878, max=72373, avg=18662.33, stdev=12704.17 00:32:16.273 clat (usec): min=18204, max=96482, avg=32269.29, stdev=3981.35 00:32:16.273 lat (usec): min=18212, max=96490, avg=32287.95, stdev=3981.82 00:32:16.273 clat percentiles (usec): 00:32:16.273 | 1.00th=[22152], 5.00th=[31065], 10.00th=[31327], 20.00th=[31589], 00:32:16.273 | 30.00th=[31851], 40.00th=[31851], 50.00th=[32113], 60.00th=[32375], 00:32:16.273 | 70.00th=[32375], 80.00th=[32637], 90.00th=[33162], 95.00th=[33817], 00:32:16.273 | 99.00th=[34341], 99.50th=[35914], 99.90th=[95945], 99.95th=[95945], 00:32:16.273 | 99.99th=[96994] 00:32:16.273 bw ( KiB/s): min= 1920, max= 2048, per=4.20%, avg=1984.00, stdev=65.66, samples=20 00:32:16.273 iops : min= 480, max= 512, 
avg=496.00, stdev=16.42, samples=20 00:32:16.273 lat (msec) : 20=0.64%, 50=99.04%, 100=0.32% 00:32:16.273 cpu : usr=99.33%, sys=0.40%, ctx=9, majf=0, minf=66 00:32:16.273 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:32:16.273 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:16.273 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:16.273 issued rwts: total=4976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:16.273 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:16.273 filename0: (groupid=0, jobs=1): err= 0: pid=3322800: Fri Jun 7 16:40:41 2024 00:32:16.273 read: IOPS=491, BW=1967KiB/s (2014kB/s)(19.4MiB/10086msec) 00:32:16.273 slat (nsec): min=5898, max=87653, avg=16595.66, stdev=14091.81 00:32:16.273 clat (usec): min=19570, max=96626, avg=32398.55, stdev=3854.03 00:32:16.273 lat (usec): min=19585, max=96660, avg=32415.15, stdev=3854.10 00:32:16.273 clat percentiles (usec): 00:32:16.273 | 1.00th=[29754], 5.00th=[31065], 10.00th=[31327], 20.00th=[31589], 00:32:16.273 | 30.00th=[31851], 40.00th=[32113], 50.00th=[32113], 60.00th=[32375], 00:32:16.273 | 70.00th=[32637], 80.00th=[32900], 90.00th=[33424], 95.00th=[33817], 00:32:16.273 | 99.00th=[35390], 99.50th=[36963], 99.90th=[95945], 99.95th=[96994], 00:32:16.273 | 99.99th=[96994] 00:32:16.274 bw ( KiB/s): min= 1920, max= 2048, per=4.18%, avg=1977.60, stdev=65.33, samples=20 00:32:16.274 iops : min= 480, max= 512, avg=494.40, stdev=16.33, samples=20 00:32:16.274 lat (msec) : 20=0.04%, 50=99.64%, 100=0.32% 00:32:16.274 cpu : usr=99.05%, sys=0.58%, ctx=100, majf=0, minf=44 00:32:16.274 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:32:16.274 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:16.274 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:16.274 issued rwts: total=4960,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:16.274 latency 
: target=0, window=0, percentile=100.00%, depth=16 00:32:16.274 filename1: (groupid=0, jobs=1): err= 0: pid=3322801: Fri Jun 7 16:40:41 2024 00:32:16.274 read: IOPS=493, BW=1973KiB/s (2020kB/s)(19.4MiB/10088msec) 00:32:16.274 slat (nsec): min=5877, max=74204, avg=17959.03, stdev=12394.85 00:32:16.274 clat (usec): min=18073, max=98163, avg=32271.39, stdev=4005.59 00:32:16.274 lat (usec): min=18083, max=98185, avg=32289.35, stdev=4006.22 00:32:16.274 clat percentiles (usec): 00:32:16.274 | 1.00th=[23200], 5.00th=[31065], 10.00th=[31327], 20.00th=[31589], 00:32:16.274 | 30.00th=[31851], 40.00th=[32113], 50.00th=[32113], 60.00th=[32375], 00:32:16.274 | 70.00th=[32375], 80.00th=[32637], 90.00th=[33162], 95.00th=[33817], 00:32:16.274 | 99.00th=[34341], 99.50th=[35914], 99.90th=[96994], 99.95th=[96994], 00:32:16.274 | 99.99th=[98042] 00:32:16.274 bw ( KiB/s): min= 1920, max= 2048, per=4.20%, avg=1984.00, stdev=65.66, samples=20 00:32:16.274 iops : min= 480, max= 512, avg=496.00, stdev=16.42, samples=20 00:32:16.274 lat (msec) : 20=0.96%, 50=98.71%, 100=0.32% 00:32:16.274 cpu : usr=97.76%, sys=1.16%, ctx=50, majf=0, minf=56 00:32:16.274 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:32:16.274 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:16.274 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:16.274 issued rwts: total=4976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:16.274 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:16.274 filename1: (groupid=0, jobs=1): err= 0: pid=3322802: Fri Jun 7 16:40:41 2024 00:32:16.274 read: IOPS=477, BW=1909KiB/s (1954kB/s)(18.8MiB/10060msec) 00:32:16.274 slat (nsec): min=5812, max=71178, avg=14822.86, stdev=10170.13 00:32:16.274 clat (usec): min=16919, max=76624, avg=33429.09, stdev=5302.24 00:32:16.274 lat (usec): min=16926, max=76631, avg=33443.91, stdev=5301.74 00:32:16.274 clat percentiles (usec): 00:32:16.274 | 1.00th=[20579], 
5.00th=[26346], 10.00th=[31065], 20.00th=[31589], 00:32:16.274 | 30.00th=[31851], 40.00th=[32113], 50.00th=[32375], 60.00th=[32637], 00:32:16.274 | 70.00th=[33162], 80.00th=[33817], 90.00th=[40109], 95.00th=[43779], 00:32:16.274 | 99.00th=[50594], 99.50th=[53740], 99.90th=[74974], 99.95th=[74974], 00:32:16.274 | 99.99th=[77071] 00:32:16.274 bw ( KiB/s): min= 1774, max= 2064, per=4.05%, avg=1912.50, stdev=104.11, samples=20 00:32:16.274 iops : min= 443, max= 516, avg=478.10, stdev=26.06, samples=20 00:32:16.274 lat (msec) : 20=0.33%, 50=98.44%, 100=1.23% 00:32:16.274 cpu : usr=98.97%, sys=0.68%, ctx=66, majf=0, minf=55 00:32:16.274 IO depths : 1=2.5%, 2=5.0%, 4=13.1%, 8=67.2%, 16=12.2%, 32=0.0%, >=64=0.0% 00:32:16.274 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:16.274 complete : 0=0.0%, 4=91.6%, 8=4.8%, 16=3.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:16.274 issued rwts: total=4800,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:16.274 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:16.274 filename1: (groupid=0, jobs=1): err= 0: pid=3322803: Fri Jun 7 16:40:41 2024 00:32:16.274 read: IOPS=547, BW=2191KiB/s (2244kB/s)(21.4MiB/10018msec) 00:32:16.274 slat (nsec): min=5824, max=64019, avg=12613.71, stdev=8692.51 00:32:16.274 clat (usec): min=4683, max=39335, avg=29109.10, stdev=5677.73 00:32:16.274 lat (usec): min=4708, max=39344, avg=29121.71, stdev=5679.15 00:32:16.274 clat percentiles (usec): 00:32:16.274 | 1.00th=[ 5800], 5.00th=[19006], 10.00th=[20841], 20.00th=[23200], 00:32:16.274 | 30.00th=[31065], 40.00th=[31589], 50.00th=[31851], 60.00th=[32113], 00:32:16.274 | 70.00th=[32113], 80.00th=[32375], 90.00th=[32900], 95.00th=[33424], 00:32:16.274 | 99.00th=[34341], 99.50th=[35390], 99.90th=[35914], 99.95th=[39060], 00:32:16.274 | 99.99th=[39584] 00:32:16.274 bw ( KiB/s): min= 1920, max= 3408, per=4.63%, avg=2188.55, stdev=373.30, samples=20 00:32:16.274 iops : min= 480, max= 852, avg=547.10, stdev=93.34, samples=20 
00:32:16.274 lat (msec) : 10=1.68%, 20=4.66%, 50=93.66% 00:32:16.274 cpu : usr=99.03%, sys=0.67%, ctx=65, majf=0, minf=73 00:32:16.274 IO depths : 1=5.2%, 2=10.4%, 4=21.7%, 8=55.4%, 16=7.4%, 32=0.0%, >=64=0.0% 00:32:16.274 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:16.274 complete : 0=0.0%, 4=93.1%, 8=1.2%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:16.274 issued rwts: total=5488,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:16.274 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:16.274 filename1: (groupid=0, jobs=1): err= 0: pid=3322804: Fri Jun 7 16:40:41 2024 00:32:16.274 read: IOPS=494, BW=1980KiB/s (2027kB/s)(19.5MiB/10085msec) 00:32:16.274 slat (nsec): min=5908, max=66941, avg=16102.90, stdev=10124.23 00:32:16.274 clat (usec): min=15229, max=96395, avg=32194.09, stdev=4045.85 00:32:16.274 lat (usec): min=15241, max=96404, avg=32210.19, stdev=4046.60 00:32:16.274 clat percentiles (usec): 00:32:16.274 | 1.00th=[21890], 5.00th=[30802], 10.00th=[31327], 20.00th=[31589], 00:32:16.274 | 30.00th=[31851], 40.00th=[32113], 50.00th=[32113], 60.00th=[32113], 00:32:16.274 | 70.00th=[32375], 80.00th=[32637], 90.00th=[33162], 95.00th=[33817], 00:32:16.274 | 99.00th=[34866], 99.50th=[35914], 99.90th=[95945], 99.95th=[95945], 00:32:16.274 | 99.99th=[95945] 00:32:16.274 bw ( KiB/s): min= 1920, max= 2048, per=4.21%, avg=1990.40, stdev=65.33, samples=20 00:32:16.274 iops : min= 480, max= 512, avg=497.60, stdev=16.33, samples=20 00:32:16.274 lat (msec) : 20=0.64%, 50=99.04%, 100=0.32% 00:32:16.274 cpu : usr=99.12%, sys=0.60%, ctx=9, majf=0, minf=64 00:32:16.274 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:32:16.274 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:16.274 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:16.274 issued rwts: total=4992,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:16.274 latency : target=0, window=0, 
percentile=100.00%, depth=16 00:32:16.274 filename1: (groupid=0, jobs=1): err= 0: pid=3322805: Fri Jun 7 16:40:41 2024 00:32:16.274 read: IOPS=491, BW=1967KiB/s (2014kB/s)(19.4MiB/10086msec) 00:32:16.274 slat (nsec): min=5860, max=54654, avg=10665.81, stdev=6505.48 00:32:16.274 clat (usec): min=14152, max=96576, avg=32439.27, stdev=3912.07 00:32:16.274 lat (usec): min=14160, max=96585, avg=32449.93, stdev=3912.88 00:32:16.274 clat percentiles (usec): 00:32:16.274 | 1.00th=[30278], 5.00th=[31065], 10.00th=[31327], 20.00th=[31851], 00:32:16.274 | 30.00th=[31851], 40.00th=[32113], 50.00th=[32113], 60.00th=[32375], 00:32:16.274 | 70.00th=[32637], 80.00th=[32900], 90.00th=[33424], 95.00th=[33817], 00:32:16.274 | 99.00th=[35390], 99.50th=[35914], 99.90th=[95945], 99.95th=[96994], 00:32:16.274 | 99.99th=[96994] 00:32:16.274 bw ( KiB/s): min= 1920, max= 2048, per=4.18%, avg=1977.60, stdev=65.33, samples=20 00:32:16.274 iops : min= 480, max= 512, avg=494.40, stdev=16.33, samples=20 00:32:16.274 lat (msec) : 20=0.12%, 50=99.52%, 100=0.36% 00:32:16.274 cpu : usr=99.08%, sys=0.56%, ctx=73, majf=0, minf=49 00:32:16.274 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:32:16.274 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:16.274 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:16.274 issued rwts: total=4960,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:16.274 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:16.274 filename1: (groupid=0, jobs=1): err= 0: pid=3322806: Fri Jun 7 16:40:41 2024 00:32:16.274 read: IOPS=474, BW=1898KiB/s (1944kB/s)(18.6MiB/10051msec) 00:32:16.274 slat (nsec): min=5795, max=84207, avg=16502.14, stdev=12504.05 00:32:16.274 clat (usec): min=16319, max=99577, avg=33620.73, stdev=6149.35 00:32:16.274 lat (usec): min=16325, max=99591, avg=33637.24, stdev=6148.95 00:32:16.274 clat percentiles (usec): 00:32:16.274 | 1.00th=[21627], 5.00th=[28181], 
10.00th=[31065], 20.00th=[31589], 00:32:16.274 | 30.00th=[32113], 40.00th=[32113], 50.00th=[32113], 60.00th=[32637], 00:32:16.274 | 70.00th=[32900], 80.00th=[33817], 90.00th=[40633], 95.00th=[43254], 00:32:16.274 | 99.00th=[54264], 99.50th=[58983], 99.90th=[99091], 99.95th=[99091], 00:32:16.274 | 99.99th=[99091] 00:32:16.274 bw ( KiB/s): min= 1664, max= 2016, per=4.02%, avg=1901.05, stdev=87.09, samples=20 00:32:16.274 iops : min= 416, max= 504, avg=475.25, stdev=21.79, samples=20 00:32:16.274 lat (msec) : 20=0.55%, 50=97.76%, 100=1.70% 00:32:16.274 cpu : usr=98.93%, sys=0.77%, ctx=17, majf=0, minf=76 00:32:16.274 IO depths : 1=1.3%, 2=2.6%, 4=8.4%, 8=74.0%, 16=13.7%, 32=0.0%, >=64=0.0% 00:32:16.274 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:16.274 complete : 0=0.0%, 4=90.5%, 8=6.1%, 16=3.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:16.274 issued rwts: total=4770,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:16.274 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:16.274 filename1: (groupid=0, jobs=1): err= 0: pid=3322807: Fri Jun 7 16:40:41 2024 00:32:16.274 read: IOPS=489, BW=1957KiB/s (2004kB/s)(19.2MiB/10074msec) 00:32:16.274 slat (nsec): min=5856, max=65464, avg=16284.30, stdev=11050.66 00:32:16.274 clat (usec): min=17618, max=99132, avg=32562.89, stdev=4191.50 00:32:16.274 lat (usec): min=17625, max=99138, avg=32579.17, stdev=4190.97 00:32:16.274 clat percentiles (usec): 00:32:16.274 | 1.00th=[30540], 5.00th=[31065], 10.00th=[31327], 20.00th=[31589], 00:32:16.274 | 30.00th=[31851], 40.00th=[32113], 50.00th=[32113], 60.00th=[32375], 00:32:16.274 | 70.00th=[32637], 80.00th=[32900], 90.00th=[33424], 95.00th=[33817], 00:32:16.274 | 99.00th=[35914], 99.50th=[59507], 99.90th=[99091], 99.95th=[99091], 00:32:16.274 | 99.99th=[99091] 00:32:16.274 bw ( KiB/s): min= 1706, max= 2048, per=4.15%, avg=1960.30, stdev=96.56, samples=20 00:32:16.274 iops : min= 426, max= 512, avg=490.05, stdev=24.21, samples=20 00:32:16.274 lat (msec) : 
20=0.04%, 50=99.31%, 100=0.65% 00:32:16.274 cpu : usr=99.03%, sys=0.67%, ctx=53, majf=0, minf=52 00:32:16.274 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:32:16.274 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:16.274 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:16.274 issued rwts: total=4928,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:16.274 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:16.274 filename1: (groupid=0, jobs=1): err= 0: pid=3322808: Fri Jun 7 16:40:41 2024 00:32:16.274 read: IOPS=519, BW=2076KiB/s (2126kB/s)(20.5MiB/10111msec) 00:32:16.274 slat (usec): min=5, max=134, avg=14.47, stdev=12.42 00:32:16.274 clat (msec): min=5, max=132, avg=30.65, stdev= 7.93 00:32:16.275 lat (msec): min=5, max=132, avg=30.66, stdev= 7.93 00:32:16.275 clat percentiles (msec): 00:32:16.275 | 1.00th=[ 17], 5.00th=[ 21], 10.00th=[ 22], 20.00th=[ 25], 00:32:16.275 | 30.00th=[ 29], 40.00th=[ 32], 50.00th=[ 32], 60.00th=[ 33], 00:32:16.275 | 70.00th=[ 33], 80.00th=[ 33], 90.00th=[ 37], 95.00th=[ 43], 00:32:16.275 | 99.00th=[ 52], 99.50th=[ 53], 99.90th=[ 133], 99.95th=[ 133], 00:32:16.275 | 99.99th=[ 133] 00:32:16.275 bw ( KiB/s): min= 1872, max= 2356, per=4.43%, avg=2094.60, stdev=142.63, samples=20 00:32:16.275 iops : min= 468, max= 589, avg=523.65, stdev=35.66, samples=20 00:32:16.275 lat (msec) : 10=0.55%, 20=2.84%, 50=95.10%, 100=1.28%, 250=0.23% 00:32:16.275 cpu : usr=97.33%, sys=1.47%, ctx=280, majf=0, minf=47 00:32:16.275 IO depths : 1=2.4%, 2=4.8%, 4=13.5%, 8=68.3%, 16=11.0%, 32=0.0%, >=64=0.0% 00:32:16.275 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:16.275 complete : 0=0.0%, 4=91.2%, 8=3.9%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:16.275 issued rwts: total=5248,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:16.275 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:16.275 filename2: (groupid=0, jobs=1): err= 0: 
pid=3322809: Fri Jun 7 16:40:41 2024 00:32:16.275 read: IOPS=491, BW=1967KiB/s (2014kB/s)(19.4MiB/10086msec) 00:32:16.275 slat (nsec): min=5835, max=91658, avg=15015.95, stdev=12770.69 00:32:16.275 clat (usec): min=19369, max=96678, avg=32407.19, stdev=4030.44 00:32:16.275 lat (usec): min=19378, max=96703, avg=32422.20, stdev=4030.94 00:32:16.275 clat percentiles (usec): 00:32:16.275 | 1.00th=[23725], 5.00th=[31065], 10.00th=[31327], 20.00th=[31589], 00:32:16.275 | 30.00th=[31851], 40.00th=[32113], 50.00th=[32113], 60.00th=[32375], 00:32:16.275 | 70.00th=[32637], 80.00th=[32900], 90.00th=[33424], 95.00th=[33817], 00:32:16.275 | 99.00th=[40109], 99.50th=[43779], 99.90th=[96994], 99.95th=[96994], 00:32:16.275 | 99.99th=[96994] 00:32:16.275 bw ( KiB/s): min= 1920, max= 2048, per=4.18%, avg=1977.60, stdev=63.87, samples=20 00:32:16.275 iops : min= 480, max= 512, avg=494.40, stdev=15.97, samples=20 00:32:16.275 lat (msec) : 20=0.06%, 50=99.62%, 100=0.32% 00:32:16.275 cpu : usr=99.20%, sys=0.52%, ctx=11, majf=0, minf=59 00:32:16.275 IO depths : 1=5.3%, 2=11.4%, 4=24.6%, 8=51.5%, 16=7.2%, 32=0.0%, >=64=0.0% 00:32:16.275 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:16.275 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:16.275 issued rwts: total=4960,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:16.275 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:16.275 filename2: (groupid=0, jobs=1): err= 0: pid=3322810: Fri Jun 7 16:40:41 2024 00:32:16.275 read: IOPS=472, BW=1889KiB/s (1934kB/s)(18.5MiB/10054msec) 00:32:16.275 slat (nsec): min=5814, max=75656, avg=17360.12, stdev=11989.68 00:32:16.275 clat (msec): min=16, max=105, avg=33.77, stdev= 6.74 00:32:16.275 lat (msec): min=16, max=105, avg=33.78, stdev= 6.74 00:32:16.275 clat percentiles (msec): 00:32:16.275 | 1.00th=[ 21], 5.00th=[ 27], 10.00th=[ 32], 20.00th=[ 32], 00:32:16.275 | 30.00th=[ 32], 40.00th=[ 33], 50.00th=[ 33], 60.00th=[ 33], 
00:32:16.275 | 70.00th=[ 33], 80.00th=[ 34], 90.00th=[ 41], 95.00th=[ 44], 00:32:16.275 | 99.00th=[ 58], 99.50th=[ 79], 99.90th=[ 101], 99.95th=[ 106], 00:32:16.275 | 99.99th=[ 106] 00:32:16.275 bw ( KiB/s): min= 1667, max= 2144, per=4.00%, avg=1892.35, stdev=108.87, samples=20 00:32:16.275 iops : min= 416, max= 536, avg=473.05, stdev=27.30, samples=20 00:32:16.275 lat (msec) : 20=0.95%, 50=97.22%, 100=1.77%, 250=0.06% 00:32:16.275 cpu : usr=97.76%, sys=1.16%, ctx=120, majf=0, minf=76 00:32:16.275 IO depths : 1=1.8%, 2=3.6%, 4=12.1%, 8=70.3%, 16=12.2%, 32=0.0%, >=64=0.0% 00:32:16.275 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:16.275 complete : 0=0.0%, 4=91.2%, 8=4.5%, 16=4.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:16.275 issued rwts: total=4747,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:16.275 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:16.275 filename2: (groupid=0, jobs=1): err= 0: pid=3322811: Fri Jun 7 16:40:41 2024 00:32:16.275 read: IOPS=493, BW=1973KiB/s (2020kB/s)(19.4MiB/10058msec) 00:32:16.275 slat (nsec): min=5807, max=74373, avg=15596.78, stdev=11437.01 00:32:16.275 clat (usec): min=13807, max=84443, avg=32278.41, stdev=3563.59 00:32:16.275 lat (usec): min=13813, max=84450, avg=32294.00, stdev=3563.71 00:32:16.275 clat percentiles (usec): 00:32:16.275 | 1.00th=[22152], 5.00th=[30802], 10.00th=[31327], 20.00th=[31589], 00:32:16.275 | 30.00th=[31851], 40.00th=[31851], 50.00th=[32113], 60.00th=[32113], 00:32:16.275 | 70.00th=[32375], 80.00th=[32637], 90.00th=[33162], 95.00th=[33817], 00:32:16.275 | 99.00th=[42730], 99.50th=[53740], 99.90th=[74974], 99.95th=[74974], 00:32:16.275 | 99.99th=[84411] 00:32:16.275 bw ( KiB/s): min= 1792, max= 2048, per=4.18%, avg=1976.35, stdev=76.71, samples=20 00:32:16.275 iops : min= 448, max= 512, avg=494.05, stdev=19.15, samples=20 00:32:16.275 lat (msec) : 20=0.44%, 50=98.59%, 100=0.97% 00:32:16.275 cpu : usr=96.01%, sys=2.02%, ctx=79, majf=0, minf=47 00:32:16.275 IO depths 
: 1=6.0%, 2=12.0%, 4=24.3%, 8=51.1%, 16=6.6%, 32=0.0%, >=64=0.0% 00:32:16.275 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:16.275 complete : 0=0.0%, 4=93.9%, 8=0.2%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:16.275 issued rwts: total=4960,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:16.275 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:16.275 filename2: (groupid=0, jobs=1): err= 0: pid=3322812: Fri Jun 7 16:40:41 2024 00:32:16.275 read: IOPS=489, BW=1956KiB/s (2003kB/s)(19.2MiB/10055msec) 00:32:16.275 slat (nsec): min=5802, max=86238, avg=18307.53, stdev=13190.97 00:32:16.275 clat (usec): min=14968, max=96820, avg=32588.94, stdev=5537.51 00:32:16.275 lat (usec): min=14990, max=96839, avg=32607.25, stdev=5537.73 00:32:16.275 clat percentiles (usec): 00:32:16.275 | 1.00th=[21103], 5.00th=[26084], 10.00th=[31065], 20.00th=[31589], 00:32:16.275 | 30.00th=[31851], 40.00th=[32113], 50.00th=[32113], 60.00th=[32375], 00:32:16.275 | 70.00th=[32637], 80.00th=[33162], 90.00th=[33817], 95.00th=[39060], 00:32:16.275 | 99.00th=[49546], 99.50th=[56361], 99.90th=[96994], 99.95th=[96994], 00:32:16.275 | 99.99th=[96994] 00:32:16.275 bw ( KiB/s): min= 1781, max= 2096, per=4.15%, avg=1960.40, stdev=78.96, samples=20 00:32:16.275 iops : min= 445, max= 524, avg=490.05, stdev=19.85, samples=20 00:32:16.275 lat (msec) : 20=0.79%, 50=98.29%, 100=0.92% 00:32:16.275 cpu : usr=99.07%, sys=0.63%, ctx=8, majf=0, minf=44 00:32:16.275 IO depths : 1=1.8%, 2=3.9%, 4=17.6%, 8=65.3%, 16=11.5%, 32=0.0%, >=64=0.0% 00:32:16.275 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:16.275 complete : 0=0.0%, 4=93.1%, 8=1.9%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:16.275 issued rwts: total=4918,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:16.275 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:16.275 filename2: (groupid=0, jobs=1): err= 0: pid=3322813: Fri Jun 7 16:40:41 2024 00:32:16.275 read: IOPS=488, BW=1955KiB/s 
(2002kB/s)(19.2MiB/10049msec) 00:32:16.275 slat (nsec): min=5893, max=87048, avg=21784.91, stdev=14386.32 00:32:16.275 clat (usec): min=30324, max=99515, avg=32528.10, stdev=4112.86 00:32:16.275 lat (usec): min=30335, max=99539, avg=32549.88, stdev=4112.13 00:32:16.275 clat percentiles (usec): 00:32:16.275 | 1.00th=[30802], 5.00th=[31065], 10.00th=[31327], 20.00th=[31589], 00:32:16.275 | 30.00th=[31851], 40.00th=[31851], 50.00th=[32113], 60.00th=[32375], 00:32:16.275 | 70.00th=[32375], 80.00th=[32900], 90.00th=[33424], 95.00th=[33817], 00:32:16.275 | 99.00th=[35914], 99.50th=[54789], 99.90th=[99091], 99.95th=[99091], 00:32:16.275 | 99.99th=[99091] 00:32:16.275 bw ( KiB/s): min= 1792, max= 2048, per=4.14%, avg=1958.35, stdev=83.88, samples=20 00:32:16.275 iops : min= 448, max= 512, avg=489.55, stdev=21.05, samples=20 00:32:16.275 lat (msec) : 50=99.35%, 100=0.65% 00:32:16.275 cpu : usr=99.01%, sys=0.59%, ctx=148, majf=0, minf=35 00:32:16.275 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:32:16.275 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:16.275 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:16.275 issued rwts: total=4912,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:16.275 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:16.275 filename2: (groupid=0, jobs=1): err= 0: pid=3322814: Fri Jun 7 16:40:41 2024 00:32:16.275 read: IOPS=488, BW=1955KiB/s (2002kB/s)(19.2MiB/10080msec) 00:32:16.275 slat (nsec): min=5807, max=79146, avg=19408.35, stdev=14575.90 00:32:16.275 clat (usec): min=18803, max=98797, avg=32573.06, stdev=4904.86 00:32:16.275 lat (usec): min=18811, max=98804, avg=32592.47, stdev=4904.28 00:32:16.275 clat percentiles (usec): 00:32:16.275 | 1.00th=[21890], 5.00th=[30540], 10.00th=[31065], 20.00th=[31589], 00:32:16.275 | 30.00th=[31851], 40.00th=[31851], 50.00th=[32113], 60.00th=[32375], 00:32:16.275 | 70.00th=[32637], 80.00th=[32900], 
90.00th=[33817], 95.00th=[35914], 00:32:16.275 | 99.00th=[46924], 99.50th=[49546], 99.90th=[99091], 99.95th=[99091], 00:32:16.275 | 99.99th=[99091] 00:32:16.275 bw ( KiB/s): min= 1792, max= 2048, per=4.16%, avg=1964.40, stdev=75.69, samples=20 00:32:16.275 iops : min= 448, max= 512, avg=491.10, stdev=18.92, samples=20 00:32:16.275 lat (msec) : 20=0.30%, 50=99.29%, 100=0.41% 00:32:16.275 cpu : usr=99.10%, sys=0.63%, ctx=12, majf=0, minf=60 00:32:16.275 IO depths : 1=4.3%, 2=9.0%, 4=21.1%, 8=56.6%, 16=8.9%, 32=0.0%, >=64=0.0% 00:32:16.275 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:16.275 complete : 0=0.0%, 4=93.5%, 8=1.3%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:16.275 issued rwts: total=4927,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:16.275 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:16.275 filename2: (groupid=0, jobs=1): err= 0: pid=3322815: Fri Jun 7 16:40:41 2024 00:32:16.275 read: IOPS=494, BW=1978KiB/s (2025kB/s)(19.3MiB/10010msec) 00:32:16.275 slat (nsec): min=5848, max=62298, avg=17028.08, stdev=11229.97 00:32:16.275 clat (usec): min=21208, max=51211, avg=32199.95, stdev=1761.50 00:32:16.275 lat (usec): min=21214, max=51221, avg=32216.98, stdev=1761.51 00:32:16.275 clat percentiles (usec): 00:32:16.275 | 1.00th=[24511], 5.00th=[31065], 10.00th=[31327], 20.00th=[31589], 00:32:16.275 | 30.00th=[31851], 40.00th=[31851], 50.00th=[32113], 60.00th=[32113], 00:32:16.275 | 70.00th=[32375], 80.00th=[32900], 90.00th=[33162], 95.00th=[33817], 00:32:16.275 | 99.00th=[36963], 99.50th=[42206], 99.90th=[50070], 99.95th=[50070], 00:32:16.275 | 99.99th=[51119] 00:32:16.275 bw ( KiB/s): min= 1920, max= 2096, per=4.20%, avg=1983.16, stdev=69.23, samples=19 00:32:16.275 iops : min= 480, max= 524, avg=495.79, stdev=17.31, samples=19 00:32:16.275 lat (msec) : 50=99.80%, 100=0.20% 00:32:16.275 cpu : usr=99.13%, sys=0.59%, ctx=27, majf=0, minf=52 00:32:16.275 IO depths : 1=6.1%, 2=12.3%, 4=24.7%, 8=50.5%, 16=6.4%, 32=0.0%, 
>=64=0.0% 00:32:16.276 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:16.276 complete : 0=0.0%, 4=94.0%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:16.276 issued rwts: total=4950,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:16.276 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:16.276 filename2: (groupid=0, jobs=1): err= 0: pid=3322816: Fri Jun 7 16:40:41 2024 00:32:16.276 read: IOPS=488, BW=1954KiB/s (2001kB/s)(19.2MiB/10054msec) 00:32:16.276 slat (nsec): min=5885, max=62680, avg=17367.25, stdev=10950.23 00:32:16.276 clat (usec): min=23350, max=99441, avg=32560.89, stdev=4309.83 00:32:16.276 lat (usec): min=23358, max=99459, avg=32578.26, stdev=4309.76 00:32:16.276 clat percentiles (usec): 00:32:16.276 | 1.00th=[30540], 5.00th=[31065], 10.00th=[31327], 20.00th=[31589], 00:32:16.276 | 30.00th=[31851], 40.00th=[31851], 50.00th=[32113], 60.00th=[32113], 00:32:16.276 | 70.00th=[32375], 80.00th=[32900], 90.00th=[33424], 95.00th=[33817], 00:32:16.276 | 99.00th=[35914], 99.50th=[65799], 99.90th=[99091], 99.95th=[99091], 00:32:16.276 | 99.99th=[99091] 00:32:16.276 bw ( KiB/s): min= 1792, max= 2048, per=4.14%, avg=1958.30, stdev=83.50, samples=20 00:32:16.276 iops : min= 448, max= 512, avg=489.50, stdev=20.91, samples=20 00:32:16.276 lat (msec) : 50=99.35%, 100=0.65% 00:32:16.276 cpu : usr=98.83%, sys=0.77%, ctx=61, majf=0, minf=69 00:32:16.276 IO depths : 1=6.1%, 2=12.3%, 4=25.0%, 8=50.2%, 16=6.4%, 32=0.0%, >=64=0.0% 00:32:16.276 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:16.276 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:16.276 issued rwts: total=4912,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:16.276 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:16.276 00:32:16.276 Run status group 0 (all jobs): 00:32:16.276 READ: bw=46.1MiB/s (48.4MB/s), 1889KiB/s-2191KiB/s (1934kB/s-2244kB/s), io=467MiB (489MB), run=10010-10111msec 00:32:16.276 
16:40:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:32:16.276 16:40:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:32:16.276 16:40:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:32:16.276 16:40:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:32:16.276 16:40:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:32:16.276 16:40:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:32:16.276 16:40:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:16.276 16:40:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:16.276 16:40:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:16.276 16:40:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:32:16.276 16:40:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:16.276 16:40:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:16.276 16:40:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:16.276 16:40:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:32:16.276 16:40:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:32:16.276 16:40:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:32:16.276 16:40:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:16.276 16:40:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:16.276 16:40:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:16.276 16:40:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- 
# [[ 0 == 0 ]] 00:32:16.276 16:40:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:32:16.276 16:40:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:16.276 16:40:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:16.276 16:40:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:16.276 16:40:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:32:16.276 16:40:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:32:16.276 16:40:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:32:16.276 16:40:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:32:16.276 16:40:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:16.276 16:40:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:16.276 16:40:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:16.276 16:40:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:32:16.276 16:40:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:16.276 16:40:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:16.276 16:40:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:16.276 16:40:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:32:16.276 16:40:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:32:16.276 16:40:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:32:16.276 16:40:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:32:16.276 16:40:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 
00:32:16.276 16:40:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:32:16.276 16:40:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:32:16.276 16:40:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:32:16.276 16:40:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:32:16.276 16:40:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:32:16.276 16:40:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:32:16.276 16:40:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:32:16.276 16:40:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:16.276 16:40:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:16.276 bdev_null0 00:32:16.276 16:40:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:16.276 16:40:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:32:16.276 16:40:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:16.276 16:40:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:16.276 16:40:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:16.276 16:40:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:32:16.276 16:40:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:16.276 16:40:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:16.276 16:40:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:16.276 16:40:41 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:16.276 16:40:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:16.276 16:40:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:16.276 [2024-06-07 16:40:41.615055] tcp.c: 982:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:16.276 16:40:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:16.276 16:40:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:32:16.276 16:40:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:32:16.276 16:40:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:32:16.276 16:40:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:32:16.276 16:40:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:16.276 16:40:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:16.276 bdev_null1 00:32:16.276 16:40:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:16.276 16:40:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:32:16.276 16:40:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:16.276 16:40:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:16.276 16:40:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:16.276 16:40:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:32:16.276 16:40:41 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@560 -- # xtrace_disable 00:32:16.276 16:40:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:16.276 16:40:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:16.276 16:40:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:16.276 16:40:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:16.276 16:40:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:16.276 16:40:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:16.276 16:40:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:32:16.276 16:40:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:32:16.276 16:40:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:32:16.276 16:40:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:32:16.276 16:40:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:16.276 16:40:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:32:16.276 16:40:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:32:16.276 16:40:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1355 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:16.276 16:40:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:32:16.276 { 00:32:16.276 "params": { 00:32:16.276 "name": "Nvme$subsystem", 00:32:16.276 "trtype": "$TEST_TRANSPORT", 00:32:16.276 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:16.276 "adrfam": "ipv4", 00:32:16.276 
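Annotation (not part of the captured log): the trace above shows dif.sh's `create_subsystem` helper issuing four RPCs per null bdev, i.e. `bdev_null_create`, `nvmf_create_subsystem`, `nvmf_subsystem_add_ns`, and `nvmf_subsystem_add_listener`. A minimal sketch of that sequence follows; the `rpc.py` invocations are echoed rather than executed, since no running SPDK target is assumed here, and the `rpc` wrapper function is a stand-in for the real `rpc_cmd` helper.

```shell
#!/bin/sh
# Sketch of the dif.sh create_subsystem sequence seen in the trace.
# rpc.py calls are echoed, not executed (no live SPDK target assumed).
rpc() { echo "rpc.py $*"; }

sub_id=0
# 1. Create a null bdev with 16-byte metadata and DIF type 1
rpc bdev_null_create "bdev_null${sub_id}" 64 512 --md-size 16 --dif-type 1
# 2. Create the NVMe-oF subsystem
rpc nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode${sub_id}" \
    --serial-number "53313233-${sub_id}" --allow-any-host
# 3. Attach the bdev as a namespace
rpc nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode${sub_id}" "bdev_null${sub_id}"
# 4. Expose the subsystem on a TCP listener
rpc nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode${sub_id}" \
    -t tcp -a 10.0.0.2 -s 4420
```

Teardown (`destroy_subsystem`, visible earlier in the log) reverses this with `nvmf_delete_subsystem` followed by `bdev_null_delete`.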
"trsvcid": "$NVMF_PORT", 00:32:16.276 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:16.276 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:16.276 "hdgst": ${hdgst:-false}, 00:32:16.276 "ddgst": ${ddgst:-false} 00:32:16.276 }, 00:32:16.276 "method": "bdev_nvme_attach_controller" 00:32:16.276 } 00:32:16.276 EOF 00:32:16.276 )") 00:32:16.276 16:40:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:32:16.277 16:40:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1336 -- # local fio_dir=/usr/src/fio 00:32:16.277 16:40:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1338 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:16.277 16:40:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:32:16.277 16:40:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1338 -- # local sanitizers 00:32:16.277 16:40:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:32:16.277 16:40:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:16.277 16:40:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # shift 00:32:16.277 16:40:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # local asan_lib= 00:32:16.277 16:40:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:32:16.277 16:40:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:32:16.277 16:40:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:16.277 16:40:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:32:16.277 16:40:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:32:16.277 16:40:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:32:16.277 16:40:41 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # grep libasan 00:32:16.277 16:40:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:32:16.277 16:40:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:32:16.277 16:40:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:32:16.277 { 00:32:16.277 "params": { 00:32:16.277 "name": "Nvme$subsystem", 00:32:16.277 "trtype": "$TEST_TRANSPORT", 00:32:16.277 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:16.277 "adrfam": "ipv4", 00:32:16.277 "trsvcid": "$NVMF_PORT", 00:32:16.277 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:16.277 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:16.277 "hdgst": ${hdgst:-false}, 00:32:16.277 "ddgst": ${ddgst:-false} 00:32:16.277 }, 00:32:16.277 "method": "bdev_nvme_attach_controller" 00:32:16.277 } 00:32:16.277 EOF 00:32:16.277 )") 00:32:16.277 16:40:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:32:16.277 16:40:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:32:16.277 16:40:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:32:16.277 16:40:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
00:32:16.277 16:40:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:32:16.277 16:40:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:32:16.277 "params": { 00:32:16.277 "name": "Nvme0", 00:32:16.277 "trtype": "tcp", 00:32:16.277 "traddr": "10.0.0.2", 00:32:16.277 "adrfam": "ipv4", 00:32:16.277 "trsvcid": "4420", 00:32:16.277 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:16.277 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:16.277 "hdgst": false, 00:32:16.277 "ddgst": false 00:32:16.277 }, 00:32:16.277 "method": "bdev_nvme_attach_controller" 00:32:16.277 },{ 00:32:16.277 "params": { 00:32:16.277 "name": "Nvme1", 00:32:16.277 "trtype": "tcp", 00:32:16.277 "traddr": "10.0.0.2", 00:32:16.277 "adrfam": "ipv4", 00:32:16.277 "trsvcid": "4420", 00:32:16.277 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:16.277 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:16.277 "hdgst": false, 00:32:16.277 "ddgst": false 00:32:16.277 }, 00:32:16.277 "method": "bdev_nvme_attach_controller" 00:32:16.277 }' 00:32:16.277 16:40:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # asan_lib= 00:32:16.277 16:40:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:32:16.277 16:40:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:32:16.277 16:40:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:16.277 16:40:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # grep libclang_rt.asan 00:32:16.277 16:40:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:32:16.277 16:40:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # asan_lib= 00:32:16.277 16:40:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:32:16.277 16:40:41 
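Annotation (not part of the captured log): the `gen_nvmf_target_json` trace above builds one `bdev_nvme_attach_controller` JSON fragment per subsystem from a heredoc, then joins them with `jq`. A minimal standalone sketch of the per-subsystem heredoc follows; the environment variable values are filled in with the ones visible in the final `printf` output (tcp / 10.0.0.2 / 4420), and `hdgst`/`ddgst` fall back to `false` via `${var:-false}` exactly as in the trace.

```shell
#!/bin/sh
# Sketch of one gen_nvmf_target_json config fragment, with the values
# observed in the log substituted for the environment variables.
TEST_TRANSPORT=tcp
NVMF_FIRST_TARGET_IP=10.0.0.2
NVMF_PORT=4420
subsystem=0

# hdgst/ddgst are unset here, so the :-false default applies
config=$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)
echo "$config"
```

In the real helper this fragment is accumulated into a `config` array for each subsystem id and fed to fio_bdev via `/dev/fd/62`; the fio_dif_digest test later in the log runs the same template with `hdgst=true ddgst=true` set.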
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1351 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:32:16.277 16:40:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1351 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:16.277 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:32:16.277 ... 00:32:16.277 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:32:16.277 ... 00:32:16.277 fio-3.35 00:32:16.277 Starting 4 threads 00:32:16.277 EAL: No free 2048 kB hugepages reported on node 1 00:32:21.556 00:32:21.556 filename0: (groupid=0, jobs=1): err= 0: pid=3325173: Fri Jun 7 16:40:47 2024 00:32:21.556 read: IOPS=2128, BW=16.6MiB/s (17.4MB/s)(83.2MiB/5004msec) 00:32:21.556 slat (nsec): min=8186, max=56070, avg=10210.57, stdev=3373.75 00:32:21.556 clat (usec): min=1150, max=6604, avg=3731.76, stdev=485.10 00:32:21.556 lat (usec): min=1159, max=6620, avg=3741.97, stdev=484.79 00:32:21.556 clat percentiles (usec): 00:32:21.556 | 1.00th=[ 2442], 5.00th=[ 3064], 10.00th=[ 3326], 20.00th=[ 3523], 00:32:21.556 | 30.00th=[ 3556], 40.00th=[ 3687], 50.00th=[ 3752], 60.00th=[ 3752], 00:32:21.556 | 70.00th=[ 3785], 80.00th=[ 3818], 90.00th=[ 4178], 95.00th=[ 4686], 00:32:21.556 | 99.00th=[ 5473], 99.50th=[ 5604], 99.90th=[ 5932], 99.95th=[ 5997], 00:32:21.556 | 99.99th=[ 6587] 00:32:21.556 bw ( KiB/s): min=16560, max=17664, per=25.55%, avg=17027.20, stdev=290.54, samples=10 00:32:21.556 iops : min= 2070, max= 2208, avg=2128.40, stdev=36.32, samples=10 00:32:21.556 lat (msec) : 2=0.48%, 4=86.76%, 10=12.76% 00:32:21.556 cpu : usr=97.50%, sys=2.18%, ctx=34, majf=0, minf=0 00:32:21.556 IO depths : 1=0.2%, 2=0.8%, 4=70.1%, 8=28.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:21.556 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:32:21.556 complete : 0=0.0%, 4=93.4%, 8=6.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:21.556 issued rwts: total=10649,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:21.556 latency : target=0, window=0, percentile=100.00%, depth=8 00:32:21.556 filename0: (groupid=0, jobs=1): err= 0: pid=3325174: Fri Jun 7 16:40:47 2024 00:32:21.556 read: IOPS=2065, BW=16.1MiB/s (16.9MB/s)(80.7MiB/5002msec) 00:32:21.556 slat (nsec): min=5620, max=51312, avg=8501.06, stdev=3450.76 00:32:21.556 clat (usec): min=2298, max=45477, avg=3850.65, stdev=1276.74 00:32:21.556 lat (usec): min=2306, max=45509, avg=3859.15, stdev=1276.83 00:32:21.556 clat percentiles (usec): 00:32:21.556 | 1.00th=[ 2868], 5.00th=[ 3195], 10.00th=[ 3392], 20.00th=[ 3523], 00:32:21.556 | 30.00th=[ 3654], 40.00th=[ 3687], 50.00th=[ 3752], 60.00th=[ 3752], 00:32:21.556 | 70.00th=[ 3785], 80.00th=[ 3818], 90.00th=[ 4490], 95.00th=[ 5342], 00:32:21.556 | 99.00th=[ 5669], 99.50th=[ 5735], 99.90th=[ 5997], 99.95th=[45351], 00:32:21.556 | 99.99th=[45351] 00:32:21.556 bw ( KiB/s): min=15216, max=17056, per=24.73%, avg=16481.78, stdev=528.94, samples=9 00:32:21.556 iops : min= 1902, max= 2132, avg=2060.22, stdev=66.12, samples=9 00:32:21.556 lat (msec) : 4=84.53%, 10=15.39%, 50=0.08% 00:32:21.556 cpu : usr=97.26%, sys=2.46%, ctx=10, majf=0, minf=0 00:32:21.556 IO depths : 1=0.2%, 2=0.8%, 4=70.6%, 8=28.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:21.556 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:21.556 complete : 0=0.0%, 4=93.3%, 8=6.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:21.556 issued rwts: total=10330,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:21.556 latency : target=0, window=0, percentile=100.00%, depth=8 00:32:21.556 filename1: (groupid=0, jobs=1): err= 0: pid=3325175: Fri Jun 7 16:40:47 2024 00:32:21.556 read: IOPS=2098, BW=16.4MiB/s (17.2MB/s)(82.0MiB/5002msec) 00:32:21.556 slat (nsec): min=5615, max=42848, avg=8211.78, stdev=3360.49 00:32:21.556 clat (usec): min=1445, max=7214, 
avg=3791.14, stdev=510.79 00:32:21.556 lat (usec): min=1451, max=7239, avg=3799.35, stdev=510.77 00:32:21.556 clat percentiles (usec): 00:32:21.556 | 1.00th=[ 2638], 5.00th=[ 3163], 10.00th=[ 3392], 20.00th=[ 3523], 00:32:21.556 | 30.00th=[ 3589], 40.00th=[ 3687], 50.00th=[ 3752], 60.00th=[ 3785], 00:32:21.556 | 70.00th=[ 3785], 80.00th=[ 3818], 90.00th=[ 4359], 95.00th=[ 4948], 00:32:21.556 | 99.00th=[ 5669], 99.50th=[ 5735], 99.90th=[ 6259], 99.95th=[ 6718], 00:32:21.556 | 99.99th=[ 7111] 00:32:21.556 bw ( KiB/s): min=16528, max=17104, per=25.21%, avg=16803.56, stdev=204.87, samples=9 00:32:21.556 iops : min= 2066, max= 2138, avg=2100.44, stdev=25.61, samples=9 00:32:21.556 lat (msec) : 2=0.10%, 4=85.18%, 10=14.72% 00:32:21.556 cpu : usr=97.00%, sys=2.72%, ctx=8, majf=0, minf=9 00:32:21.556 IO depths : 1=0.2%, 2=0.8%, 4=69.6%, 8=29.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:21.556 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:21.556 complete : 0=0.0%, 4=94.0%, 8=6.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:21.556 issued rwts: total=10495,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:21.556 latency : target=0, window=0, percentile=100.00%, depth=8 00:32:21.556 filename1: (groupid=0, jobs=1): err= 0: pid=3325176: Fri Jun 7 16:40:47 2024 00:32:21.556 read: IOPS=2088, BW=16.3MiB/s (17.1MB/s)(82.2MiB/5042msec) 00:32:21.556 slat (nsec): min=5620, max=49175, avg=8329.15, stdev=3435.30 00:32:21.556 clat (usec): min=1832, max=41962, avg=3789.24, stdev=816.83 00:32:21.556 lat (usec): min=1844, max=41969, avg=3797.57, stdev=816.78 00:32:21.556 clat percentiles (usec): 00:32:21.556 | 1.00th=[ 2737], 5.00th=[ 3163], 10.00th=[ 3359], 20.00th=[ 3523], 00:32:21.556 | 30.00th=[ 3589], 40.00th=[ 3687], 50.00th=[ 3752], 60.00th=[ 3752], 00:32:21.556 | 70.00th=[ 3785], 80.00th=[ 3818], 90.00th=[ 4293], 95.00th=[ 4883], 00:32:21.556 | 99.00th=[ 5669], 99.50th=[ 5735], 99.90th=[ 7832], 99.95th=[ 7898], 00:32:21.556 | 99.99th=[42206] 00:32:21.556 bw ( KiB/s): 
min=16224, max=17056, per=25.27%, avg=16843.30, stdev=238.02, samples=10 00:32:21.556 iops : min= 2028, max= 2132, avg=2105.40, stdev=29.76, samples=10 00:32:21.556 lat (msec) : 2=0.02%, 4=85.73%, 10=14.22%, 50=0.03% 00:32:21.556 cpu : usr=97.12%, sys=2.60%, ctx=14, majf=0, minf=0 00:32:21.556 IO depths : 1=0.2%, 2=0.9%, 4=70.9%, 8=28.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:21.556 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:21.556 complete : 0=0.0%, 4=92.8%, 8=7.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:21.556 issued rwts: total=10528,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:21.556 latency : target=0, window=0, percentile=100.00%, depth=8 00:32:21.556 00:32:21.556 Run status group 0 (all jobs): 00:32:21.556 READ: bw=65.1MiB/s (68.2MB/s), 16.1MiB/s-16.6MiB/s (16.9MB/s-17.4MB/s), io=328MiB (344MB), run=5002-5042msec 00:32:21.556 16:40:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:32:21.556 16:40:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:32:21.556 16:40:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:32:21.556 16:40:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:32:21.556 16:40:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:32:21.556 16:40:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:32:21.556 16:40:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:21.556 16:40:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:21.556 16:40:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:21.556 16:40:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:32:21.556 16:40:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:21.556 
16:40:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:21.556 16:40:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:21.556 16:40:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:32:21.556 16:40:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:32:21.556 16:40:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:32:21.556 16:40:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:21.556 16:40:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:21.556 16:40:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:21.556 16:40:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:21.556 16:40:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:32:21.556 16:40:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:21.557 16:40:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:21.557 16:40:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:21.557 00:32:21.557 real 0m24.456s 00:32:21.557 user 5m17.931s 00:32:21.557 sys 0m4.014s 00:32:21.557 16:40:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1125 -- # xtrace_disable 00:32:21.557 16:40:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:21.557 ************************************ 00:32:21.557 END TEST fio_dif_rand_params 00:32:21.557 ************************************ 00:32:21.557 16:40:48 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:32:21.557 16:40:48 nvmf_dif -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:32:21.557 16:40:48 nvmf_dif -- common/autotest_common.sh@1106 -- # 
xtrace_disable 00:32:21.557 16:40:48 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:32:21.557 ************************************ 00:32:21.557 START TEST fio_dif_digest 00:32:21.557 ************************************ 00:32:21.557 16:40:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1124 -- # fio_dif_digest 00:32:21.557 16:40:48 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:32:21.557 16:40:48 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:32:21.557 16:40:48 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:32:21.557 16:40:48 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:32:21.557 16:40:48 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:32:21.557 16:40:48 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:32:21.557 16:40:48 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:32:21.557 16:40:48 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:32:21.557 16:40:48 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:32:21.557 16:40:48 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:32:21.557 16:40:48 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:32:21.557 16:40:48 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:32:21.557 16:40:48 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:32:21.557 16:40:48 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:32:21.557 16:40:48 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:32:21.557 16:40:48 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:32:21.557 16:40:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:21.557 16:40:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:32:21.557 bdev_null0 
00:32:21.557 16:40:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:21.557 16:40:48 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:32:21.557 16:40:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:21.557 16:40:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:32:21.557 16:40:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:21.557 16:40:48 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:32:21.557 16:40:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:21.557 16:40:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:32:21.557 16:40:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:21.557 16:40:48 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:21.557 16:40:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:21.557 16:40:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:32:21.557 [2024-06-07 16:40:48.188284] tcp.c: 982:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:21.557 16:40:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:21.557 16:40:48 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:32:21.557 16:40:48 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:32:21.557 16:40:48 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:32:21.557 16:40:48 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # config=() 00:32:21.557 16:40:48 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # 
fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:21.557 16:40:48 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # local subsystem config 00:32:21.557 16:40:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1355 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:21.557 16:40:48 nvmf_dif.fio_dif_digest -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:32:21.557 16:40:48 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:32:21.557 { 00:32:21.557 "params": { 00:32:21.557 "name": "Nvme$subsystem", 00:32:21.557 "trtype": "$TEST_TRANSPORT", 00:32:21.557 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:21.557 "adrfam": "ipv4", 00:32:21.557 "trsvcid": "$NVMF_PORT", 00:32:21.557 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:21.557 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:21.557 "hdgst": ${hdgst:-false}, 00:32:21.557 "ddgst": ${ddgst:-false} 00:32:21.557 }, 00:32:21.557 "method": "bdev_nvme_attach_controller" 00:32:21.557 } 00:32:21.557 EOF 00:32:21.557 )") 00:32:21.557 16:40:48 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:32:21.557 16:40:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1336 -- # local fio_dir=/usr/src/fio 00:32:21.557 16:40:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1338 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:21.557 16:40:48 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:32:21.557 16:40:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1338 -- # local sanitizers 00:32:21.557 16:40:48 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:32:21.557 16:40:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:21.557 16:40:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # shift 00:32:21.557 
16:40:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1342 -- # local asan_lib= 00:32:21.557 16:40:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:32:21.557 16:40:48 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # cat 00:32:21.557 16:40:48 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:32:21.557 16:40:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:21.557 16:40:48 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:32:21.557 16:40:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # grep libasan 00:32:21.557 16:40:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:32:21.557 16:40:48 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # jq . 00:32:21.557 16:40:48 nvmf_dif.fio_dif_digest -- nvmf/common.sh@557 -- # IFS=, 00:32:21.557 16:40:48 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:32:21.557 "params": { 00:32:21.557 "name": "Nvme0", 00:32:21.557 "trtype": "tcp", 00:32:21.557 "traddr": "10.0.0.2", 00:32:21.557 "adrfam": "ipv4", 00:32:21.557 "trsvcid": "4420", 00:32:21.557 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:21.557 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:21.557 "hdgst": true, 00:32:21.557 "ddgst": true 00:32:21.557 }, 00:32:21.557 "method": "bdev_nvme_attach_controller" 00:32:21.557 }' 00:32:21.557 16:40:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # asan_lib= 00:32:21.557 16:40:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:32:21.557 16:40:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:32:21.557 16:40:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:21.557 16:40:48 
nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # grep libclang_rt.asan 00:32:21.557 16:40:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:32:21.557 16:40:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # asan_lib= 00:32:21.557 16:40:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:32:21.557 16:40:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1351 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:32:21.557 16:40:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1351 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:21.817 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:32:21.817 ... 00:32:21.817 fio-3.35 00:32:21.817 Starting 3 threads 00:32:21.817 EAL: No free 2048 kB hugepages reported on node 1 00:32:34.198 00:32:34.198 filename0: (groupid=0, jobs=1): err= 0: pid=3326518: Fri Jun 7 16:40:59 2024 00:32:34.198 read: IOPS=212, BW=26.5MiB/s (27.8MB/s)(267MiB/10047msec) 00:32:34.198 slat (nsec): min=5994, max=33940, avg=6767.80, stdev=941.02 00:32:34.198 clat (usec): min=7487, max=95757, avg=14107.79, stdev=6077.50 00:32:34.198 lat (usec): min=7494, max=95764, avg=14114.56, stdev=6077.51 00:32:34.198 clat percentiles (usec): 00:32:34.198 | 1.00th=[ 8586], 5.00th=[10159], 10.00th=[10683], 20.00th=[11469], 00:32:34.198 | 30.00th=[12518], 40.00th=[13304], 50.00th=[13698], 60.00th=[14222], 00:32:34.198 | 70.00th=[14615], 80.00th=[15008], 90.00th=[15664], 95.00th=[16319], 00:32:34.198 | 99.00th=[54789], 99.50th=[55837], 99.90th=[57410], 99.95th=[95945], 00:32:34.198 | 99.99th=[95945] 00:32:34.198 bw ( KiB/s): min=23040, max=31744, per=35.60%, avg=27251.20, stdev=2165.03, samples=20 00:32:34.198 iops : min= 180, max= 248, avg=212.90, stdev=16.91, samples=20 00:32:34.198 lat (msec) : 10=4.08%, 20=94.23%, 
50=0.05%, 100=1.64% 00:32:34.198 cpu : usr=96.09%, sys=3.66%, ctx=20, majf=0, minf=135 00:32:34.198 IO depths : 1=0.5%, 2=99.5%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:34.198 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:34.198 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:34.198 issued rwts: total=2132,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:34.198 latency : target=0, window=0, percentile=100.00%, depth=3 00:32:34.198 filename0: (groupid=0, jobs=1): err= 0: pid=3326519: Fri Jun 7 16:40:59 2024 00:32:34.198 read: IOPS=193, BW=24.2MiB/s (25.4MB/s)(243MiB/10048msec) 00:32:34.198 slat (nsec): min=6004, max=32035, avg=6726.02, stdev=1101.53 00:32:34.198 clat (usec): min=7386, max=96049, avg=15465.85, stdev=9596.05 00:32:34.198 lat (usec): min=7392, max=96056, avg=15472.57, stdev=9596.03 00:32:34.198 clat percentiles (usec): 00:32:34.198 | 1.00th=[ 9503], 5.00th=[10421], 10.00th=[10945], 20.00th=[12125], 00:32:34.198 | 30.00th=[12780], 40.00th=[13304], 50.00th=[13698], 60.00th=[13960], 00:32:34.198 | 70.00th=[14353], 80.00th=[14877], 90.00th=[15533], 95.00th=[17957], 00:32:34.198 | 99.00th=[55837], 99.50th=[56361], 99.90th=[95945], 99.95th=[95945], 00:32:34.198 | 99.99th=[95945] 00:32:34.198 bw ( KiB/s): min=19712, max=29184, per=32.49%, avg=24870.40, stdev=2336.37, samples=20 00:32:34.198 iops : min= 154, max= 228, avg=194.30, stdev=18.25, samples=20 00:32:34.198 lat (msec) : 10=2.98%, 20=92.03%, 50=0.05%, 100=4.94% 00:32:34.198 cpu : usr=96.23%, sys=3.54%, ctx=25, majf=0, minf=135 00:32:34.198 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:34.198 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:34.198 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:34.198 issued rwts: total=1945,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:34.198 latency : target=0, window=0, percentile=100.00%, depth=3 00:32:34.198 
filename0: (groupid=0, jobs=1): err= 0: pid=3326520: Fri Jun 7 16:40:59 2024 00:32:34.198 read: IOPS=192, BW=24.0MiB/s (25.2MB/s)(242MiB/10045msec) 00:32:34.198 slat (nsec): min=5984, max=31440, avg=6854.09, stdev=1095.05 00:32:34.198 clat (usec): min=7165, max=96784, avg=15588.06, stdev=8638.85 00:32:34.198 lat (usec): min=7171, max=96791, avg=15594.91, stdev=8638.86 00:32:34.198 clat percentiles (usec): 00:32:34.198 | 1.00th=[ 9241], 5.00th=[10421], 10.00th=[11076], 20.00th=[12387], 00:32:34.198 | 30.00th=[13304], 40.00th=[13960], 50.00th=[14353], 60.00th=[14746], 00:32:34.198 | 70.00th=[15139], 80.00th=[15795], 90.00th=[16581], 95.00th=[17433], 00:32:34.198 | 99.00th=[56886], 99.50th=[56886], 99.90th=[95945], 99.95th=[96994], 00:32:34.198 | 99.99th=[96994] 00:32:34.198 bw ( KiB/s): min=15104, max=28672, per=32.26%, avg=24693.65, stdev=3194.85, samples=20 00:32:34.198 iops : min= 118, max= 224, avg=192.90, stdev=24.96, samples=20 00:32:34.198 lat (msec) : 10=2.48%, 20=93.84%, 100=3.67% 00:32:34.198 cpu : usr=95.55%, sys=3.93%, ctx=545, majf=0, minf=127 00:32:34.198 IO depths : 1=0.3%, 2=99.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:34.198 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:34.198 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:34.198 issued rwts: total=1932,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:34.198 latency : target=0, window=0, percentile=100.00%, depth=3 00:32:34.198 00:32:34.198 Run status group 0 (all jobs): 00:32:34.198 READ: bw=74.8MiB/s (78.4MB/s), 24.0MiB/s-26.5MiB/s (25.2MB/s-27.8MB/s), io=751MiB (788MB), run=10045-10048msec 00:32:34.198 16:40:59 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:32:34.198 16:40:59 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:32:34.198 16:40:59 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:32:34.198 16:40:59 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # 
destroy_subsystem 0 00:32:34.198 16:40:59 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:32:34.198 16:40:59 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:32:34.198 16:40:59 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:34.198 16:40:59 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:32:34.198 16:40:59 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:34.198 16:40:59 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:32:34.198 16:40:59 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:34.198 16:40:59 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:32:34.198 16:40:59 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:34.198 00:32:34.198 real 0m11.231s 00:32:34.198 user 0m41.885s 00:32:34.198 sys 0m1.407s 00:32:34.198 16:40:59 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1125 -- # xtrace_disable 00:32:34.198 16:40:59 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:32:34.198 ************************************ 00:32:34.198 END TEST fio_dif_digest 00:32:34.198 ************************************ 00:32:34.198 16:40:59 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:32:34.198 16:40:59 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:32:34.198 16:40:59 nvmf_dif -- nvmf/common.sh@488 -- # nvmfcleanup 00:32:34.198 16:40:59 nvmf_dif -- nvmf/common.sh@117 -- # sync 00:32:34.198 16:40:59 nvmf_dif -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:32:34.198 16:40:59 nvmf_dif -- nvmf/common.sh@120 -- # set +e 00:32:34.198 16:40:59 nvmf_dif -- nvmf/common.sh@121 -- # for i in {1..20} 00:32:34.198 16:40:59 nvmf_dif -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:32:34.198 rmmod nvme_tcp 00:32:34.198 rmmod nvme_fabrics 00:32:34.198 rmmod 
nvme_keyring 00:32:34.198 16:40:59 nvmf_dif -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:32:34.198 16:40:59 nvmf_dif -- nvmf/common.sh@124 -- # set -e 00:32:34.198 16:40:59 nvmf_dif -- nvmf/common.sh@125 -- # return 0 00:32:34.198 16:40:59 nvmf_dif -- nvmf/common.sh@489 -- # '[' -n 3316162 ']' 00:32:34.198 16:40:59 nvmf_dif -- nvmf/common.sh@490 -- # killprocess 3316162 00:32:34.198 16:40:59 nvmf_dif -- common/autotest_common.sh@949 -- # '[' -z 3316162 ']' 00:32:34.198 16:40:59 nvmf_dif -- common/autotest_common.sh@953 -- # kill -0 3316162 00:32:34.198 16:40:59 nvmf_dif -- common/autotest_common.sh@954 -- # uname 00:32:34.198 16:40:59 nvmf_dif -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:32:34.198 16:40:59 nvmf_dif -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3316162 00:32:34.198 16:40:59 nvmf_dif -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:32:34.198 16:40:59 nvmf_dif -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:32:34.198 16:40:59 nvmf_dif -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3316162' 00:32:34.198 killing process with pid 3316162 00:32:34.198 16:40:59 nvmf_dif -- common/autotest_common.sh@968 -- # kill 3316162 00:32:34.198 16:40:59 nvmf_dif -- common/autotest_common.sh@973 -- # wait 3316162 00:32:34.198 16:40:59 nvmf_dif -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:32:34.198 16:40:59 nvmf_dif -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:32:35.580 Waiting for block devices as requested 00:32:35.580 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:32:35.841 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:32:35.841 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:32:35.841 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:32:36.101 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:32:36.101 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:32:36.101 0000:80:01.0 (8086 0b00): vfio-pci -> 
ioatdma 00:32:36.360 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:32:36.360 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:32:36.619 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:32:36.619 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:32:36.619 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:32:36.619 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:32:36.878 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:32:36.878 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:32:36.878 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:32:36.878 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:32:37.138 16:41:03 nvmf_dif -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:32:37.138 16:41:03 nvmf_dif -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:32:37.138 16:41:03 nvmf_dif -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:32:37.138 16:41:03 nvmf_dif -- nvmf/common.sh@278 -- # remove_spdk_ns 00:32:37.138 16:41:03 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:37.138 16:41:03 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:32:37.138 16:41:03 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:39.708 16:41:06 nvmf_dif -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:32:39.708 00:32:39.708 real 1m16.989s 00:32:39.708 user 8m0.708s 00:32:39.708 sys 0m19.016s 00:32:39.708 16:41:06 nvmf_dif -- common/autotest_common.sh@1125 -- # xtrace_disable 00:32:39.708 16:41:06 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:32:39.708 ************************************ 00:32:39.708 END TEST nvmf_dif 00:32:39.708 ************************************ 00:32:39.708 16:41:06 -- spdk/autotest.sh@293 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:32:39.708 16:41:06 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:32:39.708 16:41:06 -- common/autotest_common.sh@1106 -- # xtrace_disable 
00:32:39.708 16:41:06 -- common/autotest_common.sh@10 -- # set +x 00:32:39.708 ************************************ 00:32:39.708 START TEST nvmf_abort_qd_sizes 00:32:39.708 ************************************ 00:32:39.708 16:41:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:32:39.708 * Looking for test storage... 00:32:39.708 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:39.708 16:41:06 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:39.708 16:41:06 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:32:39.708 16:41:06 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:39.708 16:41:06 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:39.708 16:41:06 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:39.708 16:41:06 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:39.708 16:41:06 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:39.708 16:41:06 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:39.708 16:41:06 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:39.708 16:41:06 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:39.708 16:41:06 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:39.708 16:41:06 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:39.708 16:41:06 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:39.708 16:41:06 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:39.708 16:41:06 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:39.708 16:41:06 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:39.708 16:41:06 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:39.708 16:41:06 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:39.708 16:41:06 nvmf_abort_qd_sizes -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:39.708 16:41:06 nvmf_abort_qd_sizes -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:39.708 16:41:06 nvmf_abort_qd_sizes -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:39.708 16:41:06 nvmf_abort_qd_sizes -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:39.708 16:41:06 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:39.708 16:41:06 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:39.708 16:41:06 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:39.708 16:41:06 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:32:39.708 16:41:06 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:39.708 16:41:06 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # : 0 00:32:39.708 16:41:06 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:32:39.708 16:41:06 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:32:39.708 16:41:06 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:39.708 16:41:06 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:39.708 16:41:06 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:39.708 16:41:06 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:32:39.708 16:41:06 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:32:39.708 16:41:06 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # have_pci_nics=0 00:32:39.709 16:41:06 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:32:39.709 16:41:06 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:32:39.709 16:41:06 nvmf_abort_qd_sizes -- 
nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:39.709 16:41:06 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # prepare_net_devs 00:32:39.709 16:41:06 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # local -g is_hw=no 00:32:39.709 16:41:06 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # remove_spdk_ns 00:32:39.709 16:41:06 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:39.709 16:41:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:32:39.709 16:41:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:39.709 16:41:06 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:32:39.709 16:41:06 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:32:39.709 16:41:06 nvmf_abort_qd_sizes -- nvmf/common.sh@285 -- # xtrace_disable 00:32:39.709 16:41:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:32:46.299 16:41:12 nvmf_abort_qd_sizes -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:46.299 16:41:12 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # pci_devs=() 00:32:46.299 16:41:12 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # local -a pci_devs 00:32:46.299 16:41:12 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # pci_net_devs=() 00:32:46.299 16:41:12 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:32:46.299 16:41:12 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # pci_drivers=() 00:32:46.299 16:41:12 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # local -A pci_drivers 00:32:46.299 16:41:12 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # net_devs=() 00:32:46.299 16:41:12 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # local -ga net_devs 00:32:46.299 16:41:12 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # e810=() 00:32:46.299 16:41:12 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # local -ga e810 00:32:46.299 16:41:12 
nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # x722=() 00:32:46.299 16:41:12 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # local -ga x722 00:32:46.299 16:41:12 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # mlx=() 00:32:46.299 16:41:12 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # local -ga mlx 00:32:46.299 16:41:12 nvmf_abort_qd_sizes -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:46.299 16:41:12 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:46.299 16:41:12 nvmf_abort_qd_sizes -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:46.299 16:41:12 nvmf_abort_qd_sizes -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:46.299 16:41:12 nvmf_abort_qd_sizes -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:46.299 16:41:12 nvmf_abort_qd_sizes -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:46.299 16:41:12 nvmf_abort_qd_sizes -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:46.299 16:41:12 nvmf_abort_qd_sizes -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:46.299 16:41:12 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:46.300 16:41:12 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:46.300 16:41:12 nvmf_abort_qd_sizes -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:46.300 16:41:12 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:32:46.300 16:41:12 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:32:46.300 16:41:12 nvmf_abort_qd_sizes -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:32:46.300 16:41:12 nvmf_abort_qd_sizes -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:32:46.300 16:41:12 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:32:46.300 
16:41:12 nvmf_abort_qd_sizes -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:32:46.300 16:41:12 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:46.300 16:41:12 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:32:46.300 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:32:46.300 16:41:12 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:46.300 16:41:12 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:46.300 16:41:12 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:46.300 16:41:12 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:46.300 16:41:12 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:46.300 16:41:12 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:46.300 16:41:12 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:32:46.300 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:32:46.300 16:41:12 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:46.300 16:41:12 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:46.300 16:41:12 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:46.300 16:41:12 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:46.300 16:41:12 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:46.300 16:41:12 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:32:46.300 16:41:12 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:32:46.300 16:41:12 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:32:46.300 16:41:12 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:46.300 16:41:12 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:46.300 
16:41:12 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:46.300 16:41:12 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:46.300 16:41:12 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:46.300 16:41:12 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:46.300 16:41:12 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:46.300 16:41:12 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:32:46.300 Found net devices under 0000:4b:00.0: cvl_0_0 00:32:46.300 16:41:12 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:46.300 16:41:12 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:46.300 16:41:12 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:46.300 16:41:12 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:46.300 16:41:12 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:46.300 16:41:12 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:46.300 16:41:12 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:46.300 16:41:12 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:46.300 16:41:12 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:32:46.300 Found net devices under 0000:4b:00.1: cvl_0_1 00:32:46.300 16:41:12 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:46.300 16:41:12 nvmf_abort_qd_sizes -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:32:46.300 16:41:12 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # is_hw=yes 00:32:46.300 16:41:12 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:32:46.300 16:41:12 nvmf_abort_qd_sizes -- 
nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:32:46.300 16:41:12 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:32:46.300 16:41:12 nvmf_abort_qd_sizes -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:46.300 16:41:12 nvmf_abort_qd_sizes -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:46.300 16:41:12 nvmf_abort_qd_sizes -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:46.300 16:41:12 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:32:46.300 16:41:12 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:46.300 16:41:12 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:46.300 16:41:12 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:32:46.300 16:41:12 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:46.300 16:41:12 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:46.300 16:41:12 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:32:46.300 16:41:12 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:32:46.300 16:41:12 nvmf_abort_qd_sizes -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:32:46.300 16:41:12 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:46.300 16:41:13 nvmf_abort_qd_sizes -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:46.300 16:41:13 nvmf_abort_qd_sizes -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:46.300 16:41:13 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:32:46.300 16:41:13 nvmf_abort_qd_sizes -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:46.300 16:41:13 nvmf_abort_qd_sizes -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set 
lo up 00:32:46.300 16:41:13 nvmf_abort_qd_sizes -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:46.561 16:41:13 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:32:46.561 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:46.561 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.591 ms 00:32:46.561 00:32:46.561 --- 10.0.0.2 ping statistics --- 00:32:46.561 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:46.561 rtt min/avg/max/mdev = 0.591/0.591/0.591/0.000 ms 00:32:46.561 16:41:13 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:46.561 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:46.561 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.320 ms 00:32:46.561 00:32:46.561 --- 10.0.0.1 ping statistics --- 00:32:46.561 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:46.561 rtt min/avg/max/mdev = 0.320/0.320/0.320/0.000 ms 00:32:46.561 16:41:13 nvmf_abort_qd_sizes -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:46.561 16:41:13 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # return 0 00:32:46.561 16:41:13 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:32:46.561 16:41:13 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:32:49.864 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:32:49.864 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:32:49.864 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:32:49.864 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:32:49.864 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:32:49.864 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:32:49.864 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:32:49.864 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:32:49.864 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:32:49.864 0000:00:01.7 (8086 0b00): ioatdma -> 
vfio-pci 00:32:49.864 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:32:49.864 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:32:49.864 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:32:49.864 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:32:49.864 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:32:49.864 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:32:50.125 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:32:50.386 16:41:17 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:50.386 16:41:17 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:32:50.386 16:41:17 nvmf_abort_qd_sizes -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:32:50.386 16:41:17 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:50.386 16:41:17 nvmf_abort_qd_sizes -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:32:50.386 16:41:17 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:32:50.386 16:41:17 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:32:50.386 16:41:17 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:32:50.386 16:41:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@723 -- # xtrace_disable 00:32:50.386 16:41:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:32:50.386 16:41:17 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # nvmfpid=3335922 00:32:50.386 16:41:17 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # waitforlisten 3335922 00:32:50.386 16:41:17 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:32:50.386 16:41:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@830 -- # '[' -z 3335922 ']' 00:32:50.386 16:41:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:50.387 16:41:17 nvmf_abort_qd_sizes -- 
common/autotest_common.sh@835 -- # local max_retries=100 00:32:50.387 16:41:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:50.387 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:50.387 16:41:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@839 -- # xtrace_disable 00:32:50.387 16:41:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:32:50.387 [2024-06-07 16:41:17.146635] Starting SPDK v24.09-pre git sha1 5a57befde / DPDK 24.03.0 initialization... 00:32:50.387 [2024-06-07 16:41:17.146690] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:50.387 EAL: No free 2048 kB hugepages reported on node 1 00:32:50.387 [2024-06-07 16:41:17.216114] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:50.647 [2024-06-07 16:41:17.290081] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:50.647 [2024-06-07 16:41:17.290120] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:50.647 [2024-06-07 16:41:17.290127] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:50.647 [2024-06-07 16:41:17.290133] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:50.647 [2024-06-07 16:41:17.290139] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
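The `waitforlisten` sequence above (common/autotest_common.sh, around @834-@839) launches `nvmf_tgt` in the namespace and blocks until the target is up, printing "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...". A minimal sketch of that polling idiom, with an assumed function name and retry cadence (this is not SPDK's exact implementation):

```shell
#!/bin/sh
# Hedged sketch of the waitforlisten idiom: poll until the target's RPC socket
# (by default /var/tmp/spdk.sock) exists, giving up after max_retries attempts.
# The function name and 0.1s sleep interval are illustrative assumptions.
wait_for_rpc_sock() {
  rpc_addr=$1
  max_retries=${2:-100}
  i=0
  until [ -e "$rpc_addr" ]; do
    i=$((i + 1))
    if [ "$i" -ge "$max_retries" ]; then
      return 1          # timed out: target never created its socket
    fi
    sleep 0.1
  done
  return 0              # socket present; RPCs can be issued now
}
```

In the real harness this polling is paired with a liveness check on the target's PID, so a crashed process fails the wait quickly instead of burning the full retry budget.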
00:32:50.647 [2024-06-07 16:41:17.290277] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:32:50.647 [2024-06-07 16:41:17.290412] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 2 00:32:50.647 [2024-06-07 16:41:17.290553] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:32:50.647 [2024-06-07 16:41:17.290554] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 3 00:32:51.218 16:41:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:32:51.218 16:41:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@863 -- # return 0 00:32:51.218 16:41:17 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:32:51.218 16:41:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@729 -- # xtrace_disable 00:32:51.218 16:41:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:32:51.218 16:41:17 nvmf_abort_qd_sizes -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:51.218 16:41:17 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:32:51.218 16:41:17 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:32:51.218 16:41:17 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:32:51.218 16:41:17 nvmf_abort_qd_sizes -- scripts/common.sh@309 -- # local bdf bdfs 00:32:51.218 16:41:17 nvmf_abort_qd_sizes -- scripts/common.sh@310 -- # local nvmes 00:32:51.218 16:41:17 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # [[ -n 0000:65:00.0 ]] 00:32:51.218 16:41:17 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:32:51.218 16:41:17 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:32:51.218 16:41:17 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:65:00.0 ]] 
00:32:51.218 16:41:17 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:32:51.218 16:41:17 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:32:51.218 16:41:17 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:32:51.218 16:41:17 nvmf_abort_qd_sizes -- scripts/common.sh@325 -- # (( 1 )) 00:32:51.218 16:41:17 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # printf '%s\n' 0000:65:00.0 00:32:51.218 16:41:17 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:32:51.218 16:41:17 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:65:00.0 00:32:51.218 16:41:17 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:32:51.218 16:41:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:32:51.218 16:41:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@1106 -- # xtrace_disable 00:32:51.218 16:41:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:32:51.218 ************************************ 00:32:51.218 START TEST spdk_target_abort 00:32:51.218 ************************************ 00:32:51.218 16:41:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1124 -- # spdk_target 00:32:51.218 16:41:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:32:51.218 16:41:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:65:00.0 -b spdk_target 00:32:51.218 16:41:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:51.218 16:41:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:32:51.478 spdk_targetn1 00:32:51.478 16:41:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:51.478 16:41:18 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:51.479 16:41:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:51.479 16:41:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:32:51.479 [2024-06-07 16:41:18.317419] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:51.479 16:41:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:51.479 16:41:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:32:51.479 16:41:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:51.479 16:41:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:32:51.740 16:41:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:51.740 16:41:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:32:51.740 16:41:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:51.740 16:41:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:32:51.740 16:41:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:51.740 16:41:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:32:51.740 16:41:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:51.740 16:41:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:32:51.740 [2024-06-07 16:41:18.357677] tcp.c: 982:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:51.740 16:41:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:51.740 16:41:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:32:51.740 16:41:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:32:51.740 16:41:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:32:51.740 16:41:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:32:51.740 16:41:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:32:51.740 16:41:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:32:51.740 16:41:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:32:51.740 16:41:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:32:51.740 16:41:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:32:51.740 16:41:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:51.740 16:41:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:32:51.740 16:41:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:51.740 16:41:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:32:51.740 16:41:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:51.740 16:41:18 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2'
00:32:51.740 16:41:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn
00:32:51.740 16:41:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:32:51.740 16:41:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn
00:32:51.740 16:41:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
00:32:51.740 16:41:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}"
00:32:51.740 16:41:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
00:32:51.740 EAL: No free 2048 kB hugepages reported on node 1
00:32:51.740 [2024-06-07 16:41:18.571856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:191 nsid:1 lba:504 len:8 PRP1 0x2000078c0000 PRP2 0x0
00:32:51.740 [2024-06-07 16:41:18.571884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:191 cdw0:0 sqhd:0041 p:1 m:0 dnr:0
00:32:52.001 [2024-06-07 16:41:18.627877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:190 nsid:1 lba:2408 len:8 PRP1 0x2000078c0000 PRP2 0x0
00:32:52.001 [2024-06-07 16:41:18.627897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:190 cdw0:0 sqhd:002e p:0 m:0 dnr:0
00:32:55.302 Initializing NVMe Controllers
00:32:55.302 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn
00:32:55.302 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0
00:32:55.302 Initialization complete. Launching workers.
00:32:55.302 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 11358, failed: 2
00:32:55.302 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2710, failed to submit 8650
00:32:55.302 success 786, unsuccess 1924, failed 0
00:32:55.302 16:41:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}"
00:32:55.302 16:41:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
00:32:55.302 EAL: No free 2048 kB hugepages reported on node 1
00:32:55.302 [2024-06-07 16:41:21.714592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:168 nsid:1 lba:480 len:8 PRP1 0x200007c48000 PRP2 0x0
00:32:55.302 [2024-06-07 16:41:21.714632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:168 cdw0:0 sqhd:0048 p:1 m:0 dnr:0
00:32:55.302 [2024-06-07 16:41:21.801546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:170 nsid:1 lba:2488 len:8 PRP1 0x200007c54000 PRP2 0x0
00:32:55.302 [2024-06-07 16:41:21.801572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:170 cdw0:0 sqhd:0044 p:0 m:0 dnr:0
00:32:57.845 [2024-06-07 16:41:24.394601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:182 nsid:1 lba:60880 len:8 PRP1 0x200007c4a000 PRP2 0x0
00:32:57.845 [2024-06-07 16:41:24.394642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:182 cdw0:0 sqhd:00bc p:0 m:0 dnr:0
00:32:58.107 Initializing NVMe Controllers
00:32:58.107 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn
00:32:58.107 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0
00:32:58.107 Initialization complete. Launching workers.
00:32:58.107 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8489, failed: 3
00:32:58.107 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1189, failed to submit 7303
00:32:58.107 success 337, unsuccess 852, failed 0
00:32:58.107 16:41:24 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}"
00:32:58.107 16:41:24 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
00:32:58.107 EAL: No free 2048 kB hugepages reported on node 1
00:33:00.026 [2024-06-07 16:41:26.450815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:190 nsid:1 lba:149176 len:8 PRP1 0x200007920000 PRP2 0x0
00:33:00.026 [2024-06-07 16:41:26.450843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:190 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:33:01.460 Initializing NVMe Controllers
00:33:01.460 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn
00:33:01.460 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0
00:33:01.460 Initialization complete. Launching workers.
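The per-run summary lines emitted by the abort example obey two accounting identities: every I/O the workload issued is covered by either an abort submission or a failed-to-submit count, and the submitted aborts are partitioned into success, unsuccess, and failed. A quick arithmetic check against the qd=4 run above (variable names are mine, the numbers are from the log):

```shell
#!/bin/sh
# Accounting check for the qd=4 spdk_target_abort summary above:
#   abort submitted + failed to submit == I/O completed + I/O failed
#   success + unsuccess + failed      == abort submitted
io_completed=11358 io_failed=2
abort_submitted=2710 failed_to_submit=8650
success=786 unsuccess=1924 abort_failed=0

[ $((abort_submitted + failed_to_submit)) -eq $((io_completed + io_failed)) ] \
  && echo "abort+skip matches I/O total"
[ $((success + unsuccess + abort_failed)) -eq "$abort_submitted" ] \
  && echo "success+unsuccess+failed matches submitted"
```

The same identities hold for every other run in this log (e.g. qd=24: 1189 + 7303 = 8489 + 3), which is a cheap way to spot a truncated or corrupted summary when triaging results.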
00:33:01.460 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 41892, failed: 1 00:33:01.460 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2624, failed to submit 39269 00:33:01.460 success 577, unsuccess 2047, failed 0 00:33:01.460 16:41:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:33:01.460 16:41:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:01.460 16:41:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:33:01.460 16:41:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:01.460 16:41:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:33:01.460 16:41:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:01.460 16:41:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:33:03.403 16:41:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:03.403 16:41:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 3335922 00:33:03.403 16:41:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@949 -- # '[' -z 3335922 ']' 00:33:03.403 16:41:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # kill -0 3335922 00:33:03.403 16:41:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # uname 00:33:03.403 16:41:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:33:03.403 16:41:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3335922 00:33:03.403 16:41:30 nvmf_abort_qd_sizes.spdk_target_abort 
-- common/autotest_common.sh@955 -- # process_name=reactor_0 00:33:03.403 16:41:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:33:03.403 16:41:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3335922' 00:33:03.403 killing process with pid 3335922 00:33:03.403 16:41:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@968 -- # kill 3335922 00:33:03.403 16:41:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@973 -- # wait 3335922 00:33:03.403 00:33:03.403 real 0m12.185s 00:33:03.403 user 0m49.409s 00:33:03.403 sys 0m1.938s 00:33:03.403 16:41:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1125 -- # xtrace_disable 00:33:03.403 16:41:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:33:03.403 ************************************ 00:33:03.403 END TEST spdk_target_abort 00:33:03.403 ************************************ 00:33:03.403 16:41:30 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:33:03.403 16:41:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:33:03.403 16:41:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@1106 -- # xtrace_disable 00:33:03.403 16:41:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:33:03.663 ************************************ 00:33:03.663 START TEST kernel_target_abort 00:33:03.663 ************************************ 00:33:03.663 16:41:30 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1124 -- # kernel_target 00:33:03.663 16:41:30 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:33:03.663 16:41:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # local ip 00:33:03.663 16:41:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@748 -- 
# ip_candidates=() 00:33:03.663 16:41:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@748 -- # local -A ip_candidates 00:33:03.663 16:41:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@750 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:03.663 16:41:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@751 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:03.663 16:41:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@753 -- # [[ -z tcp ]] 00:33:03.663 16:41:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@753 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:03.663 16:41:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@754 -- # ip=NVMF_INITIATOR_IP 00:33:03.663 16:41:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@756 -- # [[ -z 10.0.0.1 ]] 00:33:03.663 16:41:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@761 -- # echo 10.0.0.1 00:33:03.663 16:41:30 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:33:03.664 16:41:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 nvmf_port=4420 00:33:03.664 16:41:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:33:03.664 16:41:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:33:03.664 16:41:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:33:03.664 16:41:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:33:03.664 16:41:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@639 -- # local block nvme 00:33:03.664 16:41:30 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:33:03.664 16:41:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@642 -- # modprobe nvmet 00:33:03.664 16:41:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:33:03.664 16:41:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:33:06.970 Waiting for block devices as requested 00:33:06.970 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:33:06.970 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:33:06.970 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:33:06.970 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:33:07.232 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:33:07.232 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:33:07.232 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:33:07.232 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:33:07.492 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:33:07.493 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:33:07.753 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:33:07.753 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:33:07.754 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:33:08.015 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:33:08.015 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:33:08.015 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:33:08.015 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:33:08.276 16:41:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:33:08.276 16:41:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:33:08.276 16:41:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:33:08.276 16:41:35 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1661 -- # local 
device=nvme0n1 00:33:08.276 16:41:35 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1663 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:33:08.276 16:41:35 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ none != none ]] 00:33:08.276 16:41:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:33:08.276 16:41:35 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:33:08.276 16:41:35 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:33:08.536 No valid GPT data, bailing 00:33:08.536 16:41:35 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:33:08.536 16:41:35 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:33:08.536 16:41:35 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:33:08.536 16:41:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:33:08.536 16:41:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:33:08.536 16:41:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@657 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:33:08.536 16:41:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:33:08.536 16:41:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # echo SPDK-test 00:33:08.536 16:41:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # echo 1 00:33:08.536 16:41:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # [[ -b /dev/nvme0n1 ]] 00:33:08.536 16:41:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 
-- # echo /dev/nvme0n1
00:33:08.536 16:41:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # echo 1
00:33:08.536 16:41:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@676 -- # echo 10.0.0.1
00:33:08.536 16:41:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # echo tcp
00:33:08.536 16:41:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # echo 4420
00:33:08.536 16:41:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # echo ipv4
00:33:08.537 16:41:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@682 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/
00:33:08.537 16:41:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@685 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.1 -t tcp -s 4420
00:33:08.537
00:33:08.537 Discovery Log Number of Records 2, Generation counter 2
00:33:08.537 =====Discovery Log Entry 0======
00:33:08.537 trtype: tcp
00:33:08.537 adrfam: ipv4
00:33:08.537 subtype: current discovery subsystem
00:33:08.537 treq: not specified, sq flow control disable supported
00:33:08.537 portid: 1
00:33:08.537 trsvcid: 4420
00:33:08.537 subnqn: nqn.2014-08.org.nvmexpress.discovery
00:33:08.537 traddr: 10.0.0.1
00:33:08.537 eflags: none
00:33:08.537 sectype: none
00:33:08.537 =====Discovery Log Entry 1======
00:33:08.537 trtype: tcp
00:33:08.537 adrfam: ipv4
00:33:08.537 subtype: nvme subsystem
00:33:08.537 treq: not specified, sq flow control disable supported
00:33:08.537 portid: 1
00:33:08.537 trsvcid: 4420
00:33:08.537 subnqn: nqn.2016-06.io.spdk:testnqn
00:33:08.537 traddr: 10.0.0.1
00:33:08.537 eflags: none
00:33:08.537 sectype: none
00:33:08.537 16:41:35 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420
nqn.2016-06.io.spdk:testnqn 00:33:08.537 16:41:35 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:33:08.537 16:41:35 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:33:08.537 16:41:35 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:33:08.537 16:41:35 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:33:08.537 16:41:35 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:33:08.537 16:41:35 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:33:08.537 16:41:35 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:33:08.537 16:41:35 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:33:08.537 16:41:35 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:33:08.537 16:41:35 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:33:08.537 16:41:35 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:33:08.537 16:41:35 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:33:08.537 16:41:35 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:33:08.537 16:41:35 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:33:08.537 16:41:35 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:33:08.537 16:41:35 nvmf_abort_qd_sizes.kernel_target_abort -- 
target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420'
00:33:08.537 16:41:35 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn
00:33:08.537 16:41:35 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
00:33:08.537 16:41:35 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}"
00:33:08.537 16:41:35 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
00:33:08.537 EAL: No free 2048 kB hugepages reported on node 1
00:33:11.838 Initializing NVMe Controllers
00:33:11.838 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn
00:33:11.838 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0
00:33:11.838 Initialization complete. Launching workers.
00:33:11.838 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 53332, failed: 0
00:33:11.839 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 53332, failed to submit 0
00:33:11.839 success 0, unsuccess 53332, failed 0
00:33:11.839 16:41:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}"
00:33:11.839 16:41:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
00:33:11.839 EAL: No free 2048 kB hugepages reported on node 1
00:33:15.141 Initializing NVMe Controllers
00:33:15.141 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn
00:33:15.141 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0
00:33:15.141 Initialization complete. Launching workers.
00:33:15.141 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 95147, failed: 0
00:33:15.141 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 23982, failed to submit 71165
00:33:15.141 success 0, unsuccess 23982, failed 0
00:33:15.141 16:41:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}"
00:33:15.141 16:41:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
00:33:15.141 EAL: No free 2048 kB hugepages reported on node 1
00:33:17.697 Initializing NVMe Controllers
00:33:17.697 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn
00:33:17.697 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0
00:33:17.697 Initialization complete. Launching workers.
00:33:17.697 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 91370, failed: 0
00:33:17.697 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 22846, failed to submit 68524
00:33:17.697 success 0, unsuccess 22846, failed 0
00:33:17.697 16:41:44 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target
00:33:17.697 16:41:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]]
00:33:17.697 16:41:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # echo 0
00:33:17.697 16:41:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn
00:33:17.697 16:41:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@694 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1
00:33:17.958 16:41:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # rmdir /sys/kernel/config/nvmet/ports/1
00:33:17.958 16:41:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@696 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
00:33:17.958 16:41:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # modules=(/sys/module/nvmet/holders/*)
00:33:17.958 16:41:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # modprobe -r nvmet_tcp nvmet
00:33:17.958 16:41:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@701 -- # modprobe -r null_blk
00:33:17.958 16:41:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@704 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:33:21.265 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci
00:33:21.265 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci
00:33:21.265 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci
00:33:21.265 0000:80:01.5
(8086 0b00): ioatdma -> vfio-pci 00:33:21.265 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:33:21.265 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:33:21.265 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:33:21.265 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:33:21.265 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:33:21.265 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:33:21.265 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:33:21.265 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:33:21.265 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:33:21.265 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:33:21.265 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:33:21.265 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:33:23.181 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:33:23.181 00:33:23.181 real 0m19.693s 00:33:23.181 user 0m8.516s 00:33:23.181 sys 0m6.006s 00:33:23.182 16:41:49 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1125 -- # xtrace_disable 00:33:23.182 16:41:49 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:33:23.182 ************************************ 00:33:23.182 END TEST kernel_target_abort 00:33:23.182 ************************************ 00:33:23.182 16:41:50 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:33:23.182 16:41:50 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:33:23.182 16:41:50 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # nvmfcleanup 00:33:23.182 16:41:50 nvmf_abort_qd_sizes -- nvmf/common.sh@117 -- # sync 00:33:23.182 16:41:50 nvmf_abort_qd_sizes -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:33:23.182 16:41:50 nvmf_abort_qd_sizes -- nvmf/common.sh@120 -- # set +e 00:33:23.182 16:41:50 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # for i in {1..20} 00:33:23.182 16:41:50 nvmf_abort_qd_sizes -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:33:23.182 rmmod nvme_tcp 00:33:23.182 rmmod 
nvme_fabrics 00:33:23.443 rmmod nvme_keyring 00:33:23.443 16:41:50 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:33:23.443 16:41:50 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set -e 00:33:23.443 16:41:50 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # return 0 00:33:23.443 16:41:50 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # '[' -n 3335922 ']' 00:33:23.443 16:41:50 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # killprocess 3335922 00:33:23.443 16:41:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@949 -- # '[' -z 3335922 ']' 00:33:23.444 16:41:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@953 -- # kill -0 3335922 00:33:23.444 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 953: kill: (3335922) - No such process 00:33:23.444 16:41:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@976 -- # echo 'Process with pid 3335922 is not found' 00:33:23.444 Process with pid 3335922 is not found 00:33:23.444 16:41:50 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:33:23.444 16:41:50 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:33:26.784 Waiting for block devices as requested 00:33:26.784 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:33:26.784 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:33:26.784 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:33:26.784 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:33:27.045 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:33:27.045 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:33:27.045 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:33:27.305 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:33:27.305 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:33:27.566 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:33:27.566 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:33:27.566 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:33:27.566 0000:00:01.5 
(8086 0b00): vfio-pci -> ioatdma 00:33:27.827 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:33:27.827 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:33:27.827 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:33:28.089 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:33:28.350 16:41:54 nvmf_abort_qd_sizes -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:33:28.350 16:41:54 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:33:28.350 16:41:54 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:33:28.350 16:41:54 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # remove_spdk_ns 00:33:28.350 16:41:54 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:28.350 16:41:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:33:28.350 16:41:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:30.286 16:41:57 nvmf_abort_qd_sizes -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:33:30.286 00:33:30.286 real 0m50.892s 00:33:30.286 user 1m3.171s 00:33:30.286 sys 0m18.313s 00:33:30.286 16:41:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@1125 -- # xtrace_disable 00:33:30.286 16:41:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:33:30.286 ************************************ 00:33:30.286 END TEST nvmf_abort_qd_sizes 00:33:30.286 ************************************ 00:33:30.286 16:41:57 -- spdk/autotest.sh@295 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:33:30.286 16:41:57 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:33:30.286 16:41:57 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:33:30.286 16:41:57 -- common/autotest_common.sh@10 -- # set +x 00:33:30.286 ************************************ 00:33:30.286 START TEST keyring_file 00:33:30.286 ************************************ 00:33:30.286 16:41:57 
keyring_file -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:33:30.548 * Looking for test storage... 00:33:30.548 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:33:30.548 16:41:57 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:33:30.548 16:41:57 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:30.548 16:41:57 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:33:30.548 16:41:57 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:30.548 16:41:57 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:30.548 16:41:57 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:30.548 16:41:57 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:30.548 16:41:57 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:30.548 16:41:57 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:30.548 16:41:57 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:30.548 16:41:57 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:30.548 16:41:57 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:30.548 16:41:57 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:30.548 16:41:57 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:33:30.548 16:41:57 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:33:30.548 16:41:57 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:30.548 16:41:57 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:30.548 16:41:57 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:33:30.548 16:41:57 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:30.548 16:41:57 keyring_file -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:30.548 16:41:57 keyring_file -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:30.548 16:41:57 keyring_file -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:30.548 16:41:57 keyring_file -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:30.548 16:41:57 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:30.548 16:41:57 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:30.548 16:41:57 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:30.548 16:41:57 keyring_file -- paths/export.sh@5 -- # export PATH 00:33:30.548 
16:41:57 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:30.548 16:41:57 keyring_file -- nvmf/common.sh@47 -- # : 0 00:33:30.548 16:41:57 keyring_file -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:33:30.548 16:41:57 keyring_file -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:33:30.548 16:41:57 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:30.548 16:41:57 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:30.548 16:41:57 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:30.548 16:41:57 keyring_file -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:33:30.548 16:41:57 keyring_file -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:33:30.548 16:41:57 keyring_file -- nvmf/common.sh@51 -- # have_pci_nics=0 00:33:30.548 16:41:57 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:33:30.548 16:41:57 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:33:30.548 16:41:57 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:33:30.548 16:41:57 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:33:30.548 16:41:57 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:33:30.548 16:41:57 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:33:30.548 16:41:57 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:33:30.548 16:41:57 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:33:30.549 16:41:57 keyring_file -- 
keyring/common.sh@17 -- # name=key0 00:33:30.549 16:41:57 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:33:30.549 16:41:57 keyring_file -- keyring/common.sh@17 -- # digest=0 00:33:30.549 16:41:57 keyring_file -- keyring/common.sh@18 -- # mktemp 00:33:30.549 16:41:57 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.vzGCDfNbqT 00:33:30.549 16:41:57 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:33:30.549 16:41:57 keyring_file -- nvmf/common.sh@721 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:33:30.549 16:41:57 keyring_file -- nvmf/common.sh@708 -- # local prefix key digest 00:33:30.549 16:41:57 keyring_file -- nvmf/common.sh@710 -- # prefix=NVMeTLSkey-1 00:33:30.549 16:41:57 keyring_file -- nvmf/common.sh@710 -- # key=00112233445566778899aabbccddeeff 00:33:30.549 16:41:57 keyring_file -- nvmf/common.sh@710 -- # digest=0 00:33:30.549 16:41:57 keyring_file -- nvmf/common.sh@711 -- # python - 00:33:30.549 16:41:57 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.vzGCDfNbqT 00:33:30.549 16:41:57 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.vzGCDfNbqT 00:33:30.549 16:41:57 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.vzGCDfNbqT 00:33:30.549 16:41:57 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:33:30.549 16:41:57 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:33:30.549 16:41:57 keyring_file -- keyring/common.sh@17 -- # name=key1 00:33:30.549 16:41:57 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:33:30.549 16:41:57 keyring_file -- keyring/common.sh@17 -- # digest=0 00:33:30.549 16:41:57 keyring_file -- keyring/common.sh@18 -- # mktemp 00:33:30.549 16:41:57 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.jNCS3AQ32R 00:33:30.549 16:41:57 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 
112233445566778899aabbccddeeff00 0 00:33:30.549 16:41:57 keyring_file -- nvmf/common.sh@721 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:33:30.549 16:41:57 keyring_file -- nvmf/common.sh@708 -- # local prefix key digest 00:33:30.549 16:41:57 keyring_file -- nvmf/common.sh@710 -- # prefix=NVMeTLSkey-1 00:33:30.549 16:41:57 keyring_file -- nvmf/common.sh@710 -- # key=112233445566778899aabbccddeeff00 00:33:30.549 16:41:57 keyring_file -- nvmf/common.sh@710 -- # digest=0 00:33:30.549 16:41:57 keyring_file -- nvmf/common.sh@711 -- # python - 00:33:30.549 16:41:57 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.jNCS3AQ32R 00:33:30.549 16:41:57 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.jNCS3AQ32R 00:33:30.549 16:41:57 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.jNCS3AQ32R 00:33:30.549 16:41:57 keyring_file -- keyring/file.sh@30 -- # tgtpid=3345950 00:33:30.549 16:41:57 keyring_file -- keyring/file.sh@32 -- # waitforlisten 3345950 00:33:30.549 16:41:57 keyring_file -- common/autotest_common.sh@830 -- # '[' -z 3345950 ']' 00:33:30.549 16:41:57 keyring_file -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:30.549 16:41:57 keyring_file -- common/autotest_common.sh@835 -- # local max_retries=100 00:33:30.549 16:41:57 keyring_file -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:30.549 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:30.549 16:41:57 keyring_file -- common/autotest_common.sh@839 -- # xtrace_disable 00:33:30.549 16:41:57 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:33:30.549 16:41:57 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:33:30.549 [2024-06-07 16:41:57.397264] Starting SPDK v24.09-pre git sha1 5a57befde / DPDK 24.03.0 initialization... 
00:33:30.549 [2024-06-07 16:41:57.397334] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3345950 ] 00:33:30.809 EAL: No free 2048 kB hugepages reported on node 1 00:33:30.809 [2024-06-07 16:41:57.461460] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:30.809 [2024-06-07 16:41:57.538119] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:33:31.380 16:41:58 keyring_file -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:33:31.380 16:41:58 keyring_file -- common/autotest_common.sh@863 -- # return 0 00:33:31.380 16:41:58 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:33:31.380 16:41:58 keyring_file -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:31.380 16:41:58 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:33:31.380 [2024-06-07 16:41:58.161361] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:31.380 null0 00:33:31.380 [2024-06-07 16:41:58.193398] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:33:31.380 [2024-06-07 16:41:58.193652] tcp.c: 982:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:33:31.380 [2024-06-07 16:41:58.201414] tcp.c:3685:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:33:31.380 16:41:58 keyring_file -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:31.380 16:41:58 keyring_file -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:33:31.380 16:41:58 keyring_file -- common/autotest_common.sh@649 -- # local es=0 00:33:31.380 16:41:58 keyring_file -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 
nqn.2016-06.io.spdk:cnode0 00:33:31.380 16:41:58 keyring_file -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:33:31.380 16:41:58 keyring_file -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:33:31.380 16:41:58 keyring_file -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:33:31.380 16:41:58 keyring_file -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:33:31.380 16:41:58 keyring_file -- common/autotest_common.sh@652 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:33:31.380 16:41:58 keyring_file -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:31.380 16:41:58 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:33:31.380 [2024-06-07 16:41:58.213445] nvmf_rpc.c: 784:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:33:31.380 request: 00:33:31.380 { 00:33:31.380 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:33:31.380 "secure_channel": false, 00:33:31.380 "listen_address": { 00:33:31.380 "trtype": "tcp", 00:33:31.380 "traddr": "127.0.0.1", 00:33:31.380 "trsvcid": "4420" 00:33:31.380 }, 00:33:31.380 "method": "nvmf_subsystem_add_listener", 00:33:31.380 "req_id": 1 00:33:31.380 } 00:33:31.380 Got JSON-RPC error response 00:33:31.380 response: 00:33:31.380 { 00:33:31.380 "code": -32602, 00:33:31.380 "message": "Invalid parameters" 00:33:31.380 } 00:33:31.380 16:41:58 keyring_file -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:33:31.380 16:41:58 keyring_file -- common/autotest_common.sh@652 -- # es=1 00:33:31.380 16:41:58 keyring_file -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:33:31.380 16:41:58 keyring_file -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:33:31.380 16:41:58 keyring_file -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:33:31.380 16:41:58 keyring_file -- keyring/file.sh@46 -- # bperfpid=3346138 00:33:31.380 16:41:58 keyring_file -- keyring/file.sh@48 -- # waitforlisten 3346138 /var/tmp/bperf.sock 
00:33:31.380 16:41:58 keyring_file -- common/autotest_common.sh@830 -- # '[' -z 3346138 ']' 00:33:31.380 16:41:58 keyring_file -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:31.380 16:41:58 keyring_file -- common/autotest_common.sh@835 -- # local max_retries=100 00:33:31.380 16:41:58 keyring_file -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:31.380 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:31.380 16:41:58 keyring_file -- common/autotest_common.sh@839 -- # xtrace_disable 00:33:31.380 16:41:58 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:33:31.380 16:41:58 keyring_file -- keyring/file.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:33:31.641 [2024-06-07 16:41:58.264558] Starting SPDK v24.09-pre git sha1 5a57befde / DPDK 24.03.0 initialization... 
00:33:31.641 [2024-06-07 16:41:58.264604] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3346138 ] 00:33:31.641 EAL: No free 2048 kB hugepages reported on node 1 00:33:31.641 [2024-06-07 16:41:58.338094] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:31.641 [2024-06-07 16:41:58.401952] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:33:32.211 16:41:59 keyring_file -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:33:32.211 16:41:59 keyring_file -- common/autotest_common.sh@863 -- # return 0 00:33:32.211 16:41:59 keyring_file -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.vzGCDfNbqT 00:33:32.211 16:41:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.vzGCDfNbqT 00:33:32.473 16:41:59 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.jNCS3AQ32R 00:33:32.473 16:41:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.jNCS3AQ32R 00:33:32.473 16:41:59 keyring_file -- keyring/file.sh@51 -- # get_key key0 00:33:32.473 16:41:59 keyring_file -- keyring/file.sh@51 -- # jq -r .path 00:33:32.473 16:41:59 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:32.473 16:41:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:32.473 16:41:59 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:32.734 16:41:59 keyring_file -- keyring/file.sh@51 -- # [[ /tmp/tmp.vzGCDfNbqT == \/\t\m\p\/\t\m\p\.\v\z\G\C\D\f\N\b\q\T ]] 00:33:32.734 
16:41:59 keyring_file -- keyring/file.sh@52 -- # get_key key1 00:33:32.734 16:41:59 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:33:32.734 16:41:59 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:32.734 16:41:59 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:33:32.734 16:41:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:32.994 16:41:59 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.jNCS3AQ32R == \/\t\m\p\/\t\m\p\.\j\N\C\S\3\A\Q\3\2\R ]] 00:33:32.994 16:41:59 keyring_file -- keyring/file.sh@53 -- # get_refcnt key0 00:33:32.994 16:41:59 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:33:32.994 16:41:59 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:32.994 16:41:59 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:32.994 16:41:59 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:32.994 16:41:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:32.994 16:41:59 keyring_file -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:33:32.994 16:41:59 keyring_file -- keyring/file.sh@54 -- # get_refcnt key1 00:33:32.994 16:41:59 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:33:32.994 16:41:59 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:32.994 16:41:59 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:32.994 16:41:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:32.995 16:41:59 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:33:33.255 16:41:59 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:33:33.255 16:41:59 keyring_file -- 
keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:33.255 16:41:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:33.516 [2024-06-07 16:42:00.118838] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:33:33.516 nvme0n1 00:33:33.516 16:42:00 keyring_file -- keyring/file.sh@59 -- # get_refcnt key0 00:33:33.516 16:42:00 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:33:33.516 16:42:00 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:33.516 16:42:00 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:33.516 16:42:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:33.516 16:42:00 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:33.777 16:42:00 keyring_file -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:33:33.777 16:42:00 keyring_file -- keyring/file.sh@60 -- # get_refcnt key1 00:33:33.777 16:42:00 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:33:33.777 16:42:00 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:33.778 16:42:00 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:33.778 16:42:00 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:33:33.778 16:42:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:33.778 16:42:00 keyring_file -- keyring/file.sh@60 -- # (( 1 == 1 )) 00:33:33.778 16:42:00 
keyring_file -- keyring/file.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:33:33.778 Running I/O for 1 seconds...
00:33:35.162
00:33:35.162                        Latency(us)
00:33:35.162 Device Information     : runtime(s)    IOPS      MiB/s     Fail/s    TO/s      Average   min       max
00:33:35.162 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096)
00:33:35.162 nvme0n1                : 1.01          8167.84   31.91     0.00      0.00      15558.82  8192.00   21626.88
00:33:35.162 ===================================================================================================================
00:33:35.162 Total                  :               8167.84   31.91     0.00      0.00      15558.82  8192.00   21626.88
00:33:35.162 0
00:33:35.162 16:42:01 keyring_file -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0
00:33:35.162 16:42:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0
00:33:35.162 16:42:01 keyring_file -- keyring/file.sh@65 -- # get_refcnt key0
00:33:35.162 16:42:01 keyring_file -- keyring/common.sh@12 -- # get_key key0
00:33:35.162 16:42:01 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt
00:33:35.162 16:42:01 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys
00:33:35.162 16:42:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:33:35.162 16:42:01 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")'
00:33:35.162 16:42:01 keyring_file -- keyring/file.sh@65 -- # (( 1 == 1 ))
00:33:35.162 16:42:01 keyring_file -- keyring/file.sh@66 -- # get_refcnt key1
00:33:35.162 16:42:01 keyring_file -- keyring/common.sh@12 -- # get_key key1
00:33:35.162 16:42:01 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt
00:33:35.162 16:42:01 keyring_file -- keyring/common.sh@10 -- # bperf_cmd 
keyring_get_keys 00:33:35.162 16:42:01 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:33:35.162 16:42:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:35.423 16:42:02 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:33:35.423 16:42:02 keyring_file -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:33:35.423 16:42:02 keyring_file -- common/autotest_common.sh@649 -- # local es=0 00:33:35.423 16:42:02 keyring_file -- common/autotest_common.sh@651 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:33:35.423 16:42:02 keyring_file -- common/autotest_common.sh@637 -- # local arg=bperf_cmd 00:33:35.423 16:42:02 keyring_file -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:33:35.423 16:42:02 keyring_file -- common/autotest_common.sh@641 -- # type -t bperf_cmd 00:33:35.423 16:42:02 keyring_file -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:33:35.423 16:42:02 keyring_file -- common/autotest_common.sh@652 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:33:35.423 16:42:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:33:35.423 [2024-06-07 16:42:02.272440] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 429:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 
107: Transport endpoint is not connected 00:33:35.423 [2024-06-07 16:42:02.272672] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20053b0 (107): Transport endpoint is not connected 00:33:35.423 [2024-06-07 16:42:02.273668] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20053b0 (9): Bad file descriptor 00:33:35.423 [2024-06-07 16:42:02.274670] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:33:35.423 [2024-06-07 16:42:02.274677] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:33:35.423 [2024-06-07 16:42:02.274683] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:33:35.683 request: 00:33:35.683 { 00:33:35.683 "name": "nvme0", 00:33:35.683 "trtype": "tcp", 00:33:35.683 "traddr": "127.0.0.1", 00:33:35.683 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:35.683 "adrfam": "ipv4", 00:33:35.683 "trsvcid": "4420", 00:33:35.683 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:35.683 "psk": "key1", 00:33:35.683 "method": "bdev_nvme_attach_controller", 00:33:35.683 "req_id": 1 00:33:35.683 } 00:33:35.683 Got JSON-RPC error response 00:33:35.683 response: 00:33:35.683 { 00:33:35.683 "code": -5, 00:33:35.683 "message": "Input/output error" 00:33:35.683 } 00:33:35.683 16:42:02 keyring_file -- common/autotest_common.sh@652 -- # es=1 00:33:35.683 16:42:02 keyring_file -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:33:35.683 16:42:02 keyring_file -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:33:35.683 16:42:02 keyring_file -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:33:35.683 16:42:02 keyring_file -- keyring/file.sh@71 -- # get_refcnt key0 00:33:35.683 16:42:02 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:33:35.683 16:42:02 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:35.683 16:42:02 keyring_file -- 
keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:35.683 16:42:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:35.683 16:42:02 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:35.683 16:42:02 keyring_file -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:33:35.683 16:42:02 keyring_file -- keyring/file.sh@72 -- # get_refcnt key1 00:33:35.683 16:42:02 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:33:35.683 16:42:02 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:35.683 16:42:02 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:35.683 16:42:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:35.683 16:42:02 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:33:35.944 16:42:02 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:33:35.944 16:42:02 keyring_file -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:33:35.944 16:42:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:33:35.944 16:42:02 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:33:35.944 16:42:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:33:36.205 16:42:02 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:33:36.205 16:42:02 keyring_file -- keyring/file.sh@77 -- # jq length 00:33:36.205 16:42:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:36.205 16:42:03 keyring_file -- 
keyring/file.sh@77 -- # (( 0 == 0 )) 00:33:36.205 16:42:03 keyring_file -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.vzGCDfNbqT 00:33:36.205 16:42:03 keyring_file -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.vzGCDfNbqT 00:33:36.205 16:42:03 keyring_file -- common/autotest_common.sh@649 -- # local es=0 00:33:36.205 16:42:03 keyring_file -- common/autotest_common.sh@651 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.vzGCDfNbqT 00:33:36.205 16:42:03 keyring_file -- common/autotest_common.sh@637 -- # local arg=bperf_cmd 00:33:36.467 16:42:03 keyring_file -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:33:36.467 16:42:03 keyring_file -- common/autotest_common.sh@641 -- # type -t bperf_cmd 00:33:36.467 16:42:03 keyring_file -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:33:36.467 16:42:03 keyring_file -- common/autotest_common.sh@652 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.vzGCDfNbqT 00:33:36.467 16:42:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.vzGCDfNbqT 00:33:36.467 [2024-06-07 16:42:03.199387] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.vzGCDfNbqT': 0100660 00:33:36.467 [2024-06-07 16:42:03.199408] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:33:36.467 request: 00:33:36.467 { 00:33:36.467 "name": "key0", 00:33:36.467 "path": "/tmp/tmp.vzGCDfNbqT", 00:33:36.467 "method": "keyring_file_add_key", 00:33:36.467 "req_id": 1 00:33:36.467 } 00:33:36.467 Got JSON-RPC error response 00:33:36.467 response: 00:33:36.467 { 00:33:36.467 "code": -1, 00:33:36.467 "message": "Operation not permitted" 00:33:36.467 } 00:33:36.467 16:42:03 keyring_file -- common/autotest_common.sh@652 -- # es=1 00:33:36.467 16:42:03 keyring_file -- common/autotest_common.sh@660 -- # (( es > 
128 )) 00:33:36.467 16:42:03 keyring_file -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:33:36.467 16:42:03 keyring_file -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:33:36.467 16:42:03 keyring_file -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.vzGCDfNbqT 00:33:36.467 16:42:03 keyring_file -- keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.vzGCDfNbqT 00:33:36.467 16:42:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.vzGCDfNbqT 00:33:36.728 16:42:03 keyring_file -- keyring/file.sh@86 -- # rm -f /tmp/tmp.vzGCDfNbqT 00:33:36.728 16:42:03 keyring_file -- keyring/file.sh@88 -- # get_refcnt key0 00:33:36.728 16:42:03 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:33:36.728 16:42:03 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:36.728 16:42:03 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:36.728 16:42:03 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:36.728 16:42:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:36.728 16:42:03 keyring_file -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:33:36.728 16:42:03 keyring_file -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:36.728 16:42:03 keyring_file -- common/autotest_common.sh@649 -- # local es=0 00:33:36.728 16:42:03 keyring_file -- common/autotest_common.sh@651 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:36.728 16:42:03 keyring_file -- common/autotest_common.sh@637 -- # local arg=bperf_cmd 00:33:36.728 
16:42:03 keyring_file -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:33:36.728 16:42:03 keyring_file -- common/autotest_common.sh@641 -- # type -t bperf_cmd 00:33:36.728 16:42:03 keyring_file -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:33:36.728 16:42:03 keyring_file -- common/autotest_common.sh@652 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:36.728 16:42:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:36.989 [2024-06-07 16:42:03.680601] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.vzGCDfNbqT': No such file or directory 00:33:36.989 [2024-06-07 16:42:03.680614] nvme_tcp.c:2573:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:33:36.989 [2024-06-07 16:42:03.680635] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:33:36.989 [2024-06-07 16:42:03.680641] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:33:36.989 [2024-06-07 16:42:03.680646] bdev_nvme.c:6263:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:33:36.989 request: 00:33:36.989 { 00:33:36.989 "name": "nvme0", 00:33:36.989 "trtype": "tcp", 00:33:36.990 "traddr": "127.0.0.1", 00:33:36.990 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:36.990 "adrfam": "ipv4", 00:33:36.990 "trsvcid": "4420", 00:33:36.990 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:36.990 "psk": "key0", 00:33:36.990 "method": "bdev_nvme_attach_controller", 00:33:36.990 "req_id": 1 00:33:36.990 } 00:33:36.990 Got JSON-RPC error response 00:33:36.990 response: 
00:33:36.990 { 00:33:36.990 "code": -19, 00:33:36.990 "message": "No such device" 00:33:36.990 } 00:33:36.990 16:42:03 keyring_file -- common/autotest_common.sh@652 -- # es=1 00:33:36.990 16:42:03 keyring_file -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:33:36.990 16:42:03 keyring_file -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:33:36.990 16:42:03 keyring_file -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:33:36.990 16:42:03 keyring_file -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:33:36.990 16:42:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:33:37.251 16:42:03 keyring_file -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:33:37.251 16:42:03 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:33:37.251 16:42:03 keyring_file -- keyring/common.sh@17 -- # name=key0 00:33:37.251 16:42:03 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:33:37.251 16:42:03 keyring_file -- keyring/common.sh@17 -- # digest=0 00:33:37.251 16:42:03 keyring_file -- keyring/common.sh@18 -- # mktemp 00:33:37.251 16:42:03 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.y890W0CZnw 00:33:37.251 16:42:03 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:33:37.251 16:42:03 keyring_file -- nvmf/common.sh@721 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:33:37.251 16:42:03 keyring_file -- nvmf/common.sh@708 -- # local prefix key digest 00:33:37.251 16:42:03 keyring_file -- nvmf/common.sh@710 -- # prefix=NVMeTLSkey-1 00:33:37.251 16:42:03 keyring_file -- nvmf/common.sh@710 -- # key=00112233445566778899aabbccddeeff 00:33:37.251 16:42:03 keyring_file -- nvmf/common.sh@710 -- # digest=0 00:33:37.251 16:42:03 keyring_file -- nvmf/common.sh@711 -- # python - 00:33:37.251 
16:42:03 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.y890W0CZnw 00:33:37.251 16:42:03 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.y890W0CZnw 00:33:37.251 16:42:03 keyring_file -- keyring/file.sh@95 -- # key0path=/tmp/tmp.y890W0CZnw 00:33:37.251 16:42:03 keyring_file -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.y890W0CZnw 00:33:37.251 16:42:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.y890W0CZnw 00:33:37.251 16:42:04 keyring_file -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:37.251 16:42:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:37.515 nvme0n1 00:33:37.515 16:42:04 keyring_file -- keyring/file.sh@99 -- # get_refcnt key0 00:33:37.515 16:42:04 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:33:37.515 16:42:04 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:37.515 16:42:04 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:37.515 16:42:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:37.515 16:42:04 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:37.778 16:42:04 keyring_file -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:33:37.778 16:42:04 keyring_file -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:33:37.778 16:42:04 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:33:37.778 16:42:04 keyring_file -- keyring/file.sh@101 -- # get_key key0 00:33:37.778 16:42:04 keyring_file -- keyring/file.sh@101 -- # jq -r .removed 00:33:37.778 16:42:04 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:37.778 16:42:04 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:37.778 16:42:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:38.040 16:42:04 keyring_file -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:33:38.040 16:42:04 keyring_file -- keyring/file.sh@102 -- # get_refcnt key0 00:33:38.040 16:42:04 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:33:38.040 16:42:04 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:38.040 16:42:04 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:38.040 16:42:04 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:38.040 16:42:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:38.301 16:42:04 keyring_file -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:33:38.301 16:42:04 keyring_file -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:33:38.301 16:42:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:33:38.301 16:42:05 keyring_file -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:33:38.301 16:42:05 keyring_file -- keyring/file.sh@104 -- # jq length 00:33:38.301 16:42:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 
keyring_get_keys 00:33:38.561 16:42:05 keyring_file -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:33:38.561 16:42:05 keyring_file -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.y890W0CZnw 00:33:38.561 16:42:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.y890W0CZnw 00:33:38.823 16:42:05 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.jNCS3AQ32R 00:33:38.823 16:42:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.jNCS3AQ32R 00:33:38.823 16:42:05 keyring_file -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:38.823 16:42:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:39.085 nvme0n1 00:33:39.085 16:42:05 keyring_file -- keyring/file.sh@112 -- # bperf_cmd save_config 00:33:39.085 16:42:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:33:39.346 16:42:06 keyring_file -- keyring/file.sh@112 -- # config='{ 00:33:39.346 "subsystems": [ 00:33:39.346 { 00:33:39.346 "subsystem": "keyring", 00:33:39.346 "config": [ 00:33:39.346 { 00:33:39.346 "method": "keyring_file_add_key", 00:33:39.346 "params": { 00:33:39.346 "name": "key0", 00:33:39.346 "path": "/tmp/tmp.y890W0CZnw" 00:33:39.346 } 00:33:39.346 }, 00:33:39.346 { 00:33:39.346 "method": "keyring_file_add_key", 00:33:39.346 "params": { 00:33:39.346 "name": "key1", 
00:33:39.346 "path": "/tmp/tmp.jNCS3AQ32R" 00:33:39.346 } 00:33:39.346 } 00:33:39.346 ] 00:33:39.346 }, 00:33:39.346 { 00:33:39.346 "subsystem": "iobuf", 00:33:39.346 "config": [ 00:33:39.346 { 00:33:39.346 "method": "iobuf_set_options", 00:33:39.346 "params": { 00:33:39.346 "small_pool_count": 8192, 00:33:39.346 "large_pool_count": 1024, 00:33:39.346 "small_bufsize": 8192, 00:33:39.346 "large_bufsize": 135168 00:33:39.346 } 00:33:39.346 } 00:33:39.346 ] 00:33:39.346 }, 00:33:39.346 { 00:33:39.346 "subsystem": "sock", 00:33:39.346 "config": [ 00:33:39.346 { 00:33:39.346 "method": "sock_set_default_impl", 00:33:39.346 "params": { 00:33:39.346 "impl_name": "posix" 00:33:39.346 } 00:33:39.346 }, 00:33:39.346 { 00:33:39.346 "method": "sock_impl_set_options", 00:33:39.346 "params": { 00:33:39.346 "impl_name": "ssl", 00:33:39.346 "recv_buf_size": 4096, 00:33:39.346 "send_buf_size": 4096, 00:33:39.346 "enable_recv_pipe": true, 00:33:39.346 "enable_quickack": false, 00:33:39.346 "enable_placement_id": 0, 00:33:39.346 "enable_zerocopy_send_server": true, 00:33:39.346 "enable_zerocopy_send_client": false, 00:33:39.346 "zerocopy_threshold": 0, 00:33:39.346 "tls_version": 0, 00:33:39.346 "enable_ktls": false, 00:33:39.346 "enable_new_session_tickets": true 00:33:39.346 } 00:33:39.346 }, 00:33:39.346 { 00:33:39.346 "method": "sock_impl_set_options", 00:33:39.346 "params": { 00:33:39.346 "impl_name": "posix", 00:33:39.346 "recv_buf_size": 2097152, 00:33:39.346 "send_buf_size": 2097152, 00:33:39.346 "enable_recv_pipe": true, 00:33:39.346 "enable_quickack": false, 00:33:39.346 "enable_placement_id": 0, 00:33:39.346 "enable_zerocopy_send_server": true, 00:33:39.346 "enable_zerocopy_send_client": false, 00:33:39.346 "zerocopy_threshold": 0, 00:33:39.346 "tls_version": 0, 00:33:39.346 "enable_ktls": false, 00:33:39.346 "enable_new_session_tickets": false 00:33:39.346 } 00:33:39.346 } 00:33:39.346 ] 00:33:39.346 }, 00:33:39.346 { 00:33:39.346 "subsystem": "vmd", 00:33:39.346 "config": 
[] 00:33:39.346 }, 00:33:39.346 { 00:33:39.346 "subsystem": "accel", 00:33:39.346 "config": [ 00:33:39.346 { 00:33:39.346 "method": "accel_set_options", 00:33:39.346 "params": { 00:33:39.346 "small_cache_size": 128, 00:33:39.346 "large_cache_size": 16, 00:33:39.346 "task_count": 2048, 00:33:39.346 "sequence_count": 2048, 00:33:39.346 "buf_count": 2048 00:33:39.346 } 00:33:39.346 } 00:33:39.346 ] 00:33:39.346 }, 00:33:39.346 { 00:33:39.346 "subsystem": "bdev", 00:33:39.346 "config": [ 00:33:39.346 { 00:33:39.346 "method": "bdev_set_options", 00:33:39.346 "params": { 00:33:39.346 "bdev_io_pool_size": 65535, 00:33:39.346 "bdev_io_cache_size": 256, 00:33:39.346 "bdev_auto_examine": true, 00:33:39.346 "iobuf_small_cache_size": 128, 00:33:39.347 "iobuf_large_cache_size": 16 00:33:39.347 } 00:33:39.347 }, 00:33:39.347 { 00:33:39.347 "method": "bdev_raid_set_options", 00:33:39.347 "params": { 00:33:39.347 "process_window_size_kb": 1024 00:33:39.347 } 00:33:39.347 }, 00:33:39.347 { 00:33:39.347 "method": "bdev_iscsi_set_options", 00:33:39.347 "params": { 00:33:39.347 "timeout_sec": 30 00:33:39.347 } 00:33:39.347 }, 00:33:39.347 { 00:33:39.347 "method": "bdev_nvme_set_options", 00:33:39.347 "params": { 00:33:39.347 "action_on_timeout": "none", 00:33:39.347 "timeout_us": 0, 00:33:39.347 "timeout_admin_us": 0, 00:33:39.347 "keep_alive_timeout_ms": 10000, 00:33:39.347 "arbitration_burst": 0, 00:33:39.347 "low_priority_weight": 0, 00:33:39.347 "medium_priority_weight": 0, 00:33:39.347 "high_priority_weight": 0, 00:33:39.347 "nvme_adminq_poll_period_us": 10000, 00:33:39.347 "nvme_ioq_poll_period_us": 0, 00:33:39.347 "io_queue_requests": 512, 00:33:39.347 "delay_cmd_submit": true, 00:33:39.347 "transport_retry_count": 4, 00:33:39.347 "bdev_retry_count": 3, 00:33:39.347 "transport_ack_timeout": 0, 00:33:39.347 "ctrlr_loss_timeout_sec": 0, 00:33:39.347 "reconnect_delay_sec": 0, 00:33:39.347 "fast_io_fail_timeout_sec": 0, 00:33:39.347 "disable_auto_failback": false, 00:33:39.347 
"generate_uuids": false, 00:33:39.347 "transport_tos": 0, 00:33:39.347 "nvme_error_stat": false, 00:33:39.347 "rdma_srq_size": 0, 00:33:39.347 "io_path_stat": false, 00:33:39.347 "allow_accel_sequence": false, 00:33:39.347 "rdma_max_cq_size": 0, 00:33:39.347 "rdma_cm_event_timeout_ms": 0, 00:33:39.347 "dhchap_digests": [ 00:33:39.347 "sha256", 00:33:39.347 "sha384", 00:33:39.347 "sha512" 00:33:39.347 ], 00:33:39.347 "dhchap_dhgroups": [ 00:33:39.347 "null", 00:33:39.347 "ffdhe2048", 00:33:39.347 "ffdhe3072", 00:33:39.347 "ffdhe4096", 00:33:39.347 "ffdhe6144", 00:33:39.347 "ffdhe8192" 00:33:39.347 ] 00:33:39.347 } 00:33:39.347 }, 00:33:39.347 { 00:33:39.347 "method": "bdev_nvme_attach_controller", 00:33:39.347 "params": { 00:33:39.347 "name": "nvme0", 00:33:39.347 "trtype": "TCP", 00:33:39.347 "adrfam": "IPv4", 00:33:39.347 "traddr": "127.0.0.1", 00:33:39.347 "trsvcid": "4420", 00:33:39.347 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:39.347 "prchk_reftag": false, 00:33:39.347 "prchk_guard": false, 00:33:39.347 "ctrlr_loss_timeout_sec": 0, 00:33:39.347 "reconnect_delay_sec": 0, 00:33:39.347 "fast_io_fail_timeout_sec": 0, 00:33:39.347 "psk": "key0", 00:33:39.347 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:39.347 "hdgst": false, 00:33:39.347 "ddgst": false 00:33:39.347 } 00:33:39.347 }, 00:33:39.347 { 00:33:39.347 "method": "bdev_nvme_set_hotplug", 00:33:39.347 "params": { 00:33:39.347 "period_us": 100000, 00:33:39.347 "enable": false 00:33:39.347 } 00:33:39.347 }, 00:33:39.347 { 00:33:39.347 "method": "bdev_wait_for_examine" 00:33:39.347 } 00:33:39.347 ] 00:33:39.347 }, 00:33:39.347 { 00:33:39.347 "subsystem": "nbd", 00:33:39.347 "config": [] 00:33:39.347 } 00:33:39.347 ] 00:33:39.347 }' 00:33:39.347 16:42:06 keyring_file -- keyring/file.sh@114 -- # killprocess 3346138 00:33:39.347 16:42:06 keyring_file -- common/autotest_common.sh@949 -- # '[' -z 3346138 ']' 00:33:39.347 16:42:06 keyring_file -- common/autotest_common.sh@953 -- # kill -0 3346138 00:33:39.347 
16:42:06 keyring_file -- common/autotest_common.sh@954 -- # uname 00:33:39.347 16:42:06 keyring_file -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:33:39.347 16:42:06 keyring_file -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3346138 00:33:39.347 16:42:06 keyring_file -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:33:39.347 16:42:06 keyring_file -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:33:39.347 16:42:06 keyring_file -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3346138' 00:33:39.347 killing process with pid 3346138 00:33:39.347 16:42:06 keyring_file -- common/autotest_common.sh@968 -- # kill 3346138 00:33:39.347 Received shutdown signal, test time was about 1.000000 seconds 00:33:39.347 00:33:39.347 Latency(us) 00:33:39.347 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:39.347 =================================================================================================================== 00:33:39.347 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:39.347 16:42:06 keyring_file -- common/autotest_common.sh@973 -- # wait 3346138 00:33:39.609 16:42:06 keyring_file -- keyring/file.sh@117 -- # bperfpid=3347821 00:33:39.609 16:42:06 keyring_file -- keyring/file.sh@119 -- # waitforlisten 3347821 /var/tmp/bperf.sock 00:33:39.609 16:42:06 keyring_file -- common/autotest_common.sh@830 -- # '[' -z 3347821 ']' 00:33:39.609 16:42:06 keyring_file -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:39.609 16:42:06 keyring_file -- common/autotest_common.sh@835 -- # local max_retries=100 00:33:39.609 16:42:06 keyring_file -- keyring/file.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:33:39.609 16:42:06 keyring_file -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on 
UNIX domain socket /var/tmp/bperf.sock...' 00:33:39.609 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:39.609 16:42:06 keyring_file -- common/autotest_common.sh@839 -- # xtrace_disable 00:33:39.609 16:42:06 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:33:39.609 16:42:06 keyring_file -- keyring/file.sh@115 -- # echo '{ 00:33:39.609 "subsystems": [ 00:33:39.609 { 00:33:39.609 "subsystem": "keyring", 00:33:39.609 "config": [ 00:33:39.609 { 00:33:39.609 "method": "keyring_file_add_key", 00:33:39.609 "params": { 00:33:39.609 "name": "key0", 00:33:39.609 "path": "/tmp/tmp.y890W0CZnw" 00:33:39.609 } 00:33:39.609 }, 00:33:39.609 { 00:33:39.609 "method": "keyring_file_add_key", 00:33:39.609 "params": { 00:33:39.609 "name": "key1", 00:33:39.609 "path": "/tmp/tmp.jNCS3AQ32R" 00:33:39.609 } 00:33:39.609 } 00:33:39.609 ] 00:33:39.609 }, 00:33:39.609 { 00:33:39.609 "subsystem": "iobuf", 00:33:39.609 "config": [ 00:33:39.609 { 00:33:39.609 "method": "iobuf_set_options", 00:33:39.609 "params": { 00:33:39.609 "small_pool_count": 8192, 00:33:39.609 "large_pool_count": 1024, 00:33:39.609 "small_bufsize": 8192, 00:33:39.609 "large_bufsize": 135168 00:33:39.609 } 00:33:39.609 } 00:33:39.609 ] 00:33:39.609 }, 00:33:39.609 { 00:33:39.609 "subsystem": "sock", 00:33:39.609 "config": [ 00:33:39.609 { 00:33:39.609 "method": "sock_set_default_impl", 00:33:39.609 "params": { 00:33:39.609 "impl_name": "posix" 00:33:39.609 } 00:33:39.609 }, 00:33:39.609 { 00:33:39.609 "method": "sock_impl_set_options", 00:33:39.609 "params": { 00:33:39.609 "impl_name": "ssl", 00:33:39.609 "recv_buf_size": 4096, 00:33:39.609 "send_buf_size": 4096, 00:33:39.609 "enable_recv_pipe": true, 00:33:39.609 "enable_quickack": false, 00:33:39.609 "enable_placement_id": 0, 00:33:39.609 "enable_zerocopy_send_server": true, 00:33:39.609 "enable_zerocopy_send_client": false, 00:33:39.609 "zerocopy_threshold": 0, 00:33:39.609 "tls_version": 0, 00:33:39.609 
"enable_ktls": false, 00:33:39.609 "enable_new_session_tickets": true 00:33:39.609 } 00:33:39.609 }, 00:33:39.609 { 00:33:39.609 "method": "sock_impl_set_options", 00:33:39.609 "params": { 00:33:39.609 "impl_name": "posix", 00:33:39.609 "recv_buf_size": 2097152, 00:33:39.609 "send_buf_size": 2097152, 00:33:39.609 "enable_recv_pipe": true, 00:33:39.609 "enable_quickack": false, 00:33:39.609 "enable_placement_id": 0, 00:33:39.609 "enable_zerocopy_send_server": true, 00:33:39.609 "enable_zerocopy_send_client": false, 00:33:39.609 "zerocopy_threshold": 0, 00:33:39.609 "tls_version": 0, 00:33:39.609 "enable_ktls": false, 00:33:39.609 "enable_new_session_tickets": false 00:33:39.609 } 00:33:39.609 } 00:33:39.609 ] 00:33:39.609 }, 00:33:39.609 { 00:33:39.609 "subsystem": "vmd", 00:33:39.609 "config": [] 00:33:39.609 }, 00:33:39.609 { 00:33:39.609 "subsystem": "accel", 00:33:39.609 "config": [ 00:33:39.609 { 00:33:39.609 "method": "accel_set_options", 00:33:39.609 "params": { 00:33:39.609 "small_cache_size": 128, 00:33:39.609 "large_cache_size": 16, 00:33:39.609 "task_count": 2048, 00:33:39.609 "sequence_count": 2048, 00:33:39.609 "buf_count": 2048 00:33:39.609 } 00:33:39.609 } 00:33:39.609 ] 00:33:39.609 }, 00:33:39.609 { 00:33:39.609 "subsystem": "bdev", 00:33:39.609 "config": [ 00:33:39.609 { 00:33:39.609 "method": "bdev_set_options", 00:33:39.609 "params": { 00:33:39.609 "bdev_io_pool_size": 65535, 00:33:39.609 "bdev_io_cache_size": 256, 00:33:39.609 "bdev_auto_examine": true, 00:33:39.609 "iobuf_small_cache_size": 128, 00:33:39.609 "iobuf_large_cache_size": 16 00:33:39.609 } 00:33:39.609 }, 00:33:39.610 { 00:33:39.610 "method": "bdev_raid_set_options", 00:33:39.610 "params": { 00:33:39.610 "process_window_size_kb": 1024 00:33:39.610 } 00:33:39.610 }, 00:33:39.610 { 00:33:39.610 "method": "bdev_iscsi_set_options", 00:33:39.610 "params": { 00:33:39.610 "timeout_sec": 30 00:33:39.610 } 00:33:39.610 }, 00:33:39.610 { 00:33:39.610 "method": "bdev_nvme_set_options", 
00:33:39.610 "params": { 00:33:39.610 "action_on_timeout": "none", 00:33:39.610 "timeout_us": 0, 00:33:39.610 "timeout_admin_us": 0, 00:33:39.610 "keep_alive_timeout_ms": 10000, 00:33:39.610 "arbitration_burst": 0, 00:33:39.610 "low_priority_weight": 0, 00:33:39.610 "medium_priority_weight": 0, 00:33:39.610 "high_priority_weight": 0, 00:33:39.610 "nvme_adminq_poll_period_us": 10000, 00:33:39.610 "nvme_ioq_poll_period_us": 0, 00:33:39.610 "io_queue_requests": 512, 00:33:39.610 "delay_cmd_submit": true, 00:33:39.610 "transport_retry_count": 4, 00:33:39.610 "bdev_retry_count": 3, 00:33:39.610 "transport_ack_timeout": 0, 00:33:39.610 "ctrlr_loss_timeout_sec": 0, 00:33:39.610 "reconnect_delay_sec": 0, 00:33:39.610 "fast_io_fail_timeout_sec": 0, 00:33:39.610 "disable_auto_failback": false, 00:33:39.610 "generate_uuids": false, 00:33:39.610 "transport_tos": 0, 00:33:39.610 "nvme_error_stat": false, 00:33:39.610 "rdma_srq_size": 0, 00:33:39.610 "io_path_stat": false, 00:33:39.610 "allow_accel_sequence": false, 00:33:39.610 "rdma_max_cq_size": 0, 00:33:39.610 "rdma_cm_event_timeout_ms": 0, 00:33:39.610 "dhchap_digests": [ 00:33:39.610 "sha256", 00:33:39.610 "sha384", 00:33:39.610 "sha512" 00:33:39.610 ], 00:33:39.610 "dhchap_dhgroups": [ 00:33:39.610 "null", 00:33:39.610 "ffdhe2048", 00:33:39.610 "ffdhe3072", 00:33:39.610 "ffdhe4096", 00:33:39.610 "ffdhe6144", 00:33:39.610 "ffdhe8192" 00:33:39.610 ] 00:33:39.610 } 00:33:39.610 }, 00:33:39.610 { 00:33:39.610 "method": "bdev_nvme_attach_controller", 00:33:39.610 "params": { 00:33:39.610 "name": "nvme0", 00:33:39.610 "trtype": "TCP", 00:33:39.610 "adrfam": "IPv4", 00:33:39.610 "traddr": "127.0.0.1", 00:33:39.610 "trsvcid": "4420", 00:33:39.610 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:39.610 "prchk_reftag": false, 00:33:39.610 "prchk_guard": false, 00:33:39.610 "ctrlr_loss_timeout_sec": 0, 00:33:39.610 "reconnect_delay_sec": 0, 00:33:39.610 "fast_io_fail_timeout_sec": 0, 00:33:39.610 "psk": "key0", 00:33:39.610 "hostnqn": 
"nqn.2016-06.io.spdk:host0", 00:33:39.610 "hdgst": false, 00:33:39.610 "ddgst": false 00:33:39.610 } 00:33:39.610 }, 00:33:39.610 { 00:33:39.610 "method": "bdev_nvme_set_hotplug", 00:33:39.610 "params": { 00:33:39.610 "period_us": 100000, 00:33:39.610 "enable": false 00:33:39.610 } 00:33:39.610 }, 00:33:39.610 { 00:33:39.610 "method": "bdev_wait_for_examine" 00:33:39.610 } 00:33:39.610 ] 00:33:39.610 }, 00:33:39.610 { 00:33:39.610 "subsystem": "nbd", 00:33:39.610 "config": [] 00:33:39.610 } 00:33:39.610 ] 00:33:39.610 }' 00:33:39.610 [2024-06-07 16:42:06.261188] Starting SPDK v24.09-pre git sha1 5a57befde / DPDK 24.03.0 initialization... 00:33:39.610 [2024-06-07 16:42:06.261243] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3347821 ] 00:33:39.610 EAL: No free 2048 kB hugepages reported on node 1 00:33:39.610 [2024-06-07 16:42:06.334389] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:39.610 [2024-06-07 16:42:06.387634] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:33:39.871 [2024-06-07 16:42:06.529229] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:33:40.506 16:42:07 keyring_file -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:33:40.506 16:42:07 keyring_file -- common/autotest_common.sh@863 -- # return 0 00:33:40.506 16:42:07 keyring_file -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:33:40.506 16:42:07 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:40.506 16:42:07 keyring_file -- keyring/file.sh@120 -- # jq length 00:33:40.506 16:42:07 keyring_file -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:33:40.506 16:42:07 keyring_file -- keyring/file.sh@121 -- # get_refcnt key0 
00:33:40.506 16:42:07 keyring_file -- keyring/common.sh@12 -- # get_key key0
00:33:40.506 16:42:07 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt
00:33:40.506 16:42:07 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys
00:33:40.506 16:42:07 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:33:40.506 16:42:07 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")'
00:33:40.506 16:42:07 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 ))
00:33:40.506 16:42:07 keyring_file -- keyring/file.sh@122 -- # get_refcnt key1
00:33:40.506 16:42:07 keyring_file -- keyring/common.sh@12 -- # get_key key1
00:33:40.506 16:42:07 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt
00:33:40.766 16:42:07 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys
00:33:40.766 16:42:07 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:33:40.766 16:42:07 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")'
00:33:40.766 16:42:07 keyring_file -- keyring/file.sh@122 -- # (( 1 == 1 ))
00:33:40.766 16:42:07 keyring_file -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers
00:33:40.766 16:42:07 keyring_file -- keyring/file.sh@123 -- # jq -r '.[].name'
00:33:40.766 16:42:07 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers
00:33:41.028 16:42:07 keyring_file -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]]
00:33:41.028 16:42:07 keyring_file -- keyring/file.sh@1 -- # cleanup
00:33:41.028 16:42:07 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.y890W0CZnw /tmp/tmp.jNCS3AQ32R
00:33:41.028 16:42:07 keyring_file -- keyring/file.sh@20 -- # killprocess 3347821
00:33:41.028 16:42:07 keyring_file -- common/autotest_common.sh@949 -- # '[' -z 3347821 ']'
00:33:41.028 16:42:07 keyring_file -- common/autotest_common.sh@953 -- # kill -0 3347821
00:33:41.028 16:42:07 keyring_file -- common/autotest_common.sh@954 -- # uname
00:33:41.028 16:42:07 keyring_file -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']'
00:33:41.028 16:42:07 keyring_file -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3347821
00:33:41.028 16:42:07 keyring_file -- common/autotest_common.sh@955 -- # process_name=reactor_1
00:33:41.028 16:42:07 keyring_file -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']'
00:33:41.028 16:42:07 keyring_file -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3347821'
killing process with pid 3347821
16:42:07 keyring_file -- common/autotest_common.sh@968 -- # kill 3347821
00:33:41.028 Received shutdown signal, test time was about 1.000000 seconds
00:33:41.028
00:33:41.028 Latency(us)
00:33:41.028 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:33:41.028 ===================================================================================================================
00:33:41.028 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00
00:33:41.028 16:42:07 keyring_file -- common/autotest_common.sh@973 -- # wait 3347821
00:33:41.028 16:42:07 keyring_file -- keyring/file.sh@21 -- # killprocess 3345950
00:33:41.028 16:42:07 keyring_file -- common/autotest_common.sh@949 -- # '[' -z 3345950 ']'
00:33:41.028 16:42:07 keyring_file -- common/autotest_common.sh@953 -- # kill -0 3345950
00:33:41.028 16:42:07 keyring_file -- common/autotest_common.sh@954 -- # uname
00:33:41.028 16:42:07 keyring_file -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']'
00:33:41.028 16:42:07 keyring_file -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3345950
00:33:41.289 16:42:07 keyring_file -- common/autotest_common.sh@955 -- # process_name=reactor_0
00:33:41.289 16:42:07 keyring_file -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']'
00:33:41.289 16:42:07 keyring_file -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3345950'
killing process with pid 3345950
16:42:07 keyring_file -- common/autotest_common.sh@968 -- # kill 3345950
00:33:41.289 [2024-06-07 16:42:07.903324] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times
00:33:41.289 16:42:07 keyring_file -- common/autotest_common.sh@973 -- # wait 3345950
00:33:41.289
00:33:41.289 real 0m11.018s
00:33:41.289 user 0m25.881s
00:33:41.289 sys 0m2.568s
00:33:41.289 16:42:08 keyring_file -- common/autotest_common.sh@1125 -- # xtrace_disable
00:33:41.289 16:42:08 keyring_file -- common/autotest_common.sh@10 -- # set +x
00:33:41.289 ************************************
00:33:41.289 END TEST keyring_file
00:33:41.289 ************************************
00:33:41.550 16:42:08 -- spdk/autotest.sh@296 -- # [[ y == y ]]
00:33:41.550 16:42:08 -- spdk/autotest.sh@297 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh
00:33:41.550 16:42:08 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']'
00:33:41.550 16:42:08 -- common/autotest_common.sh@1106 -- # xtrace_disable
00:33:41.550 16:42:08 -- common/autotest_common.sh@10 -- # set +x
00:33:41.550 ************************************
00:33:41.550 START TEST keyring_linux
00:33:41.550 ************************************
00:33:41.550 16:42:08 keyring_linux -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh
00:33:41.550 * Looking for test storage...
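The `killprocess` sequence traced above always follows the same pattern: a `kill -0` liveness check, a `uname`, a `ps --no-headers -o comm=` lookup of the target's command name, a guard that refuses to kill a wrapping `sudo`, then `kill` and `wait`. A minimal Python sketch of that flow, reconstructed from the trace rather than copied from `autotest_common.sh` (the function name and return convention here are illustrative assumptions):

```python
import os
import signal
import subprocess

def killprocess(pid: int) -> bool:
    """Sketch of the killprocess flow visible in the xtrace output above."""
    try:
        os.kill(pid, 0)  # liveness check, like `kill -0 $pid`
    except OSError:
        return False     # process is already gone
    # Look up the command name the same way the trace does (Linux procps).
    name = subprocess.check_output(
        ["ps", "--no-headers", "-o", "comm=", str(pid)]
    ).decode().strip()
    if name == "sudo":
        return False     # never kill the wrapping sudo itself
    print(f"killing process with pid {pid}")
    os.kill(pid, signal.SIGTERM)
    return True
```

In the log the same check is what lets the harness tell the SPDK reactor processes (`reactor_0`, `reactor_1`) apart from the `sudo` that launched them.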
00:33:41.550 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:33:41.551 16:42:08 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:33:41.551 16:42:08 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:41.551 16:42:08 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:33:41.551 16:42:08 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:41.551 16:42:08 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:41.551 16:42:08 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:41.551 16:42:08 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:41.551 16:42:08 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:41.551 16:42:08 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:41.551 16:42:08 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:41.551 16:42:08 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:41.551 16:42:08 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:41.551 16:42:08 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:41.551 16:42:08 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:33:41.551 16:42:08 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:33:41.551 16:42:08 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:41.551 16:42:08 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:41.551 16:42:08 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:41.551 16:42:08 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:41.551 16:42:08 keyring_linux -- 
nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:41.551 16:42:08 keyring_linux -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:41.551 16:42:08 keyring_linux -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:41.551 16:42:08 keyring_linux -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:41.551 16:42:08 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:41.551 16:42:08 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:41.551 16:42:08 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:41.551 16:42:08 keyring_linux -- paths/export.sh@5 -- # export PATH 00:33:41.551 16:42:08 keyring_linux -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:41.551 16:42:08 keyring_linux -- nvmf/common.sh@47 -- # : 0 00:33:41.551 16:42:08 keyring_linux -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:33:41.551 16:42:08 keyring_linux -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:33:41.551 16:42:08 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:41.551 16:42:08 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:41.551 16:42:08 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:41.551 16:42:08 keyring_linux -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:33:41.551 16:42:08 keyring_linux -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:33:41.551 16:42:08 keyring_linux -- nvmf/common.sh@51 -- # have_pci_nics=0 00:33:41.551 16:42:08 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:33:41.551 16:42:08 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:33:41.551 16:42:08 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:33:41.551 16:42:08 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:33:41.551 16:42:08 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:33:41.551 16:42:08 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:33:41.551 16:42:08 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:33:41.551 16:42:08 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:33:41.551 16:42:08 keyring_linux -- 
keyring/common.sh@17 -- # name=key0 00:33:41.551 16:42:08 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:33:41.551 16:42:08 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:33:41.551 16:42:08 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:33:41.551 16:42:08 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:33:41.551 16:42:08 keyring_linux -- nvmf/common.sh@721 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:33:41.551 16:42:08 keyring_linux -- nvmf/common.sh@708 -- # local prefix key digest 00:33:41.551 16:42:08 keyring_linux -- nvmf/common.sh@710 -- # prefix=NVMeTLSkey-1 00:33:41.551 16:42:08 keyring_linux -- nvmf/common.sh@710 -- # key=00112233445566778899aabbccddeeff 00:33:41.551 16:42:08 keyring_linux -- nvmf/common.sh@710 -- # digest=0 00:33:41.551 16:42:08 keyring_linux -- nvmf/common.sh@711 -- # python - 00:33:41.551 16:42:08 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:33:41.551 16:42:08 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:33:41.551 /tmp/:spdk-test:key0 00:33:41.551 16:42:08 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:33:41.551 16:42:08 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:33:41.551 16:42:08 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:33:41.551 16:42:08 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:33:41.551 16:42:08 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:33:41.551 16:42:08 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:33:41.551 16:42:08 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:33:41.551 16:42:08 keyring_linux -- nvmf/common.sh@721 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 
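The `format_interchange_psk`/`format_key` calls traced above turn the raw hex key into the `NVMeTLSkey-1:00:...:` string that later shows up in the `keyctl add` commands. A minimal Python sketch of that interchange format, under the assumption that it matches SPDK's `nvmf/common.sh` helper (key carried as its ASCII hex string, followed by a little-endian CRC32 trailer, base64-encoded):

```python
import base64
import struct
import zlib

def format_interchange_psk(hex_key: str, hmac_id: int = 0) -> str:
    """Build a TLS PSK in the NVMe interchange format:
    NVMeTLSkey-1:<hmac>:<base64(key bytes + 4-byte CRC32)>:"""
    key = hex_key.encode()                     # ASCII bytes of the configured key
    crc = struct.pack("<I", zlib.crc32(key))   # little-endian CRC32 trailer
    body = base64.b64encode(key + crc).decode()
    return "NVMeTLSkey-1:%02d:%s:" % (hmac_id, body)

psk = format_interchange_psk("00112233445566778899aabbccddeeff")
print(psk)
```

The `00` field corresponds to the digest argument `0` seen in the trace (no HMAC); the trailing colon is part of the format.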
00:33:41.551 16:42:08 keyring_linux -- nvmf/common.sh@708 -- # local prefix key digest 00:33:41.551 16:42:08 keyring_linux -- nvmf/common.sh@710 -- # prefix=NVMeTLSkey-1 00:33:41.551 16:42:08 keyring_linux -- nvmf/common.sh@710 -- # key=112233445566778899aabbccddeeff00 00:33:41.551 16:42:08 keyring_linux -- nvmf/common.sh@710 -- # digest=0 00:33:41.551 16:42:08 keyring_linux -- nvmf/common.sh@711 -- # python - 00:33:41.812 16:42:08 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:33:41.812 16:42:08 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:33:41.812 /tmp/:spdk-test:key1 00:33:41.812 16:42:08 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=3348429 00:33:41.812 16:42:08 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 3348429 00:33:41.812 16:42:08 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:33:41.812 16:42:08 keyring_linux -- common/autotest_common.sh@830 -- # '[' -z 3348429 ']' 00:33:41.812 16:42:08 keyring_linux -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:41.812 16:42:08 keyring_linux -- common/autotest_common.sh@835 -- # local max_retries=100 00:33:41.812 16:42:08 keyring_linux -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:41.812 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:41.812 16:42:08 keyring_linux -- common/autotest_common.sh@839 -- # xtrace_disable 00:33:41.812 16:42:08 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:33:41.812 [2024-06-07 16:42:08.497433] Starting SPDK v24.09-pre git sha1 5a57befde / DPDK 24.03.0 initialization... 
00:33:41.812 [2024-06-07 16:42:08.497508] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3348429 ] 00:33:41.812 EAL: No free 2048 kB hugepages reported on node 1 00:33:41.812 [2024-06-07 16:42:08.563555] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:41.812 [2024-06-07 16:42:08.637732] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:33:42.754 16:42:09 keyring_linux -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:33:42.754 16:42:09 keyring_linux -- common/autotest_common.sh@863 -- # return 0 00:33:42.754 16:42:09 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:33:42.754 16:42:09 keyring_linux -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:42.754 16:42:09 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:33:42.754 [2024-06-07 16:42:09.268023] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:42.754 null0 00:33:42.754 [2024-06-07 16:42:09.300066] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:33:42.754 [2024-06-07 16:42:09.300435] tcp.c: 982:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:33:42.754 16:42:09 keyring_linux -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:42.754 16:42:09 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:33:42.754 796743286 00:33:42.754 16:42:09 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:33:42.754 823421709 00:33:42.754 16:42:09 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=3348593 00:33:42.754 16:42:09 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 3348593 /var/tmp/bperf.sock 
00:33:42.754 16:42:09 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:33:42.754 16:42:09 keyring_linux -- common/autotest_common.sh@830 -- # '[' -z 3348593 ']' 00:33:42.754 16:42:09 keyring_linux -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:42.754 16:42:09 keyring_linux -- common/autotest_common.sh@835 -- # local max_retries=100 00:33:42.754 16:42:09 keyring_linux -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:42.754 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:42.754 16:42:09 keyring_linux -- common/autotest_common.sh@839 -- # xtrace_disable 00:33:42.754 16:42:09 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:33:42.754 [2024-06-07 16:42:09.375391] Starting SPDK v24.09-pre git sha1 5a57befde / DPDK 24.03.0 initialization... 
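Every `bperf_cmd` in the trace shells out to `scripts/rpc.py -s /var/tmp/bperf.sock <method>`, which speaks JSON-RPC 2.0 over the application's Unix-domain socket. A minimal client sketch under that assumption (the method names are taken from the trace; the framing below is a simplification of what `rpc.py` actually does):

```python
import json
import socket

def build_request(method, params=None, req_id=1):
    """Serialize one JSON-RPC 2.0 request as bytes."""
    req = {"jsonrpc": "2.0", "method": method, "id": req_id}
    if params is not None:
        req["params"] = params
    return json.dumps(req).encode()

def call(sock_path, method, params=None):
    """Send a request over the app's RPC socket and decode the reply.
    Only usable against a running SPDK app listening on sock_path."""
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
        s.connect(sock_path)
        s.sendall(build_request(method, params))
        buf = b""
        while True:
            buf += s.recv(4096)
            try:
                return json.loads(buf.decode())
            except json.JSONDecodeError:
                continue  # reply not fully received yet

# e.g. call("/var/tmp/bperf.sock", "keyring_get_keys") against the bdevperf above
print(build_request("keyring_get_keys").decode())
```

This is why the trace can pipe `bperf_cmd keyring_get_keys` straight into `jq`: the reply is plain JSON on the socket.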
00:33:42.754 [2024-06-07 16:42:09.375446] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3348593 ] 00:33:42.754 EAL: No free 2048 kB hugepages reported on node 1 00:33:42.754 [2024-06-07 16:42:09.449061] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:42.754 [2024-06-07 16:42:09.503070] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:33:43.326 16:42:10 keyring_linux -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:33:43.326 16:42:10 keyring_linux -- common/autotest_common.sh@863 -- # return 0 00:33:43.326 16:42:10 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:33:43.326 16:42:10 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:33:43.587 16:42:10 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:33:43.587 16:42:10 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:33:43.848 16:42:10 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:33:43.848 16:42:10 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:33:43.848 [2024-06-07 16:42:10.625721] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:33:43.848 nvme0n1 00:33:44.108 
16:42:10 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:33:44.108 16:42:10 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:33:44.108 16:42:10 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:33:44.108 16:42:10 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:33:44.108 16:42:10 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:33:44.108 16:42:10 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:44.108 16:42:10 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:33:44.108 16:42:10 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:33:44.108 16:42:10 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:33:44.108 16:42:10 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:33:44.108 16:42:10 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:44.108 16:42:10 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:33:44.108 16:42:10 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:44.368 16:42:11 keyring_linux -- keyring/linux.sh@25 -- # sn=796743286 00:33:44.368 16:42:11 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:33:44.368 16:42:11 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:33:44.368 16:42:11 keyring_linux -- keyring/linux.sh@26 -- # [[ 796743286 == \7\9\6\7\4\3\2\8\6 ]] 00:33:44.368 16:42:11 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 796743286 00:33:44.368 16:42:11 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == 
\N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]]
00:33:44.368 16:42:11 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:33:44.368 Running I/O for 1 seconds...
00:33:45.310
00:33:45.310 Latency(us)
00:33:45.310 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:33:45.310 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:33:45.310 nvme0n1 : 1.02 8599.22 33.59 0.00 0.00 14767.76 2826.24 15291.73
00:33:45.310 ===================================================================================================================
00:33:45.310 Total : 8599.22 33.59 0.00 0.00 14767.76 2826.24 15291.73
00:33:45.310 0
00:33:45.310 16:42:12 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0
00:33:45.310 16:42:12 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0
00:33:45.571 16:42:12 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0
00:33:45.571 16:42:12 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name=
00:33:45.571 16:42:12 keyring_linux -- keyring/linux.sh@20 -- # local sn
00:33:45.571 16:42:12 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys
00:33:45.571 16:42:12 keyring_linux -- keyring/linux.sh@22 -- # jq length
00:33:45.571 16:42:12 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:33:45.832 16:42:12 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count ))
00:33:45.832 16:42:12 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 ))
00:33:45.832 16:42:12 keyring_linux -- keyring/linux.sh@23 -- # return
00:33:45.832 16:42:12 keyring_linux --
keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:33:45.832 16:42:12 keyring_linux -- common/autotest_common.sh@649 -- # local es=0 00:33:45.832 16:42:12 keyring_linux -- common/autotest_common.sh@651 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:33:45.832 16:42:12 keyring_linux -- common/autotest_common.sh@637 -- # local arg=bperf_cmd 00:33:45.832 16:42:12 keyring_linux -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:33:45.832 16:42:12 keyring_linux -- common/autotest_common.sh@641 -- # type -t bperf_cmd 00:33:45.832 16:42:12 keyring_linux -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:33:45.832 16:42:12 keyring_linux -- common/autotest_common.sh@652 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:33:45.832 16:42:12 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:33:45.832 [2024-06-07 16:42:12.622345] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 429:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:33:45.832 [2024-06-07 16:42:12.623070] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x147d3b0 (107): Transport endpoint is not connected 00:33:45.832 [2024-06-07 16:42:12.624066] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush 
tqpair=0x147d3b0 (9): Bad file descriptor
00:33:45.832 [2024-06-07 16:42:12.625067] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state
00:33:45.832 [2024-06-07 16:42:12.625074] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1
00:33:45.832 [2024-06-07 16:42:12.625079] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state.
00:33:45.832 request:
00:33:45.832 {
00:33:45.832 "name": "nvme0",
00:33:45.832 "trtype": "tcp",
00:33:45.832 "traddr": "127.0.0.1",
00:33:45.832 "hostnqn": "nqn.2016-06.io.spdk:host0",
00:33:45.832 "adrfam": "ipv4",
00:33:45.832 "trsvcid": "4420",
00:33:45.832 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:33:45.832 "psk": ":spdk-test:key1",
00:33:45.832 "method": "bdev_nvme_attach_controller",
00:33:45.832 "req_id": 1
00:33:45.832 }
00:33:45.832 Got JSON-RPC error response
00:33:45.832 response:
00:33:45.832 {
00:33:45.832 "code": -5,
00:33:45.832 "message": "Input/output error"
00:33:45.832 }
00:33:45.832 16:42:12 keyring_linux -- common/autotest_common.sh@652 -- # es=1
00:33:45.832 16:42:12 keyring_linux -- common/autotest_common.sh@660 -- # (( es > 128 ))
00:33:45.832 16:42:12 keyring_linux -- common/autotest_common.sh@671 -- # [[ -n '' ]]
00:33:45.832 16:42:12 keyring_linux -- common/autotest_common.sh@676 -- # (( !es == 0 ))
00:33:45.832 16:42:12 keyring_linux -- keyring/linux.sh@1 -- # cleanup
00:33:45.832 16:42:12 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1
00:33:45.832 16:42:12 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0
00:33:45.832 16:42:12 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn
00:33:45.832 16:42:12 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0
00:33:45.832 16:42:12 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0
00:33:45.832 16:42:12 keyring_linux -- keyring/linux.sh@33 -- # sn=796743286
00:33:45.832 16:42:12
keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 796743286
1 links removed
00:33:45.832 16:42:12 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1
00:33:45.832 16:42:12 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1
00:33:45.832 16:42:12 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn
00:33:45.832 16:42:12 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1
00:33:45.832 16:42:12 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1
00:33:45.832 16:42:12 keyring_linux -- keyring/linux.sh@33 -- # sn=823421709
00:33:45.832 16:42:12 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 823421709
1 links removed
00:33:45.832 16:42:12 keyring_linux -- keyring/linux.sh@41 -- # killprocess 3348593
00:33:45.832 16:42:12 keyring_linux -- common/autotest_common.sh@949 -- # '[' -z 3348593 ']'
00:33:45.832 16:42:12 keyring_linux -- common/autotest_common.sh@953 -- # kill -0 3348593
00:33:45.832 16:42:12 keyring_linux -- common/autotest_common.sh@954 -- # uname
00:33:45.832 16:42:12 keyring_linux -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']'
00:33:45.832 16:42:12 keyring_linux -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3348593
00:33:46.093 16:42:12 keyring_linux -- common/autotest_common.sh@955 -- # process_name=reactor_1
00:33:46.093 16:42:12 keyring_linux -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']'
00:33:46.093 16:42:12 keyring_linux -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3348593'
killing process with pid 3348593
16:42:12 keyring_linux -- common/autotest_common.sh@968 -- # kill 3348593
00:33:46.093 Received shutdown signal, test time was about 1.000000 seconds
00:33:46.093
00:33:46.093 Latency(us)
00:33:46.093 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:33:46.093 ===================================================================================================================
00:33:46.093 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:33:46.093 16:42:12 keyring_linux -- common/autotest_common.sh@973 -- # wait 3348593
00:33:46.093 16:42:12 keyring_linux -- keyring/linux.sh@42 -- # killprocess 3348429
00:33:46.093 16:42:12 keyring_linux -- common/autotest_common.sh@949 -- # '[' -z 3348429 ']'
00:33:46.093 16:42:12 keyring_linux -- common/autotest_common.sh@953 -- # kill -0 3348429
00:33:46.093 16:42:12 keyring_linux -- common/autotest_common.sh@954 -- # uname
00:33:46.093 16:42:12 keyring_linux -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']'
00:33:46.093 16:42:12 keyring_linux -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3348429
00:33:46.093 16:42:12 keyring_linux -- common/autotest_common.sh@955 -- # process_name=reactor_0
00:33:46.093 16:42:12 keyring_linux -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']'
00:33:46.093 16:42:12 keyring_linux -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3348429'
killing process with pid 3348429
16:42:12 keyring_linux -- common/autotest_common.sh@968 -- # kill 3348429
00:33:46.093 16:42:12 keyring_linux -- common/autotest_common.sh@973 -- # wait 3348429
00:33:46.353
00:33:46.353 real 0m4.889s
00:33:46.353 user 0m8.329s
00:33:46.353 sys 0m1.448s
00:33:46.353 16:42:13 keyring_linux -- common/autotest_common.sh@1125 -- # xtrace_disable
00:33:46.353 16:42:13 keyring_linux -- common/autotest_common.sh@10 -- # set +x
00:33:46.353 ************************************
00:33:46.353 END TEST keyring_linux
00:33:46.353 ************************************
00:33:46.353 16:42:13 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']'
00:33:46.353 16:42:13 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']'
00:33:46.353 16:42:13 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']'
00:33:46.353 16:42:13 -- spdk/autotest.sh@321 -- # '[' 0 -eq
1 ']' 00:33:46.353 16:42:13 -- spdk/autotest.sh@330 -- # '[' 0 -eq 1 ']' 00:33:46.353 16:42:13 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:33:46.353 16:42:13 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:33:46.353 16:42:13 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:33:46.353 16:42:13 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 00:33:46.353 16:42:13 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 00:33:46.353 16:42:13 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:33:46.353 16:42:13 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 00:33:46.353 16:42:13 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:33:46.353 16:42:13 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:33:46.353 16:42:13 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:33:46.353 16:42:13 -- spdk/autotest.sh@380 -- # trap - SIGINT SIGTERM EXIT 00:33:46.353 16:42:13 -- spdk/autotest.sh@382 -- # timing_enter post_cleanup 00:33:46.353 16:42:13 -- common/autotest_common.sh@723 -- # xtrace_disable 00:33:46.353 16:42:13 -- common/autotest_common.sh@10 -- # set +x 00:33:46.353 16:42:13 -- spdk/autotest.sh@383 -- # autotest_cleanup 00:33:46.353 16:42:13 -- common/autotest_common.sh@1391 -- # local autotest_es=0 00:33:46.353 16:42:13 -- common/autotest_common.sh@1392 -- # xtrace_disable 00:33:46.353 16:42:13 -- common/autotest_common.sh@10 -- # set +x 00:33:54.515 INFO: APP EXITING 00:33:54.515 INFO: killing all VMs 00:33:54.515 INFO: killing vhost app 00:33:54.515 WARN: no vhost pid file found 00:33:54.515 INFO: EXIT DONE 00:33:57.063 0000:80:01.6 (8086 0b00): Already using the ioatdma driver 00:33:57.063 0000:80:01.7 (8086 0b00): Already using the ioatdma driver 00:33:57.063 0000:80:01.4 (8086 0b00): Already using the ioatdma driver 00:33:57.325 0000:80:01.5 (8086 0b00): Already using the ioatdma driver 00:33:57.325 0000:80:01.2 (8086 0b00): Already using the ioatdma driver 00:33:57.325 0000:80:01.3 (8086 0b00): Already using the ioatdma driver 00:33:57.325 0000:80:01.0 (8086 0b00): Already using the ioatdma 
driver 00:33:57.325 0000:80:01.1 (8086 0b00): Already using the ioatdma driver 00:33:57.325 0000:65:00.0 (144d a80a): Already using the nvme driver 00:33:57.325 0000:00:01.6 (8086 0b00): Already using the ioatdma driver 00:33:57.325 0000:00:01.7 (8086 0b00): Already using the ioatdma driver 00:33:57.325 0000:00:01.4 (8086 0b00): Already using the ioatdma driver 00:33:57.325 0000:00:01.5 (8086 0b00): Already using the ioatdma driver 00:33:57.325 0000:00:01.2 (8086 0b00): Already using the ioatdma driver 00:33:57.585 0000:00:01.3 (8086 0b00): Already using the ioatdma driver 00:33:57.585 0000:00:01.0 (8086 0b00): Already using the ioatdma driver 00:33:57.585 0000:00:01.1 (8086 0b00): Already using the ioatdma driver 00:34:00.886 Cleaning 00:34:00.886 Removing: /var/run/dpdk/spdk0/config 00:34:00.886 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:34:00.886 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:34:00.886 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:34:00.886 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:34:00.886 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:34:00.886 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:34:00.886 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:34:00.886 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:34:00.886 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:34:00.887 Removing: /var/run/dpdk/spdk0/hugepage_info 00:34:00.887 Removing: /var/run/dpdk/spdk1/config 00:34:00.887 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:34:01.147 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:34:01.147 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:34:01.147 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:34:01.147 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:34:01.147 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:34:01.147 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:34:01.147 
Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:34:01.147 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:34:01.147 Removing: /var/run/dpdk/spdk1/hugepage_info 00:34:01.147 Removing: /var/run/dpdk/spdk1/mp_socket 00:34:01.147 Removing: /var/run/dpdk/spdk2/config 00:34:01.147 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:34:01.147 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:34:01.147 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:34:01.147 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:34:01.147 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:34:01.147 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:34:01.147 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:34:01.147 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:34:01.147 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:34:01.147 Removing: /var/run/dpdk/spdk2/hugepage_info 00:34:01.147 Removing: /var/run/dpdk/spdk3/config 00:34:01.147 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:34:01.147 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:34:01.147 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:34:01.147 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:34:01.147 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:34:01.147 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:34:01.147 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:34:01.147 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:34:01.147 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:34:01.147 Removing: /var/run/dpdk/spdk3/hugepage_info 00:34:01.147 Removing: /var/run/dpdk/spdk4/config 00:34:01.147 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:34:01.147 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:34:01.147 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:34:01.147 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:34:01.147 Removing: 
/var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:34:01.147 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:34:01.147 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:34:01.147 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:34:01.147 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:34:01.147 Removing: /var/run/dpdk/spdk4/hugepage_info 00:34:01.147 Removing: /dev/shm/bdev_svc_trace.1 00:34:01.147 Removing: /dev/shm/nvmf_trace.0 00:34:01.147 Removing: /dev/shm/spdk_tgt_trace.pid2881340 00:34:01.147 Removing: /var/run/dpdk/spdk0 00:34:01.147 Removing: /var/run/dpdk/spdk1 00:34:01.147 Removing: /var/run/dpdk/spdk2 00:34:01.147 Removing: /var/run/dpdk/spdk3 00:34:01.147 Removing: /var/run/dpdk/spdk4 00:34:01.147 Removing: /var/run/dpdk/spdk_pid2879734 00:34:01.147 Removing: /var/run/dpdk/spdk_pid2881340 00:34:01.147 Removing: /var/run/dpdk/spdk_pid2881918 00:34:01.147 Removing: /var/run/dpdk/spdk_pid2882958 00:34:01.147 Removing: /var/run/dpdk/spdk_pid2883292 00:34:01.147 Removing: /var/run/dpdk/spdk_pid2884386 00:34:01.147 Removing: /var/run/dpdk/spdk_pid2884688 00:34:01.147 Removing: /var/run/dpdk/spdk_pid2884888 00:34:01.408 Removing: /var/run/dpdk/spdk_pid2885941 00:34:01.408 Removing: /var/run/dpdk/spdk_pid2886614 00:34:01.408 Removing: /var/run/dpdk/spdk_pid2886907 00:34:01.408 Removing: /var/run/dpdk/spdk_pid2887181 00:34:01.408 Removing: /var/run/dpdk/spdk_pid2887581 00:34:01.408 Removing: /var/run/dpdk/spdk_pid2887970 00:34:01.408 Removing: /var/run/dpdk/spdk_pid2888322 00:34:01.408 Removing: /var/run/dpdk/spdk_pid2888551 00:34:01.408 Removing: /var/run/dpdk/spdk_pid2888843 00:34:01.408 Removing: /var/run/dpdk/spdk_pid2890233 00:34:01.408 Removing: /var/run/dpdk/spdk_pid2893968 00:34:01.408 Removing: /var/run/dpdk/spdk_pid2894319 00:34:01.408 Removing: /var/run/dpdk/spdk_pid2894681 00:34:01.408 Removing: /var/run/dpdk/spdk_pid2894957 00:34:01.408 Removing: /var/run/dpdk/spdk_pid2895389 00:34:01.408 Removing: 
/var/run/dpdk/spdk_pid2895420 00:34:01.408 Removing: /var/run/dpdk/spdk_pid2896046 00:34:01.408 Removing: /var/run/dpdk/spdk_pid2896107 00:34:01.408 Removing: /var/run/dpdk/spdk_pid2896469 00:34:01.408 Removing: /var/run/dpdk/spdk_pid2896486 00:34:01.408 Removing: /var/run/dpdk/spdk_pid2896844 00:34:01.408 Removing: /var/run/dpdk/spdk_pid2896852 00:34:01.408 Removing: /var/run/dpdk/spdk_pid2897388 00:34:01.408 Removing: /var/run/dpdk/spdk_pid2897644 00:34:01.408 Removing: /var/run/dpdk/spdk_pid2898039 00:34:01.408 Removing: /var/run/dpdk/spdk_pid2898401 00:34:01.408 Removing: /var/run/dpdk/spdk_pid2898432 00:34:01.408 Removing: /var/run/dpdk/spdk_pid2898514 00:34:01.408 Removing: /var/run/dpdk/spdk_pid2898850 00:34:01.408 Removing: /var/run/dpdk/spdk_pid2899197 00:34:01.408 Removing: /var/run/dpdk/spdk_pid2899548 00:34:01.408 Removing: /var/run/dpdk/spdk_pid2899772 00:34:01.408 Removing: /var/run/dpdk/spdk_pid2899959 00:34:01.408 Removing: /var/run/dpdk/spdk_pid2900285 00:34:01.408 Removing: /var/run/dpdk/spdk_pid2900642 00:34:01.408 Removing: /var/run/dpdk/spdk_pid2900990 00:34:01.408 Removing: /var/run/dpdk/spdk_pid2901198 00:34:01.408 Removing: /var/run/dpdk/spdk_pid2901393 00:34:01.408 Removing: /var/run/dpdk/spdk_pid2901731 00:34:01.408 Removing: /var/run/dpdk/spdk_pid2902083 00:34:01.408 Removing: /var/run/dpdk/spdk_pid2902436 00:34:01.408 Removing: /var/run/dpdk/spdk_pid2902639 00:34:01.408 Removing: /var/run/dpdk/spdk_pid2902832 00:34:01.408 Removing: /var/run/dpdk/spdk_pid2903177 00:34:01.408 Removing: /var/run/dpdk/spdk_pid2903527 00:34:01.408 Removing: /var/run/dpdk/spdk_pid2903879 00:34:01.408 Removing: /var/run/dpdk/spdk_pid2904106 00:34:01.408 Removing: /var/run/dpdk/spdk_pid2904311 00:34:01.408 Removing: /var/run/dpdk/spdk_pid2904655 00:34:01.408 Removing: /var/run/dpdk/spdk_pid2904998 00:34:01.408 Removing: /var/run/dpdk/spdk_pid2909373 00:34:01.408 Removing: /var/run/dpdk/spdk_pid2963020 00:34:01.408 Removing: /var/run/dpdk/spdk_pid2968093 
00:34:01.408 Removing: /var/run/dpdk/spdk_pid2979900 00:34:01.408 Removing: /var/run/dpdk/spdk_pid2986279 00:34:01.408 Removing: /var/run/dpdk/spdk_pid2991129 00:34:01.408 Removing: /var/run/dpdk/spdk_pid2991814 00:34:01.408 Removing: /var/run/dpdk/spdk_pid3006125 00:34:01.408 Removing: /var/run/dpdk/spdk_pid3006216 00:34:01.408 Removing: /var/run/dpdk/spdk_pid3007226 00:34:01.670 Removing: /var/run/dpdk/spdk_pid3008261 00:34:01.670 Removing: /var/run/dpdk/spdk_pid3009310 00:34:01.670 Removing: /var/run/dpdk/spdk_pid3010074 00:34:01.670 Removing: /var/run/dpdk/spdk_pid3010079 00:34:01.670 Removing: /var/run/dpdk/spdk_pid3010406 00:34:01.670 Removing: /var/run/dpdk/spdk_pid3010425 00:34:01.670 Removing: /var/run/dpdk/spdk_pid3010427 00:34:01.670 Removing: /var/run/dpdk/spdk_pid3011434 00:34:01.670 Removing: /var/run/dpdk/spdk_pid3012438 00:34:01.670 Removing: /var/run/dpdk/spdk_pid3013496 00:34:01.670 Removing: /var/run/dpdk/spdk_pid3014125 00:34:01.670 Removing: /var/run/dpdk/spdk_pid3014243 00:34:01.670 Removing: /var/run/dpdk/spdk_pid3014483 00:34:01.670 Removing: /var/run/dpdk/spdk_pid3015889 00:34:01.670 Removing: /var/run/dpdk/spdk_pid3017280 00:34:01.670 Removing: /var/run/dpdk/spdk_pid3027291 00:34:01.670 Removing: /var/run/dpdk/spdk_pid3027642 00:34:01.670 Removing: /var/run/dpdk/spdk_pid3032695 00:34:01.670 Removing: /var/run/dpdk/spdk_pid3039424 00:34:01.670 Removing: /var/run/dpdk/spdk_pid3042852 00:34:01.670 Removing: /var/run/dpdk/spdk_pid3055244 00:34:01.670 Removing: /var/run/dpdk/spdk_pid3065899 00:34:01.670 Removing: /var/run/dpdk/spdk_pid3067915 00:34:01.670 Removing: /var/run/dpdk/spdk_pid3069121 00:34:01.670 Removing: /var/run/dpdk/spdk_pid3089389 00:34:01.670 Removing: /var/run/dpdk/spdk_pid3093868 00:34:01.670 Removing: /var/run/dpdk/spdk_pid3125870 00:34:01.670 Removing: /var/run/dpdk/spdk_pid3130945 00:34:01.670 Removing: /var/run/dpdk/spdk_pid3132937 00:34:01.670 Removing: /var/run/dpdk/spdk_pid3135234 00:34:01.670 Removing: 
/var/run/dpdk/spdk_pid3135305 00:34:01.670 Removing: /var/run/dpdk/spdk_pid3135629 00:34:01.670 Removing: /var/run/dpdk/spdk_pid3135927 00:34:01.670 Removing: /var/run/dpdk/spdk_pid3136495 00:34:01.670 Removing: /var/run/dpdk/spdk_pid3138804 00:34:01.670 Removing: /var/run/dpdk/spdk_pid3140345 00:34:01.670 Removing: /var/run/dpdk/spdk_pid3140862 00:34:01.670 Removing: /var/run/dpdk/spdk_pid3143435 00:34:01.670 Removing: /var/run/dpdk/spdk_pid3144139 00:34:01.670 Removing: /var/run/dpdk/spdk_pid3144900 00:34:01.670 Removing: /var/run/dpdk/spdk_pid3149900 00:34:01.670 Removing: /var/run/dpdk/spdk_pid3160077 00:34:01.670 Removing: /var/run/dpdk/spdk_pid3172990 00:34:01.670 Removing: /var/run/dpdk/spdk_pid3177811 00:34:01.670 Removing: /var/run/dpdk/spdk_pid3185038 00:34:01.670 Removing: /var/run/dpdk/spdk_pid3186524 00:34:01.670 Removing: /var/run/dpdk/spdk_pid3188353 00:34:01.670 Removing: /var/run/dpdk/spdk_pid3194029 00:34:01.670 Removing: /var/run/dpdk/spdk_pid3198746 00:34:01.670 Removing: /var/run/dpdk/spdk_pid3207595 00:34:01.670 Removing: /var/run/dpdk/spdk_pid3207699 00:34:01.670 Removing: /var/run/dpdk/spdk_pid3212548 00:34:01.670 Removing: /var/run/dpdk/spdk_pid3212863 00:34:01.670 Removing: /var/run/dpdk/spdk_pid3213199 00:34:01.670 Removing: /var/run/dpdk/spdk_pid3213560 00:34:01.670 Removing: /var/run/dpdk/spdk_pid3213648 00:34:01.670 Removing: /var/run/dpdk/spdk_pid3219144 00:34:01.670 Removing: /var/run/dpdk/spdk_pid3219741 00:34:01.670 Removing: /var/run/dpdk/spdk_pid3224911 00:34:01.670 Removing: /var/run/dpdk/spdk_pid3228257 00:34:01.933 Removing: /var/run/dpdk/spdk_pid3234633 00:34:01.933 Removing: /var/run/dpdk/spdk_pid3241095 00:34:01.933 Removing: /var/run/dpdk/spdk_pid3251415 00:34:01.933 Removing: /var/run/dpdk/spdk_pid3259953 00:34:01.933 Removing: /var/run/dpdk/spdk_pid3259956 00:34:01.933 Removing: /var/run/dpdk/spdk_pid3282095 00:34:01.933 Removing: /var/run/dpdk/spdk_pid3282897 00:34:01.933 Removing: /var/run/dpdk/spdk_pid3283703 
00:34:01.933 Removing: /var/run/dpdk/spdk_pid3284462 00:34:01.933 Removing: /var/run/dpdk/spdk_pid3285427 00:34:01.933 Removing: /var/run/dpdk/spdk_pid3286178 00:34:01.933 Removing: /var/run/dpdk/spdk_pid3286894 00:34:01.933 Removing: /var/run/dpdk/spdk_pid3287590 00:34:01.933 Removing: /var/run/dpdk/spdk_pid3292634 00:34:01.933 Removing: /var/run/dpdk/spdk_pid3292971 00:34:01.933 Removing: /var/run/dpdk/spdk_pid3300106 00:34:01.933 Removing: /var/run/dpdk/spdk_pid3300385 00:34:01.933 Removing: /var/run/dpdk/spdk_pid3303451 00:34:01.933 Removing: /var/run/dpdk/spdk_pid3310546 00:34:01.933 Removing: /var/run/dpdk/spdk_pid3310551 00:34:01.933 Removing: /var/run/dpdk/spdk_pid3316423 00:34:01.933 Removing: /var/run/dpdk/spdk_pid3318642 00:34:01.933 Removing: /var/run/dpdk/spdk_pid3321133 00:34:01.933 Removing: /var/run/dpdk/spdk_pid3322392 00:34:01.933 Removing: /var/run/dpdk/spdk_pid3324846 00:34:01.933 Removing: /var/run/dpdk/spdk_pid3326289 00:34:01.933 Removing: /var/run/dpdk/spdk_pid3336042 00:34:01.933 Removing: /var/run/dpdk/spdk_pid3336636 00:34:01.933 Removing: /var/run/dpdk/spdk_pid3337298 00:34:01.933 Removing: /var/run/dpdk/spdk_pid3340240 00:34:01.933 Removing: /var/run/dpdk/spdk_pid3340762 00:34:01.933 Removing: /var/run/dpdk/spdk_pid3341258 00:34:01.933 Removing: /var/run/dpdk/spdk_pid3345950 00:34:01.933 Removing: /var/run/dpdk/spdk_pid3346138 00:34:01.933 Removing: /var/run/dpdk/spdk_pid3347821 00:34:01.933 Removing: /var/run/dpdk/spdk_pid3348429 00:34:01.933 Removing: /var/run/dpdk/spdk_pid3348593 00:34:01.933 Clean 00:34:01.933 16:42:28 -- common/autotest_common.sh@1450 -- # return 0 00:34:01.933 16:42:28 -- spdk/autotest.sh@384 -- # timing_exit post_cleanup 00:34:01.933 16:42:28 -- common/autotest_common.sh@729 -- # xtrace_disable 00:34:01.933 16:42:28 -- common/autotest_common.sh@10 -- # set +x 00:34:02.237 16:42:28 -- spdk/autotest.sh@386 -- # timing_exit autotest 00:34:02.237 16:42:28 -- common/autotest_common.sh@729 -- # xtrace_disable 
00:34:02.237 16:42:28 -- common/autotest_common.sh@10 -- # set +x 00:34:02.237 16:42:28 -- spdk/autotest.sh@387 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:34:02.237 16:42:28 -- spdk/autotest.sh@389 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:34:02.237 16:42:28 -- spdk/autotest.sh@389 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:34:02.237 16:42:28 -- spdk/autotest.sh@391 -- # hash lcov 00:34:02.237 16:42:28 -- spdk/autotest.sh@391 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:34:02.237 16:42:28 -- spdk/autotest.sh@393 -- # hostname 00:34:02.237 16:42:28 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-cyp-09 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:34:02.237 geninfo: WARNING: invalid characters removed from testname! 
00:34:28.822 16:42:53 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:34:29.082 16:42:55 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:34:30.992 16:42:57 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:34:32.375 16:42:59 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:34:34.287 16:43:00 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:34:35.670 16:43:02 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:34:37.051 16:43:03 -- spdk/autotest.sh@400 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:34:37.313 16:43:03 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:37.313 16:43:03 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:34:37.313 16:43:03 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:37.313 16:43:03 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:37.313 16:43:03 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:37.313 16:43:03 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:37.313 16:43:03 -- paths/export.sh@4 -- $ 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:37.313 16:43:03 -- paths/export.sh@5 -- $ export PATH 00:34:37.313 16:43:03 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:37.313 16:43:03 -- common/autobuild_common.sh@436 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:34:37.313 16:43:03 -- common/autobuild_common.sh@437 -- $ date +%s 00:34:37.313 16:43:03 -- common/autobuild_common.sh@437 -- $ mktemp -dt spdk_1717771383.XXXXXX 00:34:37.313 16:43:03 -- common/autobuild_common.sh@437 -- $ SPDK_WORKSPACE=/tmp/spdk_1717771383.Yzv7j7 00:34:37.313 16:43:03 -- common/autobuild_common.sh@439 -- $ [[ -n '' ]] 00:34:37.313 16:43:03 -- common/autobuild_common.sh@443 -- $ '[' -n '' ']' 00:34:37.313 16:43:03 -- common/autobuild_common.sh@446 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:34:37.313 16:43:03 -- common/autobuild_common.sh@450 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:34:37.313 16:43:03 -- common/autobuild_common.sh@452 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:34:37.313 16:43:03 -- common/autobuild_common.sh@453 -- $ get_config_params 00:34:37.313 16:43:03 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:34:37.313 16:43:03 -- common/autotest_common.sh@10 -- $ set +x 00:34:37.313 16:43:03 -- common/autobuild_common.sh@453 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:34:37.313 16:43:03 -- common/autobuild_common.sh@455 -- $ start_monitor_resources 00:34:37.313 16:43:03 -- pm/common@17 -- $ local monitor 00:34:37.313 16:43:03 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:34:37.313 16:43:03 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:34:37.313 16:43:03 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:34:37.313 16:43:03 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:34:37.313 16:43:03 -- pm/common@21 -- $ date +%s 00:34:37.313 16:43:03 -- pm/common@25 -- $ sleep 1 00:34:37.313 16:43:03 -- pm/common@21 -- $ date +%s 00:34:37.313 16:43:03 -- pm/common@21 -- $ date +%s 00:34:37.313 16:43:03 -- pm/common@21 -- $ date +%s 00:34:37.313 16:43:03 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1717771383 00:34:37.313 16:43:03 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1717771383 00:34:37.313 16:43:03 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p 
monitor.autopackage.sh.1717771383 00:34:37.313 16:43:03 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1717771383 00:34:37.313 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1717771383_collect-cpu-load.pm.log 00:34:37.313 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1717771383_collect-vmstat.pm.log 00:34:37.313 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1717771383_collect-cpu-temp.pm.log 00:34:37.313 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1717771383_collect-bmc-pm.bmc.pm.log 00:34:38.253 16:43:04 -- common/autobuild_common.sh@456 -- $ trap stop_monitor_resources EXIT 00:34:38.253 16:43:04 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j144 00:34:38.253 16:43:04 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:34:38.253 16:43:04 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:34:38.253 16:43:04 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:34:38.253 16:43:04 -- spdk/autopackage.sh@19 -- $ timing_finish 00:34:38.253 16:43:04 -- common/autotest_common.sh@735 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:34:38.254 16:43:04 -- common/autotest_common.sh@736 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:34:38.254 16:43:04 -- common/autotest_common.sh@738 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:34:38.254 16:43:05 -- spdk/autopackage.sh@20 -- $ exit 0 00:34:38.254 16:43:05 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:34:38.254 16:43:05 -- pm/common@29 -- $ signal_monitor_resources 
TERM 00:34:38.254 16:43:05 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:34:38.254 16:43:05 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:34:38.254 16:43:05 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:34:38.254 16:43:05 -- pm/common@44 -- $ pid=3361457 00:34:38.254 16:43:05 -- pm/common@50 -- $ kill -TERM 3361457 00:34:38.254 16:43:05 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:34:38.254 16:43:05 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:34:38.254 16:43:05 -- pm/common@44 -- $ pid=3361458 00:34:38.254 16:43:05 -- pm/common@50 -- $ kill -TERM 3361458 00:34:38.254 16:43:05 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:34:38.254 16:43:05 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:34:38.254 16:43:05 -- pm/common@44 -- $ pid=3361460 00:34:38.254 16:43:05 -- pm/common@50 -- $ kill -TERM 3361460 00:34:38.254 16:43:05 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:34:38.254 16:43:05 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:34:38.254 16:43:05 -- pm/common@44 -- $ pid=3361486 00:34:38.254 16:43:05 -- pm/common@50 -- $ sudo -E kill -TERM 3361486 00:34:38.254 + [[ -n 2761033 ]] 00:34:38.254 + sudo kill 2761033 00:34:38.263 [Pipeline] } 00:34:38.281 [Pipeline] // stage 00:34:38.286 [Pipeline] } 00:34:38.301 [Pipeline] // timeout 00:34:38.307 [Pipeline] } 00:34:38.325 [Pipeline] // catchError 00:34:38.330 [Pipeline] } 00:34:38.346 [Pipeline] // wrap 00:34:38.351 [Pipeline] } 00:34:38.364 [Pipeline] // catchError 00:34:38.372 [Pipeline] stage 00:34:38.373 [Pipeline] { (Epilogue) 00:34:38.386 [Pipeline] catchError 00:34:38.388 [Pipeline] { 00:34:38.402 [Pipeline] echo 00:34:38.403 Cleanup 
processes 00:34:38.407 [Pipeline] sh 00:34:38.689 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:34:38.689 3361570 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/sdr.cache 00:34:38.689 3362007 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:34:38.703 [Pipeline] sh 00:34:39.041 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:34:39.041 ++ grep -v 'sudo pgrep' 00:34:39.041 ++ awk '{print $1}' 00:34:39.041 + sudo kill -9 3361570 00:34:39.053 [Pipeline] sh 00:34:39.340 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:34:51.584 [Pipeline] sh 00:34:51.873 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:34:51.873 Artifacts sizes are good 00:34:51.889 [Pipeline] archiveArtifacts 00:34:51.929 Archiving artifacts 00:34:52.125 [Pipeline] sh 00:34:52.413 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:34:52.431 [Pipeline] cleanWs 00:34:52.443 [WS-CLEANUP] Deleting project workspace... 00:34:52.443 [WS-CLEANUP] Deferred wipeout is used... 00:34:52.450 [WS-CLEANUP] done 00:34:52.452 [Pipeline] } 00:34:52.475 [Pipeline] // catchError 00:34:52.492 [Pipeline] sh 00:34:52.782 + logger -p user.info -t JENKINS-CI 00:34:52.798 [Pipeline] } 00:34:52.820 [Pipeline] // stage 00:34:52.827 [Pipeline] } 00:34:52.848 [Pipeline] // node 00:34:52.855 [Pipeline] End of Pipeline 00:34:52.893 Finished: SUCCESS